
Apple's new A8X processor dominates the benchmarks

solosdk

8-core sucks balls; big.LITTLE is for OEMs who cannot be bothered to scale power for their cores or make strong, proper cores, and who want to impress dumb people.

 

Applications will use one quad-core cluster or the other, not both.

 

I know, Samsung already has that, and it's amazing in single-core performance.

 

 

oh wait, no it isn't...

I have never seen an LTT forum member as ignorant as this guy; I presume "Captain Slow" means something.


I have never seen an LTT forum member as ignorant as this guy; I presume "Captain Slow" means something.

 

Your presumption means you don't watch Top Gear;

 

only boring people don't watch Top Gear.

If you judge a fish by its ability to climb a tree, it will live its whole life thinking it's stupid.  - Albert Einstein


Lol @ the APPLE HATE!

 

You people disgust me.

 

Apple pushes the industry, and if they have a better processor, well, you know the Android people have to catch up as well. Phones are indeed becoming like computers; before I buy tablets or phones now, I look at the freaking processor. Remember when we didn't give two shits about phone processors?

 

Get with the times, people: faster is better. Don't be like the filthy game devs who claim "resolution is just a number" or that "we don't need 60fps".

[ Cruel Angel ]:  Exterior - BENQ XL2420T | SteelSeries MLG Sensei | Corsair K70 RED | Corsair 900D

                            Interior - i7 4770k @ 4.7GHz (1.425v) | Maximus VI Formula | Corsair Vengeance Pro 16GB @ 2400MHz OC (1.650v) | ASUS GTX 980 Strix SLI x2 @ 1000MHz (1.158v) | 840 Pro 512GB | WD Black 2TB

                            Cooling - XSPC 120mm x7 Total Radiator Space | XSPC RayStorm | PrimoChill Tubing/Res


People are under the very common misconception that the Snapdragon 80x is much faster than the A7, but in reality the A7 has much higher single-core performance despite the lower clock speed, and very similar multi-core performance despite being down two cores. As it stands, the A7 has no competition in single-core performance, and the A8 improves on that. Only the Tegra K1 can match Apple in single-threaded performance, and it's not ready for mobile phones yet. For me, the two most noticeable things when using a new CPU are the improved IPC and efficiency, and Apple delivers both. In short, Apple is able to ship high-performance cores at low power consumption (just like Intel), while Snapdragon needs more cores to match the A7's multi-threaded performance, which in turn requires highly optimized applications. The mobile space right now looks very much like the current situation between Intel and AMD: there's essentially no competition between Snapdragon and the A7/A8.
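
A toy way to see how a lower-clocked chip can still win single-threaded, since throughput is roughly IPC × clock (the IPC figures below are illustrative guesses, not measured values):

```python
# Rough single-thread model: throughput ~ IPC * clock.
# The IPC numbers here are illustrative only, not measurements.
chips = {
    "Apple A7 (Cyclone)":     {"ipc": 3.0, "ghz": 1.3},
    "Snapdragon 800 (Krait)": {"ipc": 1.6, "ghz": 2.3},
}

for name, c in chips.items():
    print(f"{name:24s} ~{c['ipc'] * c['ghz']:.2f} relative units")
# The wider core at 1.3 GHz edges out the narrower core at 2.3 GHz.
```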

Interesting. This might be like last year, where the iPad mini with Retina and the Air had the same clocks but the mini throttled, resulting in lower performance. Maybe the 6 is throttling under heavy loads, whereas the Air 2 has more surface area to dissipate the heat and is able to maintain the higher clocks?

Either that, or the extra cache that comes from having an extra core, or even some minor architecture improvements.

"Unofficially Official" Leading Scientific Research and Development Officer of the Official Star Citizen LTT Conglomerate | Reaper Squad, Idris Captain | 1x Aurora LN


Game developer, AI researcher, Developing the UOLTT mobile apps


G SIX [My Mac Pro G5 CaseMod Thread]


Lol @ the APPLE HATE!

 

You people disgust me.

 

Apple pushes the industry, and if they have a better processor, well, you know the Android people have to catch up as well. Phones are indeed becoming like computers; before I buy tablets or phones now, I look at the freaking processor. Remember when we didn't give two shits about phone processors?

 

Get with the times, people: faster is better. Don't be like the filthy game devs who claim "resolution is just a number" or that "we don't need 60fps".

 

Just curious as to which posts you consider to be Apple hate. Also, all the leading tech companies push the industry; that's basically the result of competition.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


8-core sucks balls; big.LITTLE is for OEMs who cannot be bothered to scale power for their cores or make strong, proper cores, and who want to impress dumb people.

 

Applications will use one quad-core cluster or the other, not both.

I assume you think switchable graphics such as Nvidia Optimus "sucks balls" as well, right? Because it is completely illogical to use cores designed for low power in combination with cores designed for high performance, right?

Even if you think it's for dumb people you still have to realize that it is pretty much impossible to make a single architecture that's best at everything. It's kind of like arguing GPUs are only there to impress dumb people because CPU companies should make their CPU powerful enough to run everything in software.

I honestly have no idea what you mean by "applications will use one quad-core cluster or the other, not both". Do you mean that you think ARM made big.LITTLE expecting apps to use all cores? Of course they won't. We barely have applications on the desktop that use 8 cores. If that's what you mean, then you have missed the point of big.LITTLE.

I know, Samsung already has that, and it's amazing in single-core performance.

 

 

oh wait, no it isn't...

It's running in ARMv7 mode and we don't know if it's thermal throttling. Don't try to make it sound like the Cortex A57 is bad at IPC.
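
To make the point about big.LITTLE concrete, here's a rough sketch in Python of the kind of decision a cluster-migration scheduler makes; the thresholds and power figures are made up for illustration and aren't from any vendor's code. Nothing in it requires an app to spread itself across all eight cores.

```python
# Toy model of a big.LITTLE scheduling decision (cluster-migration style).
# Thresholds and power figures are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    active_power_w: float  # rough per-core active power (made up)

LITTLE = Cluster("Cortex-A53 (LITTLE)", active_power_w=0.15)
BIG    = Cluster("Cortex-A57 (big)",    active_power_w=0.75)

def pick_cluster(cpu_load: float, latency_sensitive: bool) -> Cluster:
    """Keep light background work on LITTLE; migrate heavy or
    latency-critical threads to big. The app never has to know."""
    if latency_sensitive or cpu_load > 0.6:  # made-up migration threshold
        return BIG
    return LITTLE

# Example: a background audio thread vs. a page-rendering burst.
for load, sensitive, label in [(0.1, False, "audio decode"),
                               (0.9, True,  "web page render")]:
    c = pick_cluster(load, sensitive)
    print(f"{label:16s} -> {c.name} (~{c.active_power_w} W/core)")
```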

When I likened them to toys, I guess I was talking about not just software limitations but also limits in the hardware. I'd love to build my own tablet, but I think it'll be a long time before that's possible :/

Also, as far as capabilities go with all those Apple apps you mentioned, whether someone can is not the question one ought to ask; whether people will is what needs to be known. It makes a lot of sense to me to do sketching and other drawings on an iPad over a PC or other desktop, but it seems kind of strange to want to edit/render those on the same device as a finished product. >:

 

The way I see it, many people are getting tablets with LTE and 3G to go on holiday or travel with. Good battery life, and it does all the basics a laptop does.

Now just get an SD card reader and someone can make little trip videos, do sketches, and edit photos on the go, all on a compact device that's smaller and cheaper than most ultrabooks.

 

I do agree that personally I would also want to do all the editing, rendering, and transcoding on a desktop, or even a good laptop; but it cuts down on what a person needs and can do the job for modest work, as mentioned.

 

No one will be editing a TV show, advert or movie on them any time soon, but for hobbyist little things it can be nice.

 

iPad Air + GoPro and you have a nice little setup for basic things.

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


I assume you think switchable graphics such as Nvidia Optimus "sucks balls" as well, right? Because it is completely illogical to use cores designed for low power in combination with cores designed for high performance, right?

Even if you think it's for dumb people you still have to realize that it is pretty much impossible to make a single architecture that's best at everything. It's kind of like arguing GPUs are only there to impress dumb people because CPU companies should make their CPU powerful enough to run everything in software.

I honestly have no idea what you mean by "applications will use one quad-core cluster or the other, not both". Do you mean that you think ARM made big.LITTLE expecting apps to use all cores? Of course they won't. We barely have applications on the desktop that use 8 cores. If that's what you mean, then you have missed the point of big.LITTLE.

It's running in ARMv7 mode and we don't know if it's thermal throttling. Don't try to make it sound like the Cortex A57 is bad at IPC.

 

LOL, Optimus is nothing like big.LITTLE; here is what the big.LITTLE philosophy would look like if Nvidia implemented it.

 

You have one GPU with 2000 CUDA cores that consumes 200W under load, but it's powerful.

Then Nvidia adds a separate die on the same PCB, a second GPU, this time with 200 CUDA cores, which isn't powerful but only consumes 5W.

The GPUs cannot be used in SLI because there is too much of a performance difference.

 

Nvidia advertises the GPU as something like "2200 CUDA cores and only 5W of power consumption!"

 

 

That's big.LITTLE

If you judge a fish by its ability to climb a tree, it will live its whole life thinking it's stupid.  - Albert Einstein


LOL, Optimus is nothing like big.LITTLE; here is what the big.LITTLE philosophy would look like if Nvidia implemented it.

 

You have one GPU with 2000 CUDA cores that consumes 200W under load, but it's powerful.

Then Nvidia adds a separate die on the same PCB, a second GPU, this time with 200 CUDA cores, which isn't powerful but only consumes 5W.

The GPUs cannot be used in SLI because there is too much of a performance difference.

 

Nvidia advertises the GPU as something like "2200 CUDA cores and only 5W of power consumption!"

 

That's big.LITTLE

big.LITTLE is just like Optimus, but for CPUs, and it can actually use both at once.

I have absolutely no idea why you're bringing advertising into the debate. The performance difference is not the reason why they can't be used in tandem, by the way. You just have to look at Hybrid CrossFire to realize that. It doesn't work flawlessly, but it does work.

 

Here is how I would describe big.LITTLE:

"Hey, isn't it wasteful that we have to power this big and power hungry core with a lot of different stages in the pipeline when we're just going to process this very simple data? Why not put a really efficient and low power core to handle that, and also put in a really powerful core for the more demanding stuff? That way we get a very big dynamic power range."

 

There are zero drawbacks to big.LITTLE, and a huge benefit (a bigger dynamic power and performance range).

If you think it is bad then you have clearly not done your homework.


big.LITTLE is just like Optimus, but for CPUs, and it can actually use both at once.

I have absolutely no idea why you're bringing advertising into the debate. The performance difference is not the reason why they can't be used in tandem, by the way. You just have to look at Hybrid CrossFire to realize that. It doesn't work flawlessly, but it does work.

 

Here is how I would describe big.LITTLE:

"Hey, isn't it wasteful that we have to power this big and power hungry core with a lot of different stages in the pipeline when we're just going to process this very simple data? Why not put a really efficient and low power core to handle that, and also put in a really powerful core for the more demanding stuff? That way we get a very big dynamic power range."

 

There are zero drawbacks to big.LITTLE, and a huge benefit (a bigger dynamic power and performance range).

If you think it is bad then you have clearly not done your homework.

 

Or you could just have two cores that scale properly and have proper single-threaded performance.

 

If you honestly think there are zero drawbacks to big.LITTLE... I have nothing to say to you anymore; I'm done trying to educate you.

If you judge a fish by its ability to climb a tree, it will live its whole life thinking it's stupid.  - Albert Einstein


I wonder how good a stock quad-core A57 is in a big.LITTLE config.

 

Also, triple core :)

Ahh, the days of triple-core AMD CPUs. Good times.

People used to unlock the 4th core since it wasn't fused off :P

So it wasn't a myth that they had an extra core on the chip.

  ﷲ   Muslim Member  ﷲ

KennyS and ScreaM are my role models in CSGO.

CPU: i3-4130 Motherboard: Gigabyte H81M-S2PH RAM: 8GB Kingston hyperx fury HDD: WD caviar black 1TB GPU: MSI 750TI twin frozr II Case: Aerocool Xpredator X3 PSU: Corsair RM650


I will wait for Denver and see, I'm counting on the Nexus 9 to destroy it.

  ﷲ   Muslim Member  ﷲ

KennyS and ScreaM are my role models in CSGO.

CPU: i3-4130 Motherboard: Gigabyte H81M-S2PH RAM: 8GB Kingston hyperx fury HDD: WD caviar black 1TB GPU: MSI 750TI twin frozr II Case: Aerocool Xpredator X3 PSU: Corsair RM650


Or you could just have two cores that scale properly and have proper single-threaded performance.

 

If you honestly think there are zero drawbacks to big.LITTLE... I have nothing to say to you anymore; I'm done trying to educate you.

No, tell me what the drawbacks of big.LITTLE are. You can't just go "lol you're dumb so I am leaving". I really think that you have missed the point of big.LITTLE, and so far it seems like I am the one educating you, not the other way around.

Sure, you can make a big core that scales decently at lower frequencies, but it still won't be as efficient as a core specifically designed to be efficient. What's next? Are you going to argue against hardware-accelerated H.264 decoding because "they should just make the CPU efficient enough to do it in software"?

 

Why have a big, out-of-order core with a 15+ stage pipeline running at, let's say, 200MHz when a really simple, small, in-order core at 200MHz could do the same thing with far less power?
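
As a back-of-the-envelope illustration of that point (the wattages below are invented just to show the shape of the argument, not measured figures):

```python
# Same light task, same 200 MHz clock, same completion time on both cores.
task_seconds = 2.0

big_core_power_w    = 0.60  # wide out-of-order core idling along at 200 MHz (made up)
little_core_power_w = 0.08  # small in-order core at the same clock (made up)

big_energy_j    = big_core_power_w * task_seconds
little_energy_j = little_core_power_w * task_seconds

print(f"big core:     {big_energy_j:.2f} J")
print(f"LITTLE core:  {little_energy_j:.2f} J")
print(f"energy saved: {(1 - little_energy_j / big_energy_j) * 100:.0f}%")
```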


So it wasn't a myth that they had an extra core on the chip.

You see, what happened was AMD sold lower-quality quad-core dies as triple-core CPUs.

Some were very unstable if you unlocked the fourth core, but there was a lot of demand since it was cheap,

so AMD pretty much had to sell quad cores as triple-core CPUs, which made tinkerers happy :)

If your grave doesn't say "rest in peace" on it You are automatically drafted into the skeleton war.


No, tell me what the drawbacks of big.LITTLE are. You can't just go "lol you're dumb so I am leaving". I really think that you have missed the point of big.LITTLE, and so far it seems like I am the one educating you, not the other way around.

Sure, you can make a big core that scales decently at lower frequencies, but it still won't be as efficient as a core specifically designed to be efficient. What's next? Are you going to argue against hardware-accelerated H.264 decoding because "they should just make the CPU efficient enough to do it in software"?

 

Why have a big, out-of-order core with a 15+ stage pipeline running at, let's say, 200MHz when a really simple, small, in-order core at 200MHz could do the same thing with far less power?

Had this talk the other day with my bro; he still doesn't get it :P

 

I think the only disadvantage would be the extra die space needed.

If your grave doesn't say "rest in peace" on it You are automatically drafted into the skeleton war.


Had this talk the other day with my bro; he still doesn't get it :P

 

I think the only disadvantage would be the extra die space needed.

Yeah it does take up a bit more, but far from as much as you might think.

I don't have any stats for A53 + A57, but here is a quad core A15 + quad core A7:

[Die shot: quad-core Cortex-A15 cluster alongside a quad-core Cortex-A7 cluster]

 

If you want numbers instead of visuals, the A7 part is 3.8mm² and the A15 part is 19mm². That's on Samsung's 28nm HKMG process, though; the Exynos 7 Octa and Snapdragon 810 will use 20nm. It would be interesting to see a die shot of the Exynos 7 Octa so we can get some numbers on how big A53 and A57 are.

 

So yeah... one quad-core LITTLE cluster takes up about as much space as a single big core. I think it's pretty obvious that there are great power advantages to be had by running a light program on a single LITTLE core compared to a single big core, no matter how finely tuned the big core is for low power consumption.
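
Reading those 28nm figures as the areas of the whole quad clusters (which is how the "one quad-core LITTLE ≈ one big core" claim works out), the arithmetic is simple:

```python
# Area figures from the post above, treated as quad-cluster totals (28nm HKMG).
quad_a7_cluster_mm2  = 3.8
quad_a15_cluster_mm2 = 19.0

single_a15_core_mm2 = quad_a15_cluster_mm2 / 4     # ~4.75 mm^2 per big core
ratio = quad_a7_cluster_mm2 / single_a15_core_mm2  # ~0.8

print(f"one A15 core:          ~{single_a15_core_mm2:.2f} mm^2")
print(f"whole quad-A7 cluster: {quad_a7_cluster_mm2:.1f} mm^2 ({ratio:.0%} of one big core)")
```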


I will wait for Denver and see, I'm counting on the Nexus 9 to destroy it.

It already does; I am presuming this is Denver.

 

Single Core

  • A8X = 1812
  • Denver = 1903

Multiple Core

  • A8X @ 3 Cores = 4477
  • Denver @ 2 Cores = 3166

Obvious multi-core win for the A8X, given the extra core.
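
Back-of-the-envelope, dividing those multi-core scores by core count (rough arithmetic only; Geekbench doesn't scale perfectly linearly) puts the two roughly level per core:

```python
# Geekbench multi-core scores quoted above, divided by core count.
a8x_multi,    a8x_cores    = 4477, 3
denver_multi, denver_cores = 3166, 2

print(f"A8X    per core: {a8x_multi / a8x_cores:.0f}")       # ~1492
print(f"Denver per core: {denver_multi / denver_cores:.0f}")  # 1583
```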


This is what I told people a week ago, and nobody believed me... Also, that multi-core score is insane!

Owner of a top of the line 13" MacBook Pro with Retina Display (Dual Boot OS X El Capitan & Win 10):
Core i7-4558U @ 3.2GHz II Intel Iris @ 1200MHz II 1TB Apple/Samsung SSD II 16 GB RAM @ 1600MHz


It already does; I am presuming this is Denver.

 

Single Core

  • A8X = 1812
  • Denver = 1903

Multiple Core

  • A8X @ 3 Cores = 4477
  • Denver @ 2 Cores = 3166

Obvious multi-core win for the A8X, given the extra core.

 

So what do you mean by "it already does"? It is slightly faster than the A8X in single-core, but loses completely in multi-core. It is in no way 'destroying it'.

Owner of a top of the line 13" MacBook Pro with Retina Display (Dual Boot OS X El Capitan & Win 10):
Core i7-4558U @ 3.2GHz II Intel Iris @ 1200MHz II 1TB Apple/Samsung SSD II 16 GB RAM @ 1600MHz


So what do you mean by "it already does"? It is slightly faster than the A8X in single-core, but loses completely in multi-core. It is in no way 'destroying it'.

 

Exactly. When benchmark results are within 10-20%, people should really refrain from using aggressive adjectives. It just makes them look like they are trying too hard to prove a point that has little effect in real life.
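
For reference, plugging the scores quoted above into simple percentage arithmetic:

```python
# Percentage gaps between the Geekbench scores quoted earlier in the thread.
a8x_single, denver_single = 1812, 1903
a8x_multi,  denver_multi  = 4477, 3166

single_gap = (denver_single - a8x_single) / a8x_single * 100  # Denver ahead
multi_gap  = (a8x_multi - denver_multi) / denver_multi * 100  # A8X ahead

print(f"single-core: Denver ahead by ~{single_gap:.0f}%")  # ~5%
print(f"multi-core:  A8X ahead by ~{multi_gap:.0f}%")      # ~41%
```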

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


wow i care so much

FOR REAL THO RIGHT...

4690K // 212 EVO // Z97-PRO // Vengeance 16GB // GTX 770 GTX 970 // MX100 128GB // Toshiba 1TB // Air 540 // HX650

Logitech G502 RGB // Corsair K65 RGB (MX Red)

