
100% CPU Usage!!! HELP



IMDBestFckDRest
12-08-2014, 07:30 PM
I know Assassin's Creed Unity is so poorly optimized that not even a supercomputer can get a stable 60 FPS, lol. Well, I have a two-year-old rig, but I upgraded the GPU to a GTX 970 last week, so I can max Unity @ 1080p and mostly get a stable 50-60 FPS, though as with everyone it often drops to 40-45, even in the same places where I otherwise get a stable 55-60. Now, the thing is, when I check my CPU usage while playing Unity it's mostly 90-100% (all 4 cores). Is my CPU the bottleneck here? Or does the game really demand 100% usage no matter what CPU you throw at it?

Specs:

i5 2500K OC'd to 4.6GHz
8GB G.Skill Sniper RAM
P8 Z77 VPRO
Seasonic SII 620W Bronze
GTX 970 Amp Extreme Edition (Stock frequency)

Wrath2Zero
12-08-2014, 11:39 PM
What's your GPU utilization percentage?

RVSage
12-09-2014, 12:36 AM
Same happens to me.
Both my CPU and GPU are at 100%.
i5-4960K
GTX 970 SC

It's almost like stress testing my system...

Anyone who plays the game can see this if they monitor it.
Nothing new.

EstrayOne
12-09-2014, 12:40 AM
Same.
FX 8350 (90% usage on all 8 cores)
MSI GTX 970 (1466/7800)
8GB 1333MHz RAM

Getting around 35 FPS (e.g. at Notre Dame) up to a stable 60 (with V-Sync).

RVSage
12-09-2014, 12:43 AM
Summary: it's badly optimized. This is what the devs called a "jammed instruction queue". Maybe after patch 4 it won't be at 100%, I hope so...

YazX_
12-09-2014, 12:47 AM
Well, if you ask me that's a great thing: it means the game is scaling well and utilizing the full potential of the 4 cores. For me I see 50% usage since I have an i7. Anyway, just monitor your CPU temps since you have it OC'd, but note this won't be a problem or something to worry about even if it reaches its thermal limit, since it will throttle down.



Summary: it's badly optimized. This is what the devs called a "jammed instruction queue". Maybe after patch 4 it won't be at 100%, I hope so...

Sorry, but I fail to understand how it's badly optimized IN TERMS of CPU scaling and utilization when the CPU is fully utilized?!

RVSage
12-09-2014, 12:49 AM
Well, if you ask me that's a great thing: it means the game is scaling well and utilizing the full potential of the 4 cores. For me I see 50% usage since I have an i7. Anyway, just monitor your CPU temps since you have it OC'd, but note this won't be a problem or something to worry about even if it reaches its thermal limit, since it will throttle down.

I partially agree... but not 100% on all cores. That's what happens to me. I would say around 80% on all cores is highly optimal, not 100%.

Wrath2Zero
12-09-2014, 02:14 AM
It's not really supposed to be 100% on all cores; in fact, APIs like Mantle and NVIDIA's shader cache reduce CPU usage. If your GPU is being utilised at 95-99% all the time then there is no bottleneck. My FX 8350 is at about 40-50% on all 8 cores, with 99% GPU usage. If you get low GPU usage, then your CPU is bottlenecking your GPU.

RVSage
12-09-2014, 02:59 AM
It's not really supposed to be 100% on all cores; in fact, APIs like Mantle and NVIDIA's shader cache reduce CPU usage. If your GPU is being utilised at 95-99% all the time then there is no bottleneck. My FX 8350 is at about 40-50% on all 8 cores, with 99% GPU usage. If you get low GPU usage, then your CPU is bottlenecking your GPU.

Are you sure? You say 40-50%... while he says 90%... Confused.

Same.
FX 8350 (90% usage on all 8 cores)
MSI GTX 970 (1466/7800)
8GB 1333MHz RAM

Getting around 35 FPS (e.g. at Notre Dame) up to a stable 60 (with V-Sync).

Wrath2Zero
12-09-2014, 03:04 AM
I can't really speak for his setup, but for my setup this screenshot says it all in this area.

http://i.imgur.com/GwRAEcA.jpg

As long as my GPU is being used to its fullest, it's not really a problem, so I'm GPU bound. If GPU usage dropped to the 70s/80s I'd be worried; then I'd be CPU bound, which would cause the bottleneck.
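
If anyone wants to check that rule of thumb on their own rig, here is a rough Python sketch (my own, assuming psutil is installed and nvidia-smi is on the PATH; the 95% thresholds are just my rule of thumb, not anything official):

```python
# Rough bottleneck check: poll per-core CPU usage (psutil) and GPU usage
# (nvidia-smi). Assumes an NVIDIA card with nvidia-smi on the PATH.
import subprocess
import psutil

def gpu_utilization() -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    return int(out.decode().strip().splitlines()[0])

for _ in range(10):
    cpu = psutil.cpu_percent(interval=1.0, percpu=True)  # per-core %
    gpu = gpu_utilization()
    verdict = ("GPU bound (fine)" if gpu >= 95
               else "likely CPU bound" if max(cpu) >= 95
               else "no obvious bottleneck")
    print(f"CPU cores: {cpu} | GPU: {gpu}% -> {verdict}")
```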

RVSage
12-09-2014, 04:54 AM
Here is a video I recorded today, with a small fight at the infamous Notre Dame. This is Win 8.1 Pro with zero bloatware, a ground-up installation by me:
https://www.youtube.com/watch?v=EO2_PQjDULM&feature=youtu.be

In many places it's a consistent 100% on all 4 cores,
with 100% GPU usage.

Wrath2Zero
12-09-2014, 05:17 AM
Wow, yeah, I see; the 100% CPU usage is causing you really bad stutters. Can you overclock higher or try stock? Also, is NVIDIA shader cache on or off in your NVIDIA Control Panel settings?

RVSage
12-09-2014, 05:51 AM
Wow, yeah, I see; the 100% CPU usage is causing you really bad stutters. Can you overclock higher or try stock? Also, is NVIDIA shader cache on or off in your NVIDIA Control Panel settings?

Actually, while playing I had zero stutters; perhaps it's a video encoding issue on YouTube. I have tried both stock and overclocked, same result. Shader cache is on.

Wrath2Zero
12-09-2014, 06:28 AM
Well, I wouldn't worry then if it's not stuttering. Performance is what it is because they optimised the game better for Intel and NVIDIA to get past the silly number of draw calls breaking DX11. The game is broken regardless, because they decided to throw 50,000 draw calls at DirectX; Intel CPUs can cope with the excessive draw calls better, but it's still considered broken. Basically the graphics API is the bottleneck: you get all sorts of problems like glitching AI and popping NPCs and objects.

RVSage
12-09-2014, 06:48 AM
Yup, if patch 4 is the performance fixer, it should not just improve FPS but also reduce CPU utilization. It's basically the queue jamming that is causing 100% CPU utilization. Maybe your lower FPS actually explains your CPU utilization, but I can't say for sure.

Wrath2Zero
12-09-2014, 08:28 AM
It seems it's badly optimised for AMD CPUs and GPUs, end of story; they even get beaten by i3s and dual-core Intel CPUs, unlike Dragon Age: Inquisition, where the difference between an AMD FX-8350 and an i7-5960X is 4 FPS. That engine can multi-thread properly.

small2assassins
12-09-2014, 10:08 AM
I wish it loaded the CPU to 100%; it loads my CPU to 40-50% and the GPU is basically sleeping at around 20-30%, the overclock doesn't even kick in, it's that low. I don't know what's so special about this game; Far Cry 4 looks gorgeous and I ran it at high to ultra settings at a constant 30-40 FPS.

IMDBestFckDRest
12-09-2014, 10:23 AM
Well, if you ask me that's a great thing: it means the game is scaling well and utilizing the full potential of the 4 cores. For me I see 50% usage since I have an i7. Anyway, just monitor your CPU temps since you have it OC'd, but note this won't be a problem or something to worry about even if it reaches its thermal limit, since it will throttle down.

Even after 5-6 hours of playing with CPU usage at 90-100%, max temps were 65°C.

Anykeyer
12-09-2014, 11:39 AM
Well, I wouldn't worry then if it's not stuttering. Performance is what it is because they optimised the game better for Intel and NVIDIA to get past the silly number of draw calls breaking DX11. The game is broken regardless, because they decided to throw 50,000 draw calls at DirectX; Intel CPUs can cope with the excessive draw calls better, but it's still considered broken. Basically the graphics API is the bottleneck: you get all sorts of problems like glitching AI and popping NPCs and objects.
What do you propose then? Cut this game down to nothing because someone said DX can't handle 50K draw calls? It can, provided you have decent hardware. They're simply pushing the limits here.


It seems it's badly optimised for AMD CPUs and GPUs, end of story; they even get beaten by i3s and dual-core Intel CPUs.
AMD released new drivers with a huge FPS boost in Unity, proving it wasn't Ubisoft's fault after all.
As for AMD CPUs, there isn't much programmers can do for them; they are just bad. Unity's engine itself actually scales well across at least 16 CPU threads (tested on an Intel i7 Extreme), but then there is the API, and after that the driver; both can have problems using multiple cores.


unlike Dragon Age: Inquisition, where the difference between an AMD FX-8350 and an i7-5960X is 4 FPS. That engine can multi-thread properly.
Now THAT is bad optimisation. With those visuals (simple, bad, severely outdated, pick your word) DAI should be limited by the CPU on every system with a powerful GPU. Yet it doesn't scale well between the FX-8350 and the 5960X, which is about twice as powerful.

YazX_
12-09-2014, 12:32 PM
Even after 5-6 hours of playing with CPU usage at 90-100%, max temps were 65°C.

Is that 65 the core temp or the whole-package CPU temp? If it's the core temp, that's absolutely fine; even if it hits 90 it's still considered safe, though not recommended, since TjMax is ~100 (talking about Intel Sandy Bridge here; Ivy Bridge is more like 103). The sweet spot is 65-75 for core temps at 100% utilization over hours.
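
If you want to script that sanity check, here is a tiny sketch (the thresholds mirror what I said above; TJ_MAX = 100 is an assumption for Sandy Bridge, so check your exact model):

```python
# Classify a core-temperature reading against rough Sandy Bridge limits.
# TJ_MAX is an assumption (~100 C for Sandy Bridge); check your exact model.
TJ_MAX = 100

def classify_core_temp(temp_c: float) -> str:
    if temp_c >= TJ_MAX:
        return "at TjMax: the CPU will throttle"
    if temp_c >= 90:
        return "still safe, but not recommended for long sessions"
    if 65 <= temp_c <= 75:
        return "sweet spot for sustained 100% load"
    return "fine"

print(classify_core_temp(65))  # -> sweet spot for sustained 100% load
```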

IMDBestFckDRest
12-09-2014, 01:10 PM
Is that 65 the core temp or the whole-package CPU temp? If it's the core temp, that's absolutely fine; even if it hits 90 it's still considered safe, though not recommended, since TjMax is ~100 (talking about Intel Sandy Bridge here; Ivy Bridge is more like 103). The sweet spot is 65-75 for core temps at 100% utilization over hours.

Ehhh, now I don't know about that. Using HWMonitor, the min temp it shows is 30°C and the max 65°C while gaming 5-6 hours of Unity at 100% usage.

YazX_
12-09-2014, 02:52 PM
Ehhh, now I don't know about that. Using HWMonitor, the min temp it shows is 30°C and the max 65°C while gaming 5-6 hours of Unity at 100% usage.

It has been a while since I used HWMonitor, but as far as I remember you can expand the CPU node to see each core; check each core's temp, as that is the most important.

EstrayOne
12-09-2014, 03:18 PM
I fail to understand how you conclude that a game that uses all available cores is badly optimized...
I've run this game fine since launch; the only issue I had was the ACU.exe crash when online, but patch 3 fixed it.
I'm playing hour-long co-op sessions with a friend and having a total blast.

Seriously, I really can't understand how people expect this game to run on lower-end hardware... [Edit] (seriously, the minimum is a GTX 680, people...) Just look at the graphics, the AI behaviour and those MASSIVE crowds.
I do think the areas around Notre Dame and the Café Théâtre could gain some FPS (I'm getting 30-40 with NVIDIA V-Sync and triple buffering), but other than that the game runs smooth as butter.

BladeUK606
12-09-2014, 06:30 PM
What do you propose then? Cut this game down to nothing because someone said DX can't handle 50K draw calls? It can, provided you have decent hardware. They're simply pushing the limits here.

You might want to read more about DirectX 11 and the way it works: it cannot handle anywhere close to 50k draw calls, but consoles can, so this game is poorly optimized for DirectX. In fact, AMD claim it can handle 10k draw calls max, where the reality is closer to 5k, stretching to 10k thanks to multi-threaded display lists.

It is nothing to do with the hardware; the hardware could handle that easily. It's the API behind it that cannot, which is why DirectX 12 is a huge step forward for PC gamers, as it will handle 7.5x the number of draw calls that DX11 can.

http://www.bit-tech.net/hardware/graphics/2011/03/16/farewell-to-directx/2

That article also explains the difference between NVIDIA and AMD hardware and why some programs work better on one than the other; you may want to read it.
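
To put rough numbers on the overhead argument, here's a back-of-the-envelope sketch (the 2 µs cost per draw call is an assumed figure for illustration, not a measured one; real costs vary with the driver and state changes):

```python
# Back-of-the-envelope: how draw-call submission cost eats the frame budget.
# COST_PER_CALL_US is an assumption for illustration, not a measured figure.
COST_PER_CALL_US = 2.0        # assumed CPU cost per DX11 draw call (microseconds)
FRAME_BUDGET_MS = 1000 / 60   # ~16.7 ms per frame at 60 FPS

for calls in (5_000, 10_000, 50_000):
    submit_ms = calls * COST_PER_CALL_US / 1000
    fps_ceiling = 1000 / submit_ms
    print(f"{calls:>6} calls: {submit_ms:5.1f} ms of CPU submission "
          f"({submit_ms / FRAME_BUDGET_MS:.1f}x the 60 FPS budget, "
          f"~{fps_ceiling:.0f} FPS ceiling)")
```

Under those assumptions, 50,000 calls alone would eat ~100 ms of CPU time per frame, which is why batching calls down (or a lower-overhead API) matters so much.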

Anykeyer
12-09-2014, 07:00 PM
You may read whatever marketing claims you like, all day long.
It's pretty obvious to anyone with a bit of their own brains that DX11 handles those 50k+ draw calls Unity feeds it.
Of course I don't deny there is room for improvement.

That link is funny, btw. They seem to be completely oblivious to the fact that the Xbox runs on a Windows kernel and uses DirectX.

BladeUK606
12-09-2014, 09:39 PM
You may read whatever marketing claims you like, all day long.
It's pretty obvious to anyone with a bit of their own brains that DX11 handles those 50k+ draw calls Unity feeds it.
Of course I don't deny there is room for improvement.

That link is funny, btw. They seem to be completely oblivious to the fact that the Xbox runs on a Windows kernel and uses DirectX.

It's not a marketing claim at all! The power of the PC removes the overhead, meaning it might seem like it handles the draw calls, but it doesn't handle them all at once, meaning consoles are better at handling draw calls than PCs are. The efficiency of modern CPUs, especially Intel's, means it isn't an issue until more load is put on the CPU for other tasks. This is where Unity falls down, as it is CPU-intensive anyway (AI for every character, etc.) and the draw calls cannot be handled quickly enough.

Do you even know what a draw call is? To make it clear, it is when the CPU asks the GPU to draw something; the more draw calls you have, the more CPU-intensive it gets, which is the whole reason for this thread: the CPU usage, which limits the FPS. It's a major issue a lot of people are having, on hardware that is way more powerful than the consoles, even the next-gen consoles, which shouldn't happen.

Also, everyone knows the Xbox runs on a Windows kernel and uses a MODIFIED DirectX. The whole reason for DirectX 12 is to produce "console-level efficiency", as said by Microsoft themselves: https://channel9.msdn.com/Events/Build/2014/3-564

Here are a few more links, as you seem too stubborn to see a bigger picture than "I get 30 FPS so I'm OK" (just to be clear, my 8350 at 4.4GHz and GTX 780 can run the game with decent performance, so I am not bashing the game or Ubisoft in any way; it's just that PC gamers get a raw deal, as our machines are capable of soooo much more. Do you get ten times the graphics and performance compared to a console? No, but you should, as most machines are ten times as powerful!)

http://www.gamasutra.com/view/news/123987/AMD_DirectX_Holding_Back_Graphics_Performance_On_PC.php
http://gamingbolt.com/directx-12-and-mantle-have-draw-call-performance-of-7-5-times-of-directx-11

Wrath2Zero
12-10-2014, 06:03 AM
What do you propose then? Cut this game down to nothing because someone said DX can't handle 50K draw calls? It can, provided you have decent hardware. They're simply pushing the limits here.

AMD released new drivers with a huge FPS boost in Unity, proving it wasn't Ubisoft's fault after all.
As for AMD CPUs, there isn't much programmers can do for them; they are just bad. Unity's engine itself actually scales well across at least 16 CPU threads (tested on an Intel i7 Extreme), but then there is the API, and after that the driver; both can have problems using multiple cores.


Now THAT is bad optimisation. With those visuals (simple, bad, severely outdated, pick your word) DAI should be limited by the CPU on every system with a powerful GPU. Yet it doesn't scale well between the FX-8350 and the 5960X, which is about twice as powerful.


Where are you getting your information from? It's a known fact that DX11 has a peak limit of around 9,000 draw calls; Ubisoft know this, their engineers even said it. No matter what, the CPU is still waiting on draw calls going through the API, but the API cannot hand them off fast enough to be processed. The driver doesn't fix anything in that regard. Users see the numbers, the frame rate looks OK, so it must be fine and not broken; you don't see that the lower-level code or the API is broken, because it's not exposed to you, you only see the top level on the screen.

You can see the benchmarks yourself; Unity is [Removed]. The game even works on Intel dual-cores, which makes no sense at all considering the minimum is a quad-core, but if you look at DA: Inquisition, a simulated dual-core gets 15 FPS while a simulated quad-core gets more than double that. It's purely single-thread-optimised code; that's why Unity runs well on Intel and cr*p on AMD hardware, and even the consoles run it badly because it's so single-threaded.

Ubisoft are ranting about AMD's single-thread performance because they can't optimise like DICE and Frostbite. It's funny, isn't it: AMD got a driver out that improved FPS on AMD hardware, only because AMD helped and they worked together, while Ubisoft released a game broken at the API level on PC even though Mantle was available to them, just like it was to DICE or anyone else; even a frigging indie team made a patch for Sniper Elite 3.

All 4 Intel cores are maxing out; is that because of the draw calls being processed, so it has to max out? A CPU doesn't usually work that way, since Mantle actually uses less CPU yet runs better, the same as NVIDIA's shader cache, which takes load off the CPU. Also, reducing draw calls like Ubisoft claimed it was going to do would take load off the CPU, not load it up like we're seeing.

Anykeyer
12-10-2014, 09:44 AM
It's not a marketing claim at all! The power of the PC removes the overhead, meaning it might seem like it handles the draw calls, but it doesn't handle them all at once,

Power removes the overhead?
Please stop.


it doesn't handle them all at once,
It NEVER handles them all at once; that's why they are "calls", not "call". FFS, learn how APIs work, what batches are and what calls are, and stop embarrassing yourself.


Where are you getting your information from? It's a known fact that DX11 has a peak limit of around 9,000 draw calls

Known fact? Interesting. You should revise your knowledge then, as a real-life example proves it wrong. If you are so brainwashed that you believe marketing claims over what you can see, that's your problem.
DX may not be as efficient as Mantle at handling calls, but there are no hard limits; all those numbers are just made-up BS.


You can see the benchmarks yourself; Unity is [Removed]. The game even works on Intel dual-cores, which makes no sense at all considering the minimum is a quad-core, but if you look at DA: Inquisition, a simulated dual-core gets 15 FPS while a simulated quad-core gets more than double that
Sorry, what? If that's true, it means Unity is much more optimised, not the other way around.


even though Mantle was available to them
EA gets money for implementing Mantle. Vendor-specific APIs should not exist.

Wrath2Zero
12-10-2014, 09:57 AM
Might want to read this then.


The game (in its current state) is issuing approximately 50,000 draw calls on the DirectX 11 API. Problem is, DX11 is only equipped to handle ~10,000 peak draw calls. What happens after that is a severe bottleneck, with most draw calls culled or incorrectly rendered, resulting in textures/NPCs popping all over the place. On the other hand, consoles have to-the-metal access and almost non-existent API overhead, but significantly underpowered hardware which is not able to cope with the stress of the multitude of polygons. Simply put, it's a very, very bad port for the PC platform and an unoptimized (some would even go as far as saying unfinished) title on the consoles.


DX11 is designed to handle ~10,000 draw calls at peak. Mantle and DirectX 12 are designed to handle numbers like 50,000; DX11 is not.


Sorry, what? If that's true, it means Unity is much more optimised, not the other way around.

Are you serious? You somehow think a dual-core Haswell can max out Unity?


EA gets money for implementing Mantle. Vendor-specific APIs should not exist.

Mantle isn't vendor-specific; other devs have been testing and using it as well, and AMD talked about how optimised Frostbite is even without Mantle.

Anykeyer
12-10-2014, 10:00 AM
LOL. You might want to stop reading this ****.

Wrath2Zero
12-10-2014, 10:13 AM
Really? And why is that? You can't take the brutal truth that you're wrong. If you can prove that quote wrong then please go right ahead; please share your in-depth knowledge about DirectX.

Anykeyer
12-10-2014, 10:41 AM
It's obvious it's not true even without going into deep knowledge. Unity runs with its 50k and it runs well; that's all the proof you need.
Besides, since when did that claim have any authority? Last I checked, it came from some troll news website that enjoys starting ****storms out of nothing.


Are you serious? You somehow think a dual-core Haswell can max out Unity?
You said that, not me. The i3 is a 4-thread CPU, btw; HT doubles its real-life performance. You should never compare it with a "simulated dual-core".


Mantle isn't vendor-specific; other devs have been testing and using it as well, and AMD talked about how optimised Frostbite is even without Mantle.
It is vendor-specific: only AMD GPUs based on GCN support it.
Yes, I know they proposed making it the industry standard a few weeks ago, but that's just more BS. It's obviously catered to AMD's GCN and wouldn't remain a "low-level" API on other GPU architectures without some heavy adaptation. On an NVIDIA GPU it's possible it would have even more overhead than DX has.

IMDBestFckDRest
12-10-2014, 10:49 AM
Ehhh, where is this thread going now :/ BTW, I just checked YouTube videos, and Unity on an i5 uses all 4 cores at 100%, while on an i7 it's at 80-85%.

Wrath2Zero
12-10-2014, 10:57 AM
It's obvious it's not true even without going into deep knowledge. Unity runs with its 50k and it runs well; that's all the proof you need.
Besides, since when did that claim have any authority? Last I checked, it came from some troll news website that enjoys starting ****storms out of nothing.


You said that, not me. The i3 is a 4-thread CPU, btw; HT doubles its real-life performance. You should never compare it with a "simulated dual-core".


It is vendor-specific: only AMD GPUs based on GCN support it.
Yes, I know they proposed making it the industry standard a few weeks ago, but that's just more BS. It's obviously catered to AMD's GCN and wouldn't remain a "low-level" API on other GPU architectures without some heavy adaptation. On an NVIDIA GPU it's possible it would have even more overhead than DX has.

Wrong; it's been tuned for GCN, but NVIDIA and Intel can have access to the Mantle source code at no cost.

HT does not double the performance at all; it doubles the threads, not the cores.

Anykeyer
12-10-2014, 11:07 AM
it's been tuned for GCN
That's what I said.

Wrong
You contradict yourself.

HT does not double the performance at all; it doubles the threads, not the cores.
So it doubles the threads just for lulz, so app developers can have fun optimising for 4 threads instead of 2? Sounds legit...

RaulO4
12-10-2014, 07:47 PM
HOLD up, Mako. HT does NOT double the performance; HT helps with the management of instructions going into the CPU, but in the end it's the core that does the work.

Since HT improves the management of instructions, it improves performance. But 4 real cores will always be better than 2 cores with 4 threads; likewise my 4790K has 4 cores and 8 threads but will not be better than a real 8-core CPU (of the same CPU design).


Now, both of you need to stop; you are both right and wrong.

Mako is right that there is no hard limit on calls, BUT DX11, in simple terms, does not handle 10k or 12k calls as smoothly as it does anything below that amount.

As to overhead, Mako is purely right here: no, powering through overhead does not remove the overhead.

For example: you drive a Toyota; let's say it takes 10 seconds to reach 60.

I drive a Lambo; it reaches 60 in 3.5.

If we put weight on my car (overhead), I can still beat your car with 7.5 seconds to 60.

But the overhead is not gone; I am just powering through it.

RVSage
12-10-2014, 08:03 PM
HOLD up, Mako. HT does NOT double the performance; HT helps with the management of instructions going into the CPU, but in the end it's the core that does the work.

Since HT improves the management of instructions, it improves performance. But 4 real cores will always be better than 2 cores with 4 threads; likewise my 4790K has 4 cores and 8 threads but will not be better than a real 8-core CPU (of the same CPU design).


Now, both of you need to stop; you are both right and wrong.

Mako is right that there is no hard limit on calls, BUT DX11, in simple terms, does not handle 10k or 12k calls as smoothly as it does anything below that amount.

As to overhead, Mako is purely right here: no, powering through overhead does not remove the overhead.

For example: you drive a Toyota; let's say it takes 10 seconds to reach 60.

I drive a Lambo; it reaches 60 in 3.5.

If we put weight on my car (overhead), I can still beat your car with 7.5 seconds to 60.

But the overhead is not gone; I am just powering through it.
Okay, let me set some terminology straight here. I work in the processor industry (I am a processor architect).

Simultaneous Multi-Threading (SMT), or Hyper-Threading (HT)


This is a single core running two threads (i.e. two threads share the core's resources). This is a huge improvement in terms of area efficiency for the chip, and up to a 50% improvement in performance for the most optimized code, but never a 2x performance improvement.

Multi-core

This is where each core has its own dedicated resources (note that SMT can be applied on multi-cores, increasing the thread count).
But here is the catch...
It is not advisable to have more than 6 physical cores of the same type for an application processor on a given chip (due to coherency management issues and the inherent limit on available parallelism). However, it is possible to have what is called a big.LITTLE configuration, i.e. high-perf cores paired with low-perf cores (e.g. the Samsung Exynos with 4 A53 and 4 A57 cores).

Note that in big.LITTLE configs, performance won't scale as you'd expect.

And even in 6-identical-core configs it's difficult to get 6x; I would say you can get about 5.7-5.8x.

But yes, better scaling could be achieved in the future, where we could have 8 identical cores, if we invent new coherency management frameworks.
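
To make that last scaling point concrete, here's a quick Amdahl's law sketch (the 1% serial fraction is an assumed value, chosen because it roughly reproduces the 5.7x figure above):

```python
# Amdahl's law: speedup on n cores when a fraction `serial` of the work
# cannot be parallelised (coherency traffic, locks, etc.).
# serial=0.01 is an assumed figure that roughly matches the ~5.7x claim.
def amdahl_speedup(n_cores: int, serial: float) -> float:
    return 1.0 / (serial + (1.0 - serial) / n_cores)

for n in (2, 4, 6, 8):
    print(f"{n} cores -> {amdahl_speedup(n, serial=0.01):.2f}x")
# 6 cores -> ~5.71x, close to the 5.7-5.8x quoted above
```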

RaulO4
12-10-2014, 08:13 PM
^^^^

Thanks for the clearer explanation. Your first point agrees with my statement that HT is not 2x, and everything else seems on point.

class101
12-10-2014, 08:33 PM
I'm at 40-80% usage on an i7 4770K at 4.5GHz: Ultra, 1440p, 2-way SLI 980s.

Anykeyer
12-10-2014, 09:17 PM
The efficiency of HT depends on the CPU and the workload. 100% is reachable; one example is the WinRAR benchmark.
It's also worth noting that Haswell has much wider CPU cores (more ALUs and execution ports) than previous Intel architectures, so HT gives it a larger performance boost (closer to the theoretical 100%).
World records for i5 and i7:
http://hwbot.org/hardware/processor/core_i5_4690k/
http://hwbot.org/hardware/processor/core_i7_4790k/

RVSage
12-10-2014, 10:46 PM
The efficiency of HT depends on the CPU and the workload. 100% is reachable; one example is the WinRAR benchmark.
It's also worth noting that Haswell has much wider CPU cores (more ALUs and execution ports) than previous Intel architectures, so HT gives it a larger performance boost (closer to the theoretical 100%).
World records for i5 and i7:
http://hwbot.org/hardware/processor/core_i5_4690k/
http://hwbot.org/hardware/processor/core_i7_4790k/

You are confusing software threads with SMT. When I run WinRAR, yes, it has two threads, but those are software threads (which, while running on two cores, will give you a 100% speedup, i.e. 2x). That's different from a thread in the "HT" sense. I can assure you the best gain from the extra hardware thread on a single core is 50% max, so WinRAR running on two separate cores will indeed give you close to 2x.

Let me put it clearly again:

Software thread != hardware thread. They are different; Hyper-Threading does not mean you have the equivalent of two full cores for your two software threads. I can go into more intricate detail, but that's beyond the scope of our discussion; I don't want to sound like a prof :P

YazX_
12-10-2014, 10:54 PM
HOLD up, Mako. HT does NOT double the performance; HT helps with the management of instructions going into the CPU, but in the end it's the core that does the work.

Since HT improves the management of instructions, it improves performance. But 4 real cores will always be better than 2 cores with 4 threads; likewise my 4790K has 4 cores and 8 threads but will not be better than a real 8-core CPU (of the same CPU design).



OK, HT is not only management of instructions; it means the CPU core can execute two threads/processes by swapping when one of them is stalled. To put it more simply, say there are two threads/processes to be executed by the CPU core (P1 and P2): P1 goes first and then waits for an I/O operation to complete; the core takes advantage of this waiting time and starts executing P2; once P1's I/O operation is done and it's ready to continue, the core suspends P2 and resumes P1, and so on.

So in theory, if a program uses a proper multi-threading architecture, HT can reach up to 90% of the performance of physical cores, since every thread has some waiting time, especially around branch instructions and data dependencies.

I do agree that physical cores are always better than logical ones, but that doesn't mean logical ones cannot scale very close to physical ones; it depends on the application and how it uses multi-threading, and that's the drawback of HT.
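
Here's a toy model of that stall-filling behaviour, as a sketch (the busy fractions are assumptions for illustration; real pipelines are far messier):

```python
# Toy model of SMT/HT: one thread keeps the core busy a fraction `busy` of
# the time; a second hardware thread fills (some of) the stall gaps.
def smt_speedup(busy: float) -> float:
    combined = min(1.0, 2 * busy)   # two threads can't exceed a full pipeline
    return combined / busy

for busy in (0.5, 0.67, 0.9):
    print(f"thread busy {busy:.0%} (stalled {1 - busy:.0%}) "
          f"-> SMT speedup ~{smt_speedup(busy):.2f}x")
# Heavily stalled code approaches 2x; already-saturated code gains almost
# nothing, which is why real-world HT gains land between ~0% and ~50%.
```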

RVSage
12-10-2014, 10:56 PM
OK, HT is not only management of instructions; it means the CPU core can execute two threads/processes by swapping when one of them is stalled. To put it more simply, say there are two threads/processes to be executed by the CPU core (P1 and P2): P1 goes first and then waits for an I/O operation to complete; the core takes advantage of this waiting time and starts executing P2; once P1's I/O operation is done and it's ready to continue, the core suspends P2 and resumes P1, and so on.

So in theory, if a program uses a proper multi-threading architecture, HT can reach up to 90% of the performance of physical cores, since every thread has some waiting time, especially around branch instructions and data dependencies.

I do agree that physical cores are always better than logical ones, but that doesn't mean logical ones cannot scale very close to physical ones.

In theory, yes, logical cores, as you put it, can get there. But when it comes to actual hardware, no current-gen hardware can achieve it. That's the reality.

RVSage
12-10-2014, 11:12 PM
Okay, let me put up a simple analogy.

I have two classrooms (2 cores) of 2 students (2 threads), and there are 10 questions.

The special thing about the questions:

Each question's answer feeds into the next question's answer.

Analogy map
Students - threads
Classrooms - cores
Questions - CPU instructions (or program steps)
Pencil, paper, eraser - resources (like ALUs, FP units, etc.)

Multi-core analogy
Each student has his own pencil, his own set of question papers and his own eraser. The two can each do one question in parallel: the problem is solved in 5 steps (a 2x speedup; a single person would take 10 steps).

Hyper-Threading analogy
Each student has his own pencil (i.e. some dedicated resources) and a copy of the same set of papers, but they share one eraser. Because they share the eraser, they generally take about 2 extra steps (with speculative execution there will always be mistakes to rub out).
So on top of the 5 steps they take an extra 1-2 steps to finish, and the basic idea of parallelism is that they can't duplicate each other's work.

Which means it's not a 2x improvement.

So basically, due to the sharing of hardware ALUs and other units, it's not possible to achieve more than a ~50% gain on current hardware.
I could refine this further, but again, I don't want to be a prof... I guess this should set things straight.

If the resources were not shared, it would be the same as multi-core, wouldn't it?

Yes, stalls do occur, and there is data dependency and control dependency, but the real limiting factor is the actual hardware resources per core.
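
Running the numbers on that analogy (just the arithmetic from the post, nothing more):

```python
# The classroom analogy as arithmetic: 10 dependent questions, and a
# baseline of 10 steps for one student working alone.
baseline = 10

multicore_steps = 10 / 2       # 5 steps: fully duplicated resources
ht_steps = 10 / 2 + 2          # ~7 steps: the shared eraser costs ~2 extra

print(f"multi-core speedup: {baseline / multicore_steps:.2f}x")  # 2.00x
print(f"HT speedup:         {baseline / ht_steps:.2f}x")         # ~1.43x, under the ~50% ceiling
```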

Wrath2Zero
12-11-2014, 12:57 AM
That's what I said.

You contradict yourself.

So it doubles the threads just for lulz, so app developers can have fun optimising for 4 threads instead of 2? Sounds legit...

Why wouldn't AMD tune Mantle for their own GPUs? It doesn't make any difference, because NVIDIA and Intel have access to the source code, so they can do the same.

Again, double the threads doesn't mean double the performance; get a grip and stop making things up.

Wrath2Zero
12-11-2014, 01:02 AM
The efficiency of HT depends on the CPU and the workload. 100% is reachable; one example is the WinRAR benchmark.
It's also worth noting that Haswell has much wider CPU cores (more ALUs and execution ports) than previous Intel architectures, so HT gives it a larger performance boost (closer to the theoretical 100%).
World records for i5 and i7:
http://hwbot.org/hardware/processor/core_i5_4690k/
http://hwbot.org/hardware/processor/core_i7_4790k/

Wow, a world record; I can do that easily with my old 80 FX-6300: default clock 3.5GHz, overclocked to 4.5GHz stable. Funny as well that it can only be done with those 180 K-series CPUs, because they're unlocked at a premium price; plus I had 6 cores, and it was easily done on air cooling.

http://valid.canardpc.com/06caj0

Intel sell you a 180 CPU with bad solder and cheap paste under the heat spreader with Haswell; no wonder they had heat issues by default, never mind overclocking.

Anykeyer
12-11-2014, 11:21 AM
You are confusing software threads with SMT... Software thread != hardware thread... Hyper-Threading does not mean you have the equivalent of two full cores... I can go into more intricate detail, but that's beyond the scope of our discussion; I don't want to sound like a prof :P

I know. I'm just using Intel's terminology here; they call their "virtual" cores threads: http://ark.intel.com/products/80807. Let's not argue semantics. Doubling either "real" cores or "threads" gives a boost that in theory (under ideal circumstances) reaches 100%, and either can give you 0; it all depends on the app. There are far fewer "ifs" with "real" cores, so having "real" cores is almost always better, that's true.

So basically, due to the sharing of hardware ALUs and other units, it's not possible to achieve more than a ~50% gain on current hardware.

It is proven to be possible. Just look at those hwbot links; in some cases the i7 dominates the i5 with nearly a 2x difference.

Why wouldn't AMD tune Mantle for their own GPUs? It doesn't make any difference, because NVIDIA and Intel have access to the source code, so they can do the same.
It does make a difference. Even if NVIDIA has full access to the source code and can rewrite it all, there is one thing they can't do: change the API itself, or else they'll break compatibility with existing apps, effectively making their own vendor-specific API. Since the whole of Mantle is catered to AMD's GCN, it can be highly inefficient on other hardware.
They had the same situation with CUDA, only the sides were different: it's made by NVIDIA, and all its revisions and compute modes are made with NVIDIA's low-level architecture in mind.
http://www.theinquirer.net/inquirer/news/1432307/amd-won-nvidia-cuda-run-gpus
http://www.tomshardware.com/news/roy-taylor-apu-opencl-cuda-physx,23797.html
Look how AMD despises vendor-specific stuff there. They are right. But at the same time their PR acts completely differently in Mantle's case. Just a bunch of hypocrites; everything they say is dictated by profits (everything NVIDIA says, too), and only fanboys would blindly believe everything they say.
We, as customers, do not need any vendor-specific standards.

Wow, a world record; I can do that easily with my old 80 FX-6300
Do what exactly? Beat any of those records? Go ahead and post it on hwbot then; you'll be the first AMD user to do so in a loooooooong time.
There is no need to remind us that you are a headless AMD fanboy. You know what's funny? I used AMD CPUs back when they didn't suck. But if a product is not fit for my purpose, or there is a much better product, I simply don't buy it, instead of buying it and then blaming the weatherman.

plus I had 6 cores
You might want to read more about the FX CPU architecture. They aren't "full" cores: each FX die has 4 modules, 3 of them enabled in this 6xxx model, and each module has 2 sets of ALUs but shares almost all the other stages between them. Kind of like anti-HT.

EstrayOne
12-12-2014, 01:40 AM
What is Hyper-Threading?

https://www.youtube.com/watch?v=wnS50lJicXc

Wrath2Zero
12-12-2014, 02:39 AM
I know. I'm just using Intel's terminology here; they call their "virtual" cores threads: http://ark.intel.com/products/80807. Let's not argue semantics. Doubling either "real" cores or "threads" gives a boost that in theory (under ideal circumstances) reaches 100%, and either can give you 0; it all depends on the app. There are far fewer "ifs" with "real" cores, so having "real" cores is almost always better, that's true.
It is proven to be possible. Just look at those hwbot links; in some cases the i7 dominates the i5 with nearly a 2x difference.

It does make a difference. Even if NVIDIA has full access to the source code and can rewrite it all, there is one thing they can't do: change the API itself, or else they'll break compatibility with existing apps, effectively making their own vendor-specific API. Since the whole of Mantle is catered to AMD's GCN, it can be highly inefficient on other hardware.
They had the same situation with CUDA, only the sides were different: it's made by NVIDIA, and all its revisions and compute modes are made with NVIDIA's low-level architecture in mind.
http://www.theinquirer.net/inquirer/news/1432307/amd-won-nvidia-cuda-run-gpus
http://www.tomshardware.com/news/roy-taylor-apu-opencl-cuda-physx,23797.html
Look how AMD despises vendor-specific stuff there. They are right. But at the same time their PR acts completely differently in Mantle's case. Just a bunch of hypocrites; everything they say is dictated by profits (everything NVIDIA says, too), and only fanboys would blindly believe everything they say.
We, as customers, do not need any vendor-specific standards.

Do what exactly? Beat any of those records? Go ahead and post it on hwbot then; you'll be the first AMD user to do so in a loooooooong time.
There is no need to remind us that you are a headless AMD fanboy. You know what's funny? I used AMD CPUs back when they didn't suck. But if a product is not fit for my purpose, or there is a much better product, I simply don't buy it, instead of buying it and then blaming the weatherman.

You might want to read more about the FX CPU architecture. They aren't "full" cores: each FX die has 4 modules, 3 of them enabled in this 6xxx model, and each module has 2 sets of ALUs but shares almost all the other stages between them. Kind of like anti-HT.

I know the architecture, and it's not about being a "fanboy"; it's the simple fact that only the K versions of Intel CPUs can overclock that much, and even that is nothing special compared to AMD CPUs, which are unlocked. While Intel are still selling dual/quad cores in 2014 with powerful single-thread performance, they are offering no innovation at all to developers or gamers, just MOAR frame rate for a higher price. 6-8 core Intel CPUs are out of most people's price range, anyone who buys them for games has more money than sense, and their IGP performance is a joke. The "plus I had 6 cores" remark was about heat: they can overclock to 4.5GHz+ while using 6-8 cores, unlike the i3/i5/i7, which have 2-4 cores and still have problems, especially since Haswell's release was a joke with cheap solder and paste.

Intel are stuck in 2006, and the Intel fans are still buying dual-cores while devs keep programming for a single thread.

RaulO4
12-12-2014, 02:49 AM
^^^ Devs have been moving away from dual-core really fast ever since the PS4 came out; by next year engines will be coded with multi-core in mind.

Dual-cores are even being removed from phones... so putting money into a dual-core is the fastest way to lose money within the next year.

Numbtoyou
12-12-2014, 04:01 AM
I know the architecture, and it's not about being a "fanboy"; it's the simple fact that only the K versions of Intel CPUs can overclock that much, and even that is nothing special compared to AMD CPUs, which are unlocked. While Intel are still selling dual/quad cores in 2014 with powerful single-thread performance, they are offering no innovation at all to developers or gamers, just MOAR frame rate for a higher price. 6-8 core Intel CPUs are out of most people's price range, anyone who buys them for games has more money than sense, and their IGP performance is a joke. The "plus I had 6 cores" remark was about heat: they can overclock to 4.5GHz+ while using 6-8 cores, unlike the i3/i5/i7, which have 2-4 cores and still have problems, especially since Haswell's release was a joke with cheap solder and paste.

Intel are stuck in 2006, and the Intel fans are still buying dual-cores while devs keep programming for a single thread.

My hex-core i7 is at 4.6GHz and working just fine... Getting rid of the AMD GPU brought the temperature of the room down a fair bit too...

Wrath2Zero
12-12-2014, 04:41 AM
My hex-core i7 is at 4.6GHz and working just fine... Getting rid of the AMD GPU brought the temperature of the room down a fair bit too...

That's funny, man; you're doing it wrong then. Glad your 300/445 CPU doesn't get too hot when you couldn't even keep your case cool, what a noob. I guess you're going to have a go at AMD's power usage next, when it's irrelevant to anyone who knows anything: OMG, a 125W AMD CPU, yet Intel brings out a 140W 6-core and a 140W 8-core with a 3.0GHz clock speed at 3x/4x the price of AMD.

RaulO4
12-12-2014, 04:47 AM
That's funny, man; you're doing it wrong then. Glad your 350 CPU doesn't get too hot when you couldn't even keep your case cool, what a noob.

It's not about keeping your case cool; you can get your case cool, but your room is still hot. Why? Because the heat needs to go somewhere; in the case of the PC case, its job is to move heat away from the parts quickly.

So if your parts create less heat for a given level of performance, your room will be less hot.

The temperature of my room dropped when I switched to a 970 and a 4790K.
No matter whether your PC temps are 30°C or 90°C, your room will catch the heat your PC is throwing out.


Now, AMD CPUs are sweet all the way up to the $250-300 price range, but once you hit 300 and up... it's all Intel, that is just fact.

But going with an AMD 8350 is rock solid; I picked one up in a $100 sale and I'm putting it into my bro's PC.

Wrath2Zero
12-12-2014, 04:58 AM
Just get better cooling then; it's got nothing to do with AMD being hotter, and there are plenty of liquid coolers on the market. There is simply no need to pay such a stupid price for a CPU for games anyway. Games don't multi-thread that well, so is Intel's Enthusiast series any use at that price? No. AMD have great multi-threading performance at a great price, but Intel are stuck selling dual and quad cores to the mainstream while charging 3 to 6 times as much for 6-8 cores.

On top of that, we get poorly optimized games from Ubistutter which invalidate our hardware.

RaulO4
12-12-2014, 05:09 AM
Just get better cooling then; it's got nothing to do with AMD being hotter, and there are plenty of liquid coolers on the market. There is simply no need to pay such a stupid price for a CPU for games anyway. Games don't multi-thread that well, so is Intel's Enthusiast series any use at that price? No. AMD have great multi-threading performance at a great price, but Intel are stuck selling dual and quad cores to the mainstream while charging 3 to 6 times as much for 6-8 cores.

It's not about the cooling; I am talking about the temperature in your room. No matter if you use the best cooler in the world, it works by moving the heat from your CPU to somewhere else, in this... case, your... case.

Now yes, AMD at that $200 price point is solid, but the performance it takes for AMD to reach my 4790K at 4.0GHz comes at a higher clock speed and wattage, which makes more heat, and so in turn heats your room even more. (Note: I'm talking about the heat in the room; I would always tell someone to OC an 8350 to save the money.)

AMD have an 8-core CPU (well, it's really a two-cores-per-module thing), but the 4 cores of my system outperform them.

For the price it's solid, but the 8350 needs to be OC'd to 4.5-4.7GHz to hit my 4790K's performance at stock; once I OC my CPU it will never even get close.
I play games, model, edit and design maps, so I needed that little bit more power (I don't have the $$$ for a workstation PC, so this was the best I could do).

It was bound to happen: AMD stopped giving **** and just went crazy with their APUs. The 8350 is an old design that is really great for the price.

Anykeyer
12-12-2014, 07:49 AM
I know the architecture, and it's not about being a "fanboy"; it's the simple fact that only the K versions of Intel CPUs can overclock that much, and even that is nothing special compared to AMD CPUs, which are unlocked. While Intel are still selling dual/quad cores in 2014 with powerful single-thread performance, they are offering no innovation at all to developers or gamers, just MOAR frame rate for a higher price. 6-8 core Intel CPUs are out of most people's price range, anyone who buys them for games has more money than sense, and their IGP performance is a joke. The "plus I had 6 cores" remark was about heat: they can overclock to 4.5GHz+ while using 6-8 cores, unlike the i3/i5/i7, which have 2-4 cores and still have problems, especially since Haswell's release was a joke with cheap solder and paste.

Intel are stuck in 2006, and the Intel fans are still buying dual-cores while devs keep programming for a single thread.

That's the kind of speech I would expect from a fanboy. Increasing the core count is not innovation; improving the architecture is.

6-8 core Intel CPUs are out of most people's price range, anyone who buys them for games has more money
You don't need Intel's 6-8 cores for games; 4 is more than enough to outperform the competition in most games, funny man. That's why people buy them. But of course buying an inferior product and then complaining is much more sensible. LOL

and their IGP performance is a joke
You know what's funny? You don't even think about "minor" things like purpose or intended use. Yes, their IGP is weak, but why should I care? I'm using a discrete GPU anyway.

The "plus I had 6 cores" remark was about heat: they can overclock to 4.5GHz+ while using 6-8 cores, unlike the i3/i5/i7, which have 2-4 cores and still have problems, especially since Haswell's release was a joke with cheap solder and paste.
They still outperform your wonderful 6-core FX; that's the only thing that matters.
You think AMD is some kind of charity, good guys coming to the rescue of the poor and hungry? They were the first to release a consumer CPU at the $999 price point. They had the technological lead and wanted to exploit it; they are a business, after all.
Stop being a fanboy. Both companies want nothing more than your money.


That's funny, man; you're doing it wrong then. Glad your 300/445 CPU doesn't get too hot when you couldn't even keep your case cool, what a noob. I guess you're going to have a go at AMD's power usage next, when it's irrelevant to anyone who knows anything: OMG, a 125W AMD CPU, yet Intel brings out a 140W 6-core and a 140W 8-core with a 3.0GHz clock speed at 3x/4x the price of AMD.
What's funny is your ignorance. Stop just counting cores: Intel has much better performance/watt and about the same performance/price (except the i7 Extreme). If you don't want Intel i7 Extreme prices, blame AMD. That's right: AMD gives no competition, so Intel is free to set any prices they want. That's how it works outside of your fairy land.

RaulO4
12-12-2014, 08:26 AM
...I hate what I'm about to say, but Mako is on point.

YazX_
12-12-2014, 11:49 PM
You think AMD is some kind of charity, good guys coming to the rescue of the poor and hungry? They were the first to release a consumer CPU at the $999 price point. They had the technological lead and wanted to exploit it; they are a business, after all.
Stop being a fanboy. Both companies want nothing more than your money.


That summarizes it all: whether Intel, NVIDIA or AMD, they want your money, and if one has the lead with no competition, they will manipulate the market the way they wish. Unfortunately, AMD had the lead before, especially in x64 architecture (the old Athlon days), but Intel caught up very fast, and for almost 7 consecutive years there has been no actual competition from AMD in high-end CPUs. The same goes for GPUs: ATI had the lead, then traded it back and forth with NVIDIA, and unfortunately when AMD bought ATI things got worse and they lost the lead, though at least the GPU department is doing much better than the CPU one. I remember the old story about the NVIDIA FX 5800 and its fan noise, when ATI had the crown; NVIDIA took the criticism well and released this funny video:



http://www.youtube.com/watch?v=WOVjZqC1AE4

Wrath2Zero
12-13-2014, 03:33 AM
Well, I apologise for ranting about Intel. I was so close to getting an i5/Z97 and then just couldn't because of the price, paying a lot more for the CPU and the K version on top; it's really frustrating. After a few weeks of thinking about it and browsing the web for reviews, benchmarks and tests, I just came to the conclusion that I'm better off sticking with AMD and getting an 8350, since my system is solid and performs well with the games I have. Why should I pay 120 extra for an Intel system with a small boost in performance, on a CPU I can't overclock much unless I pay a premium for the K version? I just couldn't bring myself to do it, purely on principle.

And no, Intel do not have a good performance/price ratio with any of their new CPUs: https://www.cpubenchmark.net/cpu_value_available.html

AC_Mako just thinks he's right because he says so, with a lot of misplaced knowledge...

Anykeyer
12-13-2014, 04:11 PM
You should really stop with the personal attacks; you can't beat me on this field :cool:
What you still fail to understand is that most people don't buy products, they buy solutions. When I bought the CPU for my gaming PC, I looked at CPU performance in games, and in most games even the i3 beats the 8350. You can be angry for the rest of your life and blame the game makers, but I prefer a more practical, selfish approach: I chose an Intel CPU for my current PC, and if AMD comes out on top I'll buy an AMD CPU. Couldn't be easier.
As for "paying extra": lower-end products tend to have the most value, in almost every market, and it's up to the consumer to find the sweet spot. There's nothing wrong with buying an i7 Extreme. Or do you consider everyone who buys a Porsche an idiot?

Wrath2Zero
12-13-2014, 07:59 PM
Again, you're lacking in knowledge about the i3 beating an 8350: it happens because single-threaded games are still here in 2014, like in 2006 when dual-cores were the norm. Anyone who buys a dual-core is clearly getting a bad deal for other things, not just good performance in single-threaded games that are poorly optimised. With the knowledge you claim to have, you can't even explain the real reason why a dual-core beats a 6-8 core. It doesn't happen between Intel CPUs, and there is virtually no difference at max settings because the game goes GPU bound anyway and cores scale as expected on every CPU.

As for the Porsche comparison: yes, you're rather silly if you drive it around a city all day and can never fully feel its performance on speed-limited roads, rather like bad console ports and locked frame rates. :p

YazX_
12-13-2014, 10:55 PM
Again, you're lacking in knowledge about the i3 beating an 8350: it happens because single-threaded games are still here in 2014, like in 2006 when dual-cores were the norm. Anyone who buys a dual-core is clearly getting a bad deal for other things, not just good performance in single-threaded games that are poorly optimised. With the knowledge you claim to have, you can't even explain the real reason why a dual-core beats a 6-8 core. It doesn't happen between Intel CPUs, and there is virtually no difference at max settings because the game goes GPU bound anyway and cores scale as expected on every CPU.

As for the Porsche comparison: yes, you're rather silly if you drive it around a city all day and can never fully feel its performance on speed-limited roads, rather like bad console ports and locked frame rates. :p

It is what it is. Whether 2006 or 2014, AMD shines on multi-threaded performance, but when it comes to single-threaded it just falls way behind, in games or any other software, since games are software in the end; you can code your software to run on a single thread or across multiple threads, so it always comes down to the application and how it's coded.

Now, you cannot change the world; every developer codes in a different way. I'm not talking about games or Ubisoft specifically: with perfectly coded multi-threaded software, yes, the AMD FX 8350 is the best bang for the buck, since it can deliver good performance with all 8 cores running flat out, but most software isn't written that way, and you cannot change that.

I have written a lot of software in my life, from simple single-threaded programs to heavy multi-threaded enterprise applications. Multi-threading is not easy to do; it's not like setting a flag in code and you are done. It needs a lot of work and synchronisation between threads, and it increases the overall complexity of the application, and that's without mentioning thread safety and other things. So most developers in general try to minimise the effort of multi-threading and keep the software either single-threaded or running a minimal number of threads. For ACU specifically, they have done an amazing job with CPU scaling and multi-threading here.

Yes, believe me, I would really love for that to change, so I could get a $170 CPU that gives me awesome performance instead of paying $350+ for Intel; everyone loves to save money, right? But the sad truth is it's not going to happen for the vast majority of software, especially games. That's why people who can afford Intel get it, so they can have good performance on single- and multi-threaded applications without the hassle.

And as Mako said, Intel and AMD know that more than anyone else, and prices are set based on competing products. If AMD had a decent CPU that could compete on single- and multi-threaded applications, don't think they would sell it for $170; it would be on par with Intel's i7 price, which is $350, or $1k for the very high end.

Wrath2Zero
12-14-2014, 01:26 AM
Meh, your opinion not mine.

Anykeyer
12-15-2014, 12:17 PM
Again, you're lacking in knowledge about the i3 beating an 8350: it happens because single-threaded games are still here in 2014, like in 2006 when dual-cores were the norm. Anyone who buys a dual-core is clearly getting a bad deal for other things, not just good performance in single-threaded games that are poorly optimised. With the knowledge you claim to have, you can't even explain the real reason why a dual-core beats a 6-8 core. It doesn't happen between Intel CPUs, and there is virtually no difference at max settings because the game goes GPU bound anyway and cores scale as expected on every CPU.

As for the Porsche comparison: yes, you're rather silly if you drive it around a city all day and can never fully feel its performance on speed-limited roads, rather like bad console ports and locked frame rates. :p

I know exactly why the i3 beats the 8350. Does that change anything? Can I fix it myself?
Then why would I buy a theoretically good product when in practice it doesn't live up to expectations? That's stupid. It doesn't matter why it happens, only that it happens.
I never recommended buying an i3 (or any other low-end CPU) for games, btw. And again, the i3 is not just a dual-core: it supports HT, and apps treat it as a 4-core CPU. It beats pure dual-core Pentiums almost everywhere. But for any serious PC the absolute minimum is an i5. Its price is higher because it's the superior product; even AMD understands that, or else they wouldn't sell their FX line so cheap. From a consumer's POV the i5 just works: you don't have to worry about stuff like multi-threaded optimisation, you just get good performance in every case.
As for the Porsche, you'll feel it even on speed-limited city streets, because it accelerates and turns much better too.

TheOutsider-NL-
12-15-2014, 07:40 PM
Do you guys have Twitch running in Uplay? That's a horror for your PC.

RaulO4
12-15-2014, 08:04 PM
Dude, both of you, move on.

Like it or not, Intel runs better than AMD; it's a fact, because they have released new tech and AMD has not, their FX line-up being from 2012.

AMD has the better price point for the $200-and-under range.

Those are the facts, until Intel releases a new CPU line-up and we run all the tests over again and see if AMD is still king of that $200 range... :confused:

Wrath2Zero
12-16-2014, 10:03 AM
I know exactly why the i3 beats the 8350. Does that change anything? Can I fix it myself?
Then why would I buy a theoretically good product when in practice it doesn't live up to expectations? That's stupid. It doesn't matter why it happens, only that it happens.
I never recommended buying an i3 (or any other low-end CPU) for games, btw. And again, the i3 is not just a dual-core: it supports HT, and apps treat it as a 4-core CPU. It beats pure dual-core Pentiums almost everywhere. But for any serious PC the absolute minimum is an i5. Its price is higher because it's the superior product; even AMD understands that, or else they wouldn't sell their FX line so cheap. From a consumer's POV the i5 just works: you don't have to worry about stuff like multi-threaded optimisation, you just get good performance in every case.
As for the Porsche, you'll feel it even on speed-limited city streets, because it accelerates and turns much better too.

You can't claim something is a superior product based on single-thread performance and a higher cost, especially when games go GPU bound and the CPU matters less and less the higher the resolution. Both CPUs do a great job as long as the GPU is not bottlenecked by the CPU anyway. HT, now that's funny: half the time you have to disable it because it causes problems, games rarely take advantage of it, and Intel can't even make a quad-core for a competitive price like AMD can. The value in Intel CPUs is nowhere; you just pay more for single-core performance and even more for unlocked CPUs.

Anykeyer
12-16-2014, 12:00 PM
You can't claim something is a superior product based on single-thread performance and a higher cost
I can, because it is. Even AMD itself agrees, both through their FX pricing and through their new CEO's words. Only fanboys like you still can't face the obvious.

HT, now that's funny: half the time you have to disable it because it causes problems, games rarely take advantage of it, and Intel can't even make a quad-core for a competitive price like AMD can
You're still just counting CPU cores, and that's not even funny now, that's just cute. The full GFLOPS performance of 4 Intel cores is about the same as 8 AMD cores (even more if the AVX instruction set is properly utilised). In this case fewer cores is actually an advantage, because it's much easier for software makers to utilise 4 cores.
HT doesn't cause problems half of the time; that's just more of your fanboy BS. And apps/games aren't supposed to be HT-aware; to them, all cores are the same. Every single app that can properly use multiple cores will take advantage of HT automatically. Only the OS thread scheduler acts differently (on HT-aware operating systems); its task is to distribute software threads evenly between "physical" cores for maximum efficiency.

The value in Intel CPUs is nowhere; you just pay more for single-core performance
Ha. All in all, in average modern apps you'll get about the same performance from Intel and AMD at almost any given price point.

Wrath2Zero
12-16-2014, 12:11 PM
We know Intel have better single-core performance; you keep bringing that up as if it's new. And what did the AMD CEO ACTUALLY say? AMD can't be beaten on price/performance; I posted a chart for you, which you ignored. HT is more complicated than you think; it's not that straightforward.

http://i.imgur.com/cFsg7Rml.jpg (http://imgur.com/cFsg7Rm)

Anykeyer
12-16-2014, 12:44 PM
http://www.pcgamer.com/amd-next-generation-microarchitecture-will-make-up-for-muted-bulldozer-reception/

AMD can't be beaten on price/performance
No one says they're beaten, but they aren't leading either.

I posted a chart for you, which you ignored
Because I considered it already covered indirectly. But if you insist: that chart is from a synthetic test. While the results can make for some fun reading, they shouldn't be the basis of your buying decisions; take the apps you actually use frequently and look at prices.

HT is more complicated than you think; it's not that straightforward
HT helps the i3, yes, because of the 4 threads, but HT on anything else is crap...
It is simple: HT is "crap" only because most apps don't use more than 4 threads. In a way the i7 and the FX 8xxx are in the same boat. I already gave you the hwbot results; in some tests the 4-core i7 is nearly 2 times faster than the i5.

Wrath2Zero
12-16-2014, 01:03 PM
meh....

http://i.imgur.com/HvoqQGH.png

http://i.imgur.com/VfEh4uf.png

http://i.imgur.com/mPp0CnQ.png


Single Card

http://i.imgur.com/72TytNI.png

http://i.imgur.com/QHNVb0u.png

http://i.imgur.com/8WY1ciu.png

Anykeyer
12-16-2014, 01:18 PM
Good for you if you only play those games.

Wrath2Zero
12-16-2014, 01:34 PM
It's not about "those games"; it's about being GPU bound to the point that the CPU doesn't matter much. There are games where the Intel CPUs destroy the AMD CPUs, like ARMA 3, the Total War series and Skyrim, but those are so CPU-bound and single-threaded that Intel wins hands down.

Anykeyer
12-16-2014, 02:23 PM
In the end it's about the games and apps you run. This year I've played plenty of CPU-bound games. In some rare cases the difference isn't even in FPS: for example, Wolfenstein has its FPS capped, but the whole megatexture thing is based on constant texture transcoding, and a weaker CPU leads to noticeable delays, which lead to texture popping.
The AC series in particular has always been both CPU- and GPU-heavy. You need a powerful CPU (with both single- and multi-threaded performance) and a powerful GPU to get comfortable frame rates at high/ultra settings; degrading either component gives a noticeable performance hit.

RaulO4
12-16-2014, 10:08 PM
So from that chart, we should buy the Intel 4690K over the 8350: for $40 more you get better performance without OC on both multi-core and single-threaded workloads, and then to top it off you can OC it.

Also, if anyone is looking for a CPU right now, Amazon has the 4670K for $220, a great deal.