So we've seen the leaked Zen core diagram; the question is whether AMD/Keller can deliver:
1. New core design with at least 50% IPC improvement
2. New cache (AMD weak point) and interconnect (HyperTransport is now a bottleneck)
3. SMT (it took Intel years to get this right)
4. DDR4 controller (again, AMD weak point)
5. New instructions
All of this on an R&D budget that is 1/10th of Intel's.
Realistically we can hope for something with Ivy Bridge performance, Haswell if we're lucky at the start.
He made the previously unsuccessful PA Semi work, and then Apple...
He also made AMD work in the past. Remember, prior to the K7, AMD was a bargain-bin vendor making Intel clones.
>Intel themselves were slow to become competitive in the workstation market.
Zen will be a great arch, whether it will beat intel though, that's yet to be seen and any hype now is just jumping the gun.
Wait for REAL benchmarks next year.
Servers were always going to be customer no. 1.
They can't keep selling Interlagos forever.
I want to believe, but I'll wait until I see benchmarks.
I'm very happy they've ditched CMT and gone with a 6-wide pipeline this time (I'm thinking 4x2 like Apple's Cyclone, which Jim developed...)
2016 is a long way away. AMD might luck out, since it seems Intel won't have 10nm until 2017, so 2016 processors will likely be Skylake Refresh models. Zen could be an interesting alternative to Intel for the first time in years at the mid to low end. I'm expecting Ivy-to-Haswell performance too, though with lower efficiency in both IPC and perf/W.
I doubt that Samsung's/GloFo's 14nm process is much better than Intel's 22nm; combine that with Intel's advanced position on IPC and it doesn't look good.
>I doubt that Samsung's/GloFo 14nm process is much better than Intel's 22nm
Okay, shill. Whatever you say, shill.
Contacted gate pitch:
Intel's 22nm Trigate is 90nm
Intel's 14nm Trigate is 70nm
Samsung's 14nm FinFET is 78nm
TSMC's 16nm FinFET is 90nm

Minimum metal pitch:
Intel's 22nm Trigate is 80nm
Intel's 14nm Trigate is 52nm for the mobile line, 60nm for normal-power chips
Samsung's 14nm FinFET is 64nm
TSMC's 16nm FinFET is 64nm

High-density SRAM cell area:
Intel's 22nm Trigate is 0.1080µm²
Intel's 14nm Trigate is 0.0588µm²
Samsung's 14nm FinFET is 0.0645µm²
TSMC's 16nm FinFET is 0.0700µm²

Fin pitch:
Intel's 22nm Trigate is 60nm
Intel's 14nm Trigate is 42nm
Samsung's 14nm FinFET is 48nm
TSMC's 16nm FinFET is 48nm
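For scale, the HD cell areas above translate directly into raw cache-array area. A quick sketch in Python; note this counts only the bit cells and ignores sense amps, decoders, and redundancy, so real cache macros are noticeably larger:

```python
# Raw SRAM array area implied by the HD cell sizes listed above.
# Peripheral circuitry (sense amps, decoders, redundancy) is ignored,
# so a real cache macro is noticeably larger; illustration only.

CELL_UM2 = {
    "Intel 22nm": 0.1080,
    "Intel 14nm": 0.0588,
    "Samsung 14nm": 0.0645,
    "TSMC 16nm": 0.0700,
}

def array_area_mm2(cache_bytes: int, cell_um2: float) -> float:
    bits = cache_bytes * 8          # one cell stores one bit
    return bits * cell_um2 / 1e6    # 1 mm^2 = 1e6 um^2

for node, cell in CELL_UM2.items():
    area = array_area_mm2(8 * 2**20, cell)  # an 8 MiB L3, say
    print(f"8 MiB on {node}: {area:.2f} mm^2")
```

An 8 MiB array comes out to roughly 4-7 mm² across these nodes, which is why cell area matters for big-cache server dies, even though (as argued below) it says nothing about power.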
Yes, and see what happens when they use Samsung's process for high power. Those SRAM cell sizes will balloon. It will be better than Intel's 22nm, but not by much. Combine that with Intel's significant lead in architecture, and my guess is that Haswell IPC and perf/W will be better than Zen's.
SRAM cell size has literally nothing to do with the power envelope of a given chip fabbed on the node. The density of all the SRAM offered on a specific node is a constant. An HD SRAM cell will be 0.0645µm² whether it's in an ARM SoC or some behemoth HPC chip.
Tech illiterate babby retards shouldn't try to talk out of their ass on topics they know nothing about.
Samsung's 14nm FinFET process has utterly massive advantages over Intel's 22nm Trigate when it comes to both area scaling and drive currents.
You're both speculating.
Intel 22nm was terrible.
Intel 14nm is pretty decent.
Apple's SoC, which is 14nm Samsung (Texas, USA), is so-so.
Samsung's SoC, which is 14nm GlobalFoundries (Albany, NY, USA) (see Chipworks), is pretty decent, but still only a low-power example.
We won't see a real high-power example of Samsung/GF 14nm until next year - Zen and Arctic Islands are GF 14nm.
I'm not speculating, I'm posting clear objective facts.
The Exynos 7420 is not fabbed at GloFo's #8 facility. It is made in one of Samsung's fabs.
Don't try to correct someone when you don't have the slightest fucking clue.
That is intel's specific implementation, you stupid child.
Intel also has totally different BEOL for their ULV, mainstream, and server chips.
They're intentionally altering the size of their cells to control leakage, due to how they're implemented. An HD SRAM cell does not have to change size with the power envelope.
>And now at ISSCC2014, Samsung presented similar results for 14nm FinFet as shown in the slide below.
Sorry you can't handle facts.
This is all completely tangential to the central fallacy raised by this shit-eating child. The size of the cache cells is just one incredibly tiny metric that contributes to the area scaling of a process. Samsung's 14nm node has huge advantages across the board compared to Intel's 22nm node.
Man, just considering this possibility makes me kinda regret my upgrade to a 4790K. I doubt any cheap AMD model will match the 4790K's single-core performance, but I do a lot of work that could legitimately make use of all 16 cores if such a model were to cheaply exist.
>Realistically we can hope for something with Ivy Bridge performance, Haswell if we're lucky at the start.
That would be pretty decent. Still, rumors say Skylake will bring a fairly significant IPC improvement, and by the time Zen comes out Intel will have improved further on that (even if after Skylake we just get a shitty 5% more performance). So AMD would have to do even better than Ivy or Haswell, or keep pushing more cores while still providing solid single-threaded performance. What they can't afford, if they hope to be competitive on performance again, is Intel being better at both.
Yes, and what do you think Zen will be developed on? Heck, it will probably need a third UHP process variant with even larger cells. So to make a comparison to Intel's 22nm, you need to multiply Samsung's cell size by a factor over 1.
Also, on Intel 22nm, SRAM cell size is 0.108µm² for the performance cell and 0.092µm² for the density cell.
You don't even understand why the cells themselves are physically larger.
How Intel does their cache does not equal how everyone else does cache. How Intel designs their varied BEOL does not apply to other fabs.
And again, this is one of the least significant things when it comes to area scaling, let alone performance. The fact that you're so fixated on this one thing is what makes you look like a stupid child.
>90nm cpp vs 78nm cpp
>b-b-but da caches
>intel is still better!
AMD has been trying to slash everything but the R&D budget, hence why their marketing continues to be shit.
So let's assume around 200 million for this quarter, following the trend.
That, and I believe R&D for certain semi-custom projects is mostly funded by the other company (e.g., the APUs in the XB1 and PS4). Same for joint projects, like AMD and SK Hynix working on HBM. The products coming out now started off with higher budgets, at least on paper.
So they're trying to stretch their research and expertise as far as they can, such that one innovation can be used for multiple projects. Like the rumors about HBM on the 300 series, and HBM on future APUs/CPUs. At this point, the whole idea of Fusion/SoC-ization/CPU-GPU integration lets them research GPUs and CPUs on a smaller budget.
And they dumped SeaMicro. Even dumping Mantle fits; I highly doubt AMD was aiming for market dominance with Mantle. After it was incorporated into DX12 and Vulkan, APIs are now moving in a direction that directly benefits AMD's CPUs, APUs, and GPUs.
A great uarch will finally create some fucking revenue in their computing segment, and also spur APUs and everything else. A decent-or-worse uarch will mean $250 Celerons and entry-level Pascal AIBs. So as for their commitment to Zen, it's "God IPC or bankruptcy."
This is not about BEOL, albeit Samsung's 14nm BEOL is 20nm-class. You realize that for a high-performance chip like Zen you are going to be driving a high frequency, so you cannot have a high transistor density, because of thermal density. There is a reason why high-density chips like GPUs only operate around 1GHz. You are not going to want slow SRAM in a high-performance CPU like Zen, so they are going to need large cells for Zen. Therefore a comparison of Zen vs. an Intel CPU has to take this into account.
>just keeps grasping at straws
GPUs operate in the clock range that the pipeline of their arch allows for the given drive current that is attainable at the target TDP.
Intel's various cache designs are a direct result of their BEOL in a given line. The cache cells are designed around the BEOL.
Stop. Talking. Out. Of. Your. Ass.
>slash everything but RnD
Good on them.
I hope it works out as well. In the past marketing has had better returns than research.
Just hope it's good enough to warrant shilling.
Skylake is delayed, and appears to be focusing on other shit like wireless charging and video encoding and some multi-threading tweaks.
Fab-wise, 14nm is pretty much the wall for the fanciest multi-mask argon-fluoride litho with FinFETs (or tri-gates), and Intel can't wish that away. Look at all the delays with Broadwell. Until I see a 14nm chip that's not meant for a plebphonetablet or a laptop, I'm not expecting a timely Skylake.
Plus, Intel is still focusing on those mobile/portable markets, and it's clear that desktop performance is not a priority.
Cache cells are designed around the performance needs of the processor while taking into account the limits of the process node.
If GPUs were just about target TDP, then why are there no GPUs clocked at 3-4GHz with 1/4 the transistors? Nvidia and AMD would certainly do this if possible; it would cut their wafer requirements significantly. The reason is that it is not possible to run 3-4GHz on chips with high transistor densities, because of thermal density issues.
>what is the relationship between voltage, clocks, leakage, and power consumption
10 ALUs at 500MHz draw less power than 5 ALUs at 1000MHz.
GPUs naturally exploit this being massively parallel processors. Lower clocks allow you more headroom in your power envelope to add more compute power.
No chip anywhere is reaching the limitations brought about by self heating. Not even remotely close.
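The wide-and-slow vs. narrow-and-fast tradeoff falls straight out of the dynamic-power relation P ≈ α·C·f·V². A toy model in Python; every constant below (capacitance, activity factor, and especially the voltage points) is invented for illustration, since real DVFS curves are process-specific:

```python
# Toy dynamic-power model: P = alpha * C * f * V^2 per execution unit.
# The capacitance, activity factor, and voltages below are made-up
# illustrative values, not real silicon data.

def dynamic_power_w(n_units, f_hz, v_volts, c_farads=1e-9, alpha=0.2):
    return n_units * alpha * c_farads * f_hz * v_volts ** 2

# Same theoretical throughput (units x clock), but the fast design
# needs a higher voltage to close timing, and V enters squared:
wide_slow   = dynamic_power_w(10, 500e6, 0.80)   # 10 ALUs @ 500 MHz
narrow_fast = dynamic_power_w(5, 1000e6, 1.05)   # 5 ALUs @ 1 GHz

print(f"wide/slow:   {wide_slow:.2f} W")
print(f"narrow/fast: {narrow_fast:.2f} W")
```

With these made-up numbers the narrow design burns roughly 70% more power for the same nominal throughput, which is exactly the headroom GPUs spend on more units instead of more clocks.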
AMD hasn't delivered on their promises of great processors to compete with Intel in years and years.
I don't even care what the promises or specs are until we see it actually come out.
No more bulldozer/piledriver please.
I can only recommend AMD parts for people on a tight budget where AMD does best.
AMD doesn't even compete in the budget range anymore. The i3-4160 is a much better buy than the FX-6300.
The only place where AMD is competitive is if someone needs a cheap multithreaded rig, which isn't too common.
You avoided the question. 5 ALUs at 1000MHz will require half the wafer space. Why are Nvidia and AMD not making GPUs with 3GHz clocks that would require 1/3 the silicon, perform the same as the big 1GHz GPUs, and have the same TDP? (Because such clocks are impossible at such high densities.)
I'd still rather have a 6300 over an i3, honestly, because a few games don't work right on 2 cores + HT.
I agree on pure performance, but compatibility-wise for gaming I still recommend the 6300 or 8320 without there being a problem.
They obviously don't have enough money to research and develop processors or video cards as fast as the competition, while the competition sorta walks around waiting.
It's like that guy you root for who's trying extremely hard but still losing to some asshole who wins easily while holding back; because he doesn't have to do any more, he never blows him away.
Speaking of budgets... what would you recommend for ~$200?
I ordered a 290X Tri-X, and my Phenom II X4 965 just needs at least some upgrade until the new chips come out.
FX-6300 + MSI Gaming
i5-4440 + H81?
Both can be had for about $200.
This thread is a little above my head, but I hope Zen turns out good. I'll use it for sure in one of my builds. I don't want AMD to go away and leave us to be fucked by Intel.
One question though: what NICs do higher-end AMD boards use? I don't think AMD has its own NIC like Intel does.
What do you mean?
Newegg had it on sale for $280. That's the lowest I've seen it in 6 months, so I grabbed it.
My CPU isn't really up to the task of keeping up with the GPU, so I need any upgrade I can get, and unfortunately my budget now is only $200.
Why would you put anything other than stock cooling on a CPU that doesn't overclock?
Get the i5-4440 or 4460 (I got the 4460 and it's been great, even with a 980).
In the past when I got overclockable CPUs, I got better coolers than what they come with, but it's just not needed here.
The i5s are all another step up from the 6300.
Who's upgrading within the same socket a year or two later, though, while it's still in use?
Someone on a budget getting an i3 or 6300 isn't going to spend more in a year or two to upgrade. These people are not enthusiasts upgrading every year. They buy a computer and stick with it 3-4 years, and by that time there's another socket anyway.
What you're saying is true, but it doesn't apply to the real world.
He really shouldn't have to base a purchase decision on us.
Someone who's too retarded to use the world's best database humans have ever created, to simply type a few words into YouTube or Google to compare parts, doesn't deserve to have those parts.
People arguing in 970 threads, or back in the Bulldozer threads, can just take 5 seconds and get an answer from people who have all the hardware to test and share results.
You can even add in the specific software you want to use; a specific game like GTA 5, which came out lately, has dozens of side-by-side comparisons.
God /g/ is so retarded lately
Not the same anon that you've been arguing with, but I'm going to reply because I've been enjoying the back and forth.
The reason that you don't see AMD/Nvidia/Intel/whoever making the proverbial 5 ALU at 1000 MHz over the 10 ALU at 500 MHz comes down to a variety of reasons, few of which would be related to heat.
1. The former IC wouldn't be exactly half the size of the latter; there's a bunch of other overhead to deal with as well. Ignoring that, you also need more transistors for higher-frequency operation. Each transistor can only drive so much current, so for anything larger than a few transistors, or anything that fans out to more than one cell, you'll lose the benefit of the faster frequency because you'll need multiple driving transistors to get your signals out.
2. Layout is significantly easier at lower frequencies. Simple as that. The lower your frequency, the less you have to worry about stub length, via placement, and a variety of other implementation challenges. With an embarrassingly parallel workload like that of a GPU, where you can easily scale to use more cores, why wouldn't you just make more of a design you already know works, instead of trying to shrink it by a few micrometers to save a little cash?
3. Power loss increases dramatically at higher frequencies. Not only do you have gate-drive losses (modeled by the usual 1/2·C·f·V^2), but you also have commutation losses from the transistors (no transistor can switch instantly from blocking voltage to passing current, so there's an additional power loss during every transition). Put together, power loss due to switching grows faster than 1:1 with frequency, so higher frequencies get harder and harder to justify: they require more power to run, and waste more of it as well.
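Point 3 can be sketched numerically: gate-drive loss is the 1/2·C·f·V^2 term, and each edge adds a crossover loss while the device is partially on. Every constant below is invented, and the voltage-vs-frequency curve is a guess, purely to show the superlinear growth:

```python
# Sketch of why switching loss grows faster than linearly with f.
# Gate-drive loss:   0.5 * C * f * V^2
# Crossover loss:    energy dissipated per edge while the transistor
#                    is half-on, times edges per second.
# Every constant here is made up for illustration.

def switching_loss_w(f_hz, v_volts, c_farads=2e-9,
                     t_cross_s=50e-12, i_peak_a=0.5):
    gate = 0.5 * c_farads * f_hz * v_volts ** 2
    crossover = 0.5 * v_volts * i_peak_a * t_cross_s * f_hz
    return gate + crossover

for f_mhz in (500, 1000, 2000):
    f = f_mhz * 1e6
    v = 0.7 + 0.3 * (f / 1e9)  # assume V must rise with f to meet timing
    print(f"{f_mhz:>4} MHz @ {v:.2f} V -> {switching_loss_w(f, v):.3f} W")
```

With these made-up constants, each doubling of frequency roughly triples the loss once the required voltage bump is included, which is the faster-than-1:1 growth described above.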
SRAM cells are basically latches (cross-coupled inverters). SRAM is rarely used as main memory, though, because it's not very dense. DRAM is 4-6x as dense; it uses what is basically a capacitor plus a single transistor to hold a charge for a while. The caps run out of juice after a short while, so they have to be refreshed periodically, hence the "dynamic" in DRAM.
AMD has been beaten up badly for like 5 years now; even if I want to build a cool AMD rig, I can't bring myself to get an inferior product for the processor.
I have a friend who's a fan and supports them just because he likes the little guy, and he gets an all-AMD system.
The actual problem, I think, isn't that AMD is failing to deliver to its fans; it's that it isn't good at making new fans in mobile or other markets, like Nvidia did with Tegra, or striking deals with Audi and the like to be in their GPS units.
AMD's only real moneymaker right now is that they have the CPU/GPU hardware in all the consoles. And I can see consoles themselves possibly dying off, as what people want has changed drastically in the past 10 years.
AMD just... doesn't adapt, or create and fail (or sometimes win) in new areas, like I've seen Intel or Nvidia do.
My 980 feels so different from previous flagships I've had, where I had to sacrifice tons of power and heat for performance; it boosts itself to 1536 on the core in a Valley bench while sitting at 70°C and nominally requiring a 500W PSU, easily passable with even a 450W...
It's like AMD is stuck in the late 2000s, with insane and ever-increasing power and heat requirements on flagships.
I had a 7970, and their driver support was horrid at timely releases for new games, and I see people can't even play GTA 5, a GIANT new game, with CrossFire enabled on their last-gen dual-GPU flagship, the 7990. Meanwhile, I see several people with a 690 doing just fine on it. It's not like AMD isn't trying, but they clearly lack the money to employ enough people to make fixes like this happen quickly.
Anyway, I hope Zen is awesome and AMD comes back, because lately, every time I try to give them the benefit of the doubt (or can't), it ends up a negative either way.
>1. New core design with at least 50% IPC improvement
>2. New cache (AMD weak point) and interconnect (HyperTransport is now a bottleneck)
>3. SMT (it took Intel years to get this right)
>4. DDR4 controller (again, AMD weak point)
>5. New instructions
lel good luck with that
And they're only targeting the server market in 2016; consumers will get theirs in 2017 at best, and they'll be much weaker than Intel's offerings >>47746224