A lot of people hear that the N64 was difficult to program for, but might not understand why. On paper, the system seems fairly simple: it's only got three components (CPU, GPU, one allotment of unified RAM), and no crazy parallel processing systems like the Saturn or PS3.
I think a lot of people looked too deeply into the type of RAM (or the texture cache), but the answer is really much simpler. And it's in a place nobody speculating thought to look.
I found a forum post by one of the lead programmers of Burnout 3:
>the problem wasn't the type of memory on the N64, but rather the fact that the memory controller was on the GPU and it gave priority to rendering for bus accesses. So the CPU was always sitting there waiting, and waiting and waiting.
he also says in another post that the reason gamecube didn't get burnout 3 was because the console wasn't cut out for it, but that's not a /vr/ discussion
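The arbitration problem in that quote can be sketched with a toy model (purely illustrative: invented queue lengths and a one-request-per-cycle bus, not real N64 timing):

```python
# Toy model of a priority bus arbiter like the quote describes:
# rendering requests always win the bus, so CPU requests pile up.
from collections import deque

def run_arbiter(gpu_requests, cpu_requests, cycles):
    """Each cycle one request is serviced; the GPU always has priority."""
    gpu = deque(gpu_requests)
    cpu = deque(cpu_requests)
    cpu_wait = 0
    for _ in range(cycles):
        if gpu:
            gpu.popleft()         # GPU wins the bus this cycle
            cpu_wait += len(cpu)  # every queued CPU request stalls
        elif cpu:
            cpu.popleft()
    return cpu_wait

# Heavy rendering load: the CPU sits in the queue almost the whole time.
print(run_arbiter(gpu_requests=[0] * 90, cpu_requests=[0] * 10, cycles=100))
```

Under this (made-up) load, the CPU's ten requests rack up 900 stall-cycles before they ever get serviced, which is the "waiting, and waiting and waiting" in a nutshell.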
Yeah I know and I actually like hardware discussions that avoid fanboyisms, sorry for being a bit of an ass with my previous post.
Don't really know that much about the N64; apparently the cuts were made to the memory busses, and that made it very tricky to arbitrate all the traffic in an efficient way. I should re-read that thread sometime. The bad thing is that apparently at some point it got moved from one sub-board to another and the fanboys got in, polluting the discussion and almost completely dismissing pages and pages of what was posted before.
The only thing the DC had in ROM was its BIOS and the console menu. As you said, WinCE would be loaded from disc, but it was more like a collection of libraries (DirectX) than a full-blown Windows (it had some OS stuff like process management, but very compact). Most of the games didn't use WinCE anyway, they used SEGA's libs which usually performed quite a bit better. WinCE was there to aid porting games from PC mostly.
Essentially, the N64 had a first-gen true 3D GPU.
The chip was fairly powerful, but since it was an early form of 3D accelerated graphics, not all of the functions had been standardized yet.
As the years went by, new tricks or ways to use features were discovered, and in some instances, quirks in how the chip operated were employed by the programmers, like how the bugged filters were (ab)used on the SID chip.
You can even see how the graphics in games improved over the life of the system.
>he also says in another post that the reason gamecube didn't get burnout 3 was because the console wasn't cut out for it, but that's not a /vr/ discussion
What? I thought the Gamecube had more horsepower than the PS2, and was easier to program. The only reason I can see is a lack of space on the disc.
the Gamecube is absolutely better hardware than the PS2 in practice. Perhaps not on paper/theoretically (as that tends to be bullshit anyway), but real world perf? Gamecube smoked the Xbox & PS2.
Simply compare Resident Evil 4 if you'd like a glaring example.
>Why would a bus need access into a home console?
Direct Memory Access. The N64 could not directly move data between the CPU and RAM. All data had to go through the RCP to get from the CPU to RAM.
Fear of stranger danger reached a new height that day.
I do believe him because he provided a technical explanation. It's to do with vector formats. Gamecube's CPU (Gecko) only supported the vector format called paired singles. Supposedly paired singles aren't very good at what they were trying to do in Burnout 3. It's not that Gecko is bad, it's just not suited to this kind of programming.
Dreamcast is arguably the first console capable of doing 3D graphics with competent image quality, true, but that wasn't his point. He was talking about GPUs.
N64's GPU (SGI Reality Co-Processor) actually has more in common with modern GPUs than Dreamcast's GPU (PowerVR2) does. It's primarily because RCP has support for programmable hardware T&L (aka vertex shaders) and PowerVR2 doesn't (even though it's more powerful).
RE4 was originally supposed to be a GC exclusive from what I recall, and I wouldn't rule out Nintendo slipping Capcom some money to make the PS2 version look worse. Then again, it's more likely RE4 was just heavily optimized for the GC architecture. It probably would've taken too much effort to retool it for the PS2, despite the potential for profit.
I'm guessing the BO3 team didn't feel like reoptimizing their code to account for the GC's differences then, since it had a much smaller install base, and therefore less potential for profit. I would have loved to see BO3 on the GC, but as it is, I had tons of fun with it on the Xbox.
The programmer seems to believe that no amount of optimization would have enabled a Gamecube port to run on-par with PS2. The reason for this is really down to the hardware architecture.
Gecko isn't really that great at vector-based floating point (due to the half-hearted paired single implementation); at this it's a lot worse than the Emotion Engine (which specializes in it) and a fair bit worse than a Pentium 3 (which is an all-rounder). On the other hand it's fantastic at general processing (integer); a little better than even a Pentium 3, and it beats the crap out of the Emotion Engine.
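To make the paired-singles point concrete, here's a toy Python sketch (op counts are illustrative, not cycle-accurate, and the function names are made up) of why a 2-wide unit burns more instructions than a 4-wide one on the same 4-element dot product:

```python
# Toy model: a 4-element dot product on a 2-wide (paired-single)
# unit vs a 4-wide vector unit. Op counts are illustrative only.

def dot4_paired_singles(a, b):
    """Gecko-style: operate on pairs of floats at a time."""
    ops = 0
    lo = (a[0] * b[0], a[1] * b[1]); ops += 1    # paired mul, first pair
    hi = (a[2] * b[2], a[3] * b[3]); ops += 1    # paired mul, second pair
    s = (lo[0] + hi[0], lo[1] + hi[1]); ops += 1 # paired add
    result = s[0] + s[1]; ops += 1               # horizontal add
    return result, ops

def dot4_quad_vector(a, b):
    """EE VU-style: one 4-wide multiply, then accumulate."""
    ops = 0
    prod = tuple(x * y for x, y in zip(a, b)); ops += 1  # 4-wide mul
    result = sum(prod); ops += 1                          # accumulate
    return result, ops

a, b = (1.0, 2.0, 3.0, 4.0), (5.0, 6.0, 7.0, 8.0)
print(dot4_paired_singles(a, b))  # (70.0, 4)
print(dot4_quad_vector(a, b))     # (70.0, 2)
```

Same answer, twice the instruction traffic on the 2-wide path, and that gap compounds across every vertex and physics calculation per frame.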
Usually vector-based floating point calculations are done to transform polygons, so the Gamecube can do it on the GPU (Flipper). The problem is that Flipper is a fixed unit, it's only designed to transform polygons, it can't process other things like physics (even if it would theoretically have the raw power to do so). So in the particular situation of Burnout 3, the Gamecube is in a bit of a quandary.
Ironically, the N64's GPU has a fully programmable vector unit, which is designed to be used for multiple different tasks (processing graphics and decoding audio). Let's say there was a parallel world with a massively cut-down 5th generation port of Burnout 3. The N64 would be able to process physics on the GPU, giving it a massive advantage, while the PS1 would have to do it on the CPU (without the GTE's help, since it's a fixed geometry unit).
Yeah I'm gonna trust the many programmers that state how much more powerful the ps2 is and not some random tard on vr that can't offer a technical explanation. Also
>grand prix challenge
>God of war 2
>gran turismo 5
>metal gear solid 2/3
>ratchet and clank series
Literally nothing on the GC comes close in terms of iq/particle effects/polygons. Ratchet and Clank in particular had 720p and a form of FXAA, while gt5 could do 720p and keep 60fps.
I don't want to partake in console wars but... are you sure you're not mistaking PS2 and PS3 games? GT5 in particular is a PS3 game, and R&C could only reach 480p (the actual framebuffer resolution was around 415p from my studies).
The console could do 1080i though (GT4 is the most popular example) while I don't think any other 6th gen machines attempted that.
I meant GT4. Phone typing sucks. Up Your Arsenal did employ a type of FXAA before the technique was popular. It also had a pseudo 720p mode that was more like 720i. It halved the framerate however. God of War 2 also had a similar 720p mode.
>at least 23 people replied to this childish shit
You're all just as bad as the shitposting loser.
OP: the game is awful, as you know if you actually played through it. Not worstgameevergodawful, just standard N64 awful. It was insanely overhyped and every gaming rag on the planet had at least a two-page spread hyping it up, so that made it even worse.
Why didn't you just formulate your opinion from a youtube "review" like most of the /v/ children who have ruined this board do? You would have saved yourself some time.
I see, so the difference between Gecko and EE is kind of like the difference between the K6-2 and Pentium II. It's a shame Nintendo gimped the Gamecube by using a fixed pipeline GPU, because otherwise it wouldn't have ended up in this quandary. Oh well, I owned and loved all three of the main 6th gen systems.
Now, that all being said, how did Metroid Prime and StarFox Adventures look so damn good? They beat the tar out of most PS2 games, that's for sure.
>I see, so the difference between Gecko and EE is kind of like the difference between the K6-2 and Pentium II
Yeah, but taken to a much more extreme extent. If you want to do FP calculations on Gamecube that don't involve polygon T&L, you've only got Gecko (which ain't even that great at it). On PS2, you've got three processor cores: MIPS-FPU, VU0 and VU1, all built inside the EE.
>It's a shame Nintendo gimped the Gamecube by using a fixed pipeline GPU
Actually there's a funny story about this. ArtX, the developers of Flipper, were made up of the same individuals who did the N64's GPU. Before embarking on creating Flipper, they surveyed N64 developers about what they didn't like about the RCP. Some of the responses were that the RCP's microcode programmability made development too complicated, the texture cache was too small, memory latency was too high, and the z-buffer ate away at valuable framebuffer space (which meant that without the Expansion Pak only lower resolutions were possible).
Flipper was basically designed in response to this feedback. The programmability was removed and turned into a fixed pipeline, the texture cache was made large and given texture compression support, the embedded memory was 1T-SRAM which specialized in low latencies (but not particularly massive bandwidth), and a set amount of memory was dedicated to the framebuffer (which was fixed to a conservative 24-bit color) to ensure high resolutions were easy to implement. Most of these changes were received positively, but some developers thought ArtX went overboard, and that too much raw power was given up in return for ease-of-use. In an enormous bout of irony, Nintendo's next-gen console became more like the PS1, and Sony's next-gen console became more like the N64.
>Now, that all being said, how did Metroid Prime and StarFox Adventures look so damn good?
Because Flipper is incredibly good at rendering pretty high-screen-res, high-texture-res anti-aliased pictures.
In other words, Nintendo designed the Gamecube for the past, not for the future. Lovely.
Thinking about it now, I think this explains why the Gamecube/Wii are easier to emulate than the PS2 or the N64.
I think it would be accurate to say it was two steps forward and two steps back. It's not like the PS2 was particularly forward-looking either. The PS2's GPU has a unique design that was not adopted anywhere else. It relies on a large array of simple pipelines that are supposed to be arranged in passes to create special effects. Pretty much every other GPU to the modern day uses a smaller number of complex pipelines that pre-process pixels.
The N64 was the ridiculously forward-looking one though. Too much, actually. Having an el cheapo memory bus design didn't help things either.
The science behind doing 3D graphics was already known; the problem is doing it cheaply. The N64 is what happens when you try to stick super powerful SGI hardware into a cheap console. There were too many sacrifices.
>I wouldn't rule out Nintendo slipping Capcom some money to make the PS2 version look worse
They hit technical limits in the PS2 on polygon count. On the GameCube, Leon's model consisted of 10,000 polygons and villager models were 5000. On PS2, Leon's model was 5000 polygons. For that matter, Snake in MGS3 for PS2 was also 5000 polygons.
there were many third party games that were better on Gamecube/Xbox than PS2.
Situations like this
"Vexx received mixed reviews from critics. Aggregating review websites GameRankings and Metacritic gave the Xbox version 72.88% and 70/100, the GameCube version 70.84% and 71/100 and the PlayStation 2 version 63.73% and 63/100."
GameCube hit 20M real-world on-screen polygons per second in Rogue Squadron 3. That's more than any PS2 or Xbox game ever.
MGS3 on PS2, one of your examples, has up to 5 low-poly enemies on screen, plus a 5000-poly main char. RE4 on GCN has up to 7 high-poly enemies on screen, plus a 10,000-poly main char.
PS2 has extremely high RAM bandwidth but there is very little of it. GCN has much more RAM and 6:1 texture compression. So not only does the GCN push more polygons, they look better.
The PS2 rendering pipeline needs multiple passes to produce effects close to what the GCN can do in a single pass. If you want to port anything from GCN or Xbox down to PS2, it's going to be extremely inefficient and the results are going to look quite bad with bad performance. PS2 really hamstrung the generation when it came to ports, since it had the biggest marketshare and was the primary target platform for most cross-platform games. You made a game for PS2 that had ok performance, then you ported it up to GCN and Xbox and all of a sudden the performance is so good that you'd have to produce higher-quality assets or re-write AI and game mechanics to take advantage of all the extra cycles you have at your disposal. Few developers did that though, which is a huge shame.
PS2 was a powerhouse, don't get me wrong, but it was by far the weakest of three powerhouses.
PS2 can definitely do more polygons per second than Gamecube. Emotion Engine can transform and light a greater number of polygons than Gecko + Flipper, while Graphics Synthesizer also has greater fill rate than Flipper.
I think you're making the rookie mistake of just looking at polygon counts and discounting lighting (which is calculated in the same place as polygons - you literally have to sacrifice polygons for better lighting and vice versa). I've seen this kind of thing happen with N64 vs PS1 comparisons (e.g. Banjo-Tooie has up to two dynamic light sources, and Spyro has zero). If you look carefully, MGS3 is doing a lot of fairly sophisticated lighting (not to mention a lot of animation, like moving individual blades of grass), while RS3 only has a single light source and, being a spaceship game, not much animation.
That being said, the Gamecube does have advantages. It's a lot easier to use than the PS2. The anti-aliasing isn't broken. You can achieve higher resolutions easily. The texturing systems are quite excellent. But those advantages don't extend to greater polygon/lighting counts.
are you referring to theoretically higher numbers, or is there a game on PS2 that actually hits a higher poly count than GameCube?
I'm not aware of a completed, released PS2 game that displays more polys on screen than GameCube.
>or is there a game on PS2 that actually hits a higher poly count than GameCube?
I would say most PS2 exclusives released towards the end of the system's life would have a higher polygon + lighting count than Gamecube exclusives released towards the end of that system's life.
If going by polygons alone, RS3 on Gamecube probably would have the advantage (but that's mostly due to the kind of game it is). Flipper's fixed T&L pipeline was better at transforming polygons than it was at lighting them. You could also light polygons using Flipper's TEV pixel shader (Resident Evil 4 does this), but this comes at a fairly heavy fill-rate price, albeit one with diminishing marginal cost (1 layer is free, 2 layers a 30% penalty, 8 layers a 70% penalty), which is probably why RE4 is letterboxed. The reason the PS2 version of RE4 is missing so much lighting is because the system has no direct equivalent to the TEV shaders, and the porting team didn't recreate them with T&L.
>I would say most PS2 exclusives released towards the end of the system's life would have a higher polygon + lighting count than Gamecube exclusives released towards the end of that system's life.
So in other words, not one real world example for the PS2 as of 2012 compared to a 2003 GameCube exclusive.
That's what I suspected.
ROM-hacks revealed the ZX-emulator, but I wonder just how much of the source code we can get.
Also where can I get the files they used to recreate the emulator?
and people say /vr/ will die and become full of shitposters if 6th gen gets let in
this is actually a great thread
i don't know about demanding, but World Driver Championship looks great
6th gen was the last great generation of gaming, IMO. It was the last generation where online wasn't practically mandatory, on-disc DLC was unheard of, and games generally shipped in a finished, working state. Also, it was probably the best generation for splitscreen games.
I think the real reason the N64 was "hard to program for" was because there were literally 5X as many PS1s out in consumers' living rooms. So devs went for the money, lol.
If you think "Well, I could spend X amount of money on this game, and make back like a 6% profit, OR develop a similar game, costing less for a platform that'll almost guarantee at least a 35% profit..." You sort of get what happened a lot of times.
The PS1 had more units out around the globe and was simpler hardware, so dev costs had to be lower. The N64 was a borderline novelty in the world market and only a contender (sort of) in the US. And even that was a fairly pathetic showing at the time. So that's basically what made it "hard".
Not really; PS1 games were easy as fuck to pirate and most of the games were shovelware, even the better ones were demo tier. Devs were scummy lazy shits who oversaturated the market with their bullshit.
This is probably one of the most ignorant posts in this thread, and really showcases your poor understanding of hardware design and how games were written in the 90s on proprietary hardware.
Kotaku article where a developer gives his impressions of various consoles:
>PlayStation 1: Everything is simple and straightforward. With a few years of dedication, one person could understand the entire PS1 down to the bit level. Compared to what you could do on PCs of the time, it was amazing. But, every step of the way you said "Really? I gotta do it that way? God damn. OK, I guess... Give me a couple weeks." There was effectively no debugger. You launched your build and watched what happened.
>N64: Everything just kinda works. For the most part, it was fast and flexible. You never felt like you were utilizing it well. But, it was OK because your half-assed efforts usually looked better than most PS1 games. Each megabyte on the cartridge cost serious money. There was a debugger, but the debugger would sometimes have completely random bugs such as off-by-one errors in the type determination of the watch window (displaying your variables by reinterpreting the bits as the type that was declared just prior to the actual type of the variable - true story).
>Dreamcast: The CPU was weird (Hitachi SH-4). The GPU was weird (a predecessor to the PowerVR chips in modern iPhones). There were a bunch of features you didn't know how to use. Microsoft kinda, almost talked about setting it up as a PC-like DirectX box, but didn't follow through. That wouldn't have worked out anyway. It seemed like it could be really cool. But man, the PS2 is gonna be so much better!
>The PS2 had a convoluted architecture as well, but the devs were willing to squeeze every bit out of it they could just because they knew that that's where the money was.
This is not acknowledged enough on this board; the PS2 was insanely difficult to work with when released. A tiny but fast VRAM meant that you had to rethink your texture-streaming architecture entirely. Developers commented that it was hard to keep the VUs fully pumped.
But the sheer hype and momentum behind the PS2, which gave it a large install base, ensured that developers stuck with the system until the tools/middleware were up to scratch.
Had the PS2 not had such competitor-drowning hype, devs would have opted for DC or XBox as the easiest system to develop for.
>This is probably one of the most ignorant posts in this thread
Not that guy, but I agree with him because it's along the lines of what the lead programmer for World Driver Championship said, which was that there were very few top-tier technical teams working on N64. He said the system's biggest disadvantage in technical terms wasn't the hardware, but that very few bothered to use it well because the money wasn't there (which was not just the developers' fault; Nintendo contributed with its poor technical support).
He also said that the N64 texture cache issue (which, contrary to popular belief, isn't just about its size - remember the PS1's is half as big) disadvantaged artists just as much as programmers, because it required them to work with specific limitations that nobody could be bothered optimizing for.
>What was the most hardware-demanding game on the N64?
Probably a tie between Conker and World Driver Championship. The former pushed lighting capabilities as far as the system could go (up to 4 dynamic lights, which is better than most Dreamcast games) and the latter pushed polygon capabilities as far as the system could go (twice Ridge Racer Type 4, and 20% more than Crash Bandicoot).
>The PS2 had a convoluted architecture as well
PS2's architecture is a lot more convoluted than the N64's, but you also have a lot more control over the system. The N64 was hard to get performance out of not because its architecture was convoluted (it's not really more so than the PS1's) but because system performance is unpredictable and difficult to optimize (because, as the OP post says, the memory controller in the GPU gives priority to itself rather than the CPU).
Just want to follow up on this post. Got some interesting facts about N64 and Dreamcast.
Dreamcast's CPU was only theoretically 2.8x more powerful than N64's CPU, and Dreamcast's T&L unit was also theoretically only 2.8x more powerful than N64's T&L unit (although N64's unit also had to process sound, Dreamcast's didn't). As you can see, for a next generation "jump" this doesn't seem very big, but fair enough for the 2.5 year release gap.
Dreamcast's big advantage over the N64 was really two things: first, the memory architecture wasn't cheap and rubbish like on N64 which allowed each component to perform closer to their theoretical numbers; second, Dreamcast's texturing / rasterizing unit had about 7 times the fill rate of the N64's equivalent due to its innovative tile rendering system. As the N64's limit was generally its fill rate (well, a lot of systems are like this), this was a massive advantage and one true area demonstrating a clear generational leap.
>PS1 would have to do it on the CPU (without GTE's help since it's a fixed geometry unit).
As I recall you could do matrix ops on the GTE, so it could have been used to enhance your code, however then you'd sacrifice some of your polygon transforming capacity.
Gamecube really has only three theoretical hardware advantages over PS2 (excluding ease of programming, that's obviously a clear win for Gamecube).
First is texturing: an unequivocal win over the PS2, due to both hardware decompression support and the texture loopback feature (i.e. it allows for more than one texture per texture unit per pass; not even the Xbox supported something cool like this).
Second is hardware anti-aliasing support. Quite simply it was utterly broken on PS2 silicon and could only be replicated with inconvenient tricks.
Thirdly is display resolution. PS2's small VRAM + no texture compression + anti-aliasing tricks which took up memory (e.g. SSAA) really made sure the PS2 couldn't usually display at a 640x480 resolution because the damn thing just wouldn't fit into whatever was left of VRAM.
In virtually every other technical way than what I listed above (namely software floating point calculations / polygon / lighting / fill rate) the PS2 is theoretically superior.
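The VRAM squeeze in the third point is easy to check with back-of-envelope math. A rough Python sketch, assuming double-buffered 32-bit color plus a 32-bit Z buffer and ignoring GS page-alignment details:

```python
# Back-of-envelope VRAM budget for the PS2's 4 MB of GS memory.
# Assumes front + back + Z buffers all at full resolution, 4 bytes
# per pixel each; real games varied these formats.

VRAM = 4 * 1024 * 1024  # 4 MB embedded GS memory

def framebuffer_bytes(w, h, bpp=4):
    return w * h * bpp

def budget(w, h):
    front = framebuffer_bytes(w, h)
    back  = framebuffer_bytes(w, h)
    zbuf  = framebuffer_bytes(w, h)  # 32-bit Z
    used = front + back + zbuf
    return used, VRAM - used  # (bytes used, bytes left for textures)

print(budget(640, 480))  # ~496 KB left for textures: painful
print(budget(512, 448))  # ~1.4 MB left: breathing room
```

Under these assumptions, a full 640x480 setup leaves only about half a megabyte of VRAM for everything else, which is exactly why dropping the horizontal resolution was so attractive.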
Dreamcast didn't have a T&L unit though. It did that in software.
And the thing about the GPU fill rate on the DC was that it was actually pretty low, but it didn't need to be high, because it only rendered what was on the screen. It didn't need to render crap that would be hidden away by another polygon. That saved a shitload of bandwidth.
So much so that DOA2 (a launch title!) was already reaching the quoted maximum number of polygons the system was supposed to be able to do.
>Thirdly is display resolution. PS2's small VRAM + no texture compression + anti-aliasing tricks which took up memory (e.g. SSAA) really made sure the PS2 couldn't usually display at a 640x480 resolution because the damn thing just wouldn't fit into whatever was left of VRAM.
That was until they figured out how to use the VRAM as nothing but framebuffer + texture cache. Since you had memory bandwidth reminiscent of 7th gen consoles, you could get away by texturing from main memory. Once they figured that out, 480p was more common.
>Dreamcast didn't have a T&L unit though. It did that in software.
The CPU had a fixed pipeline T&L co-processor (similar to GTE on PS1). If you meant to say that the GPU didn't have hardware T&L, then you would be right, but the CPU itself definitely had hardware T&L.
>it was actually pretty low, but it didn't need to be high because it only rendered what was on the screen. It didn't need to render crap that would be hidden away by another polygon. That saved a shitload of bandwidth.
Sort of. You could also cull backfaces on the PS2, but because it had so much fill it was actually faster to just render them. Essentially this just meant that the backface "penalty" was on the fill side for the PS2 and on the T&L side for the Dreamcast. So not an outright win for Dreamcast, considering its T&L capabilities are not outstanding (but yes, it did mean that it had plenty more fill to spare!).
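For reference, the backface test itself is just a winding check on the projected triangle. A minimal Python sketch of the screen-space version (illustrative only; neither GPU literally runs this code, and the DC's tiler additionally hides surfaces occluded by other polygons, which is a separate mechanism):

```python
# Toy backface test in screen space: a triangle wound clockwise
# after projection is facing away, so it can be skipped before
# rasterization -- or drawn anyway if fill is cheap (PS2-style).

def signed_area_2x(p0, p1, p2):
    """Twice the signed area of the projected 2D triangle."""
    return ((p1[0] - p0[0]) * (p2[1] - p0[1])
          - (p2[0] - p0[0]) * (p1[1] - p0[1]))

def is_backfacing(p0, p1, p2, front_is_ccw=True):
    area2 = signed_area_2x(p0, p1, p2)
    return area2 < 0 if front_is_ccw else area2 > 0

print(is_backfacing((0, 0), (10, 0), (0, 10)))  # False: CCW, front-facing
print(is_backfacing((0, 0), (0, 10), (10, 0)))  # True: CW, back-facing
```

The check is cheap per triangle, which is why the trade-off discussed above is about where the skipped work lands (T&L vs fill), not about the cost of the test itself.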
>That was until they figured out how to use the VRAM as nothing but framebuffer + texture cache.
Yeah, but that's what you are supposed to do anyway. I was including that in my explanation. Even those progressive scan PS2 games don't tend to be actually 640x480p; usually something like 512x480p native.
If you want anti-aliasing on the PS2, you've got to do something like make a 640x480 framebuffer and then downsample the display to 512x480 for the supersampling effect. Either that or consume fill on multipass edge AA.
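The 640-to-512 downsample works out to a 5:4 box filter along each row. A small Python sketch of the idea (illustrative only; the real hardware does this with blending circuitry, not per-pixel loops):

```python
# Box-filter 5 source pixels into 4 output pixels: each output
# pixel covers 1.25 source pixels, blending neighboring values.
# Repeating this per group of 5 turns a 640-wide row into 512.

def downsample_5_to_4(five):
    """Weighted average of 5 pixel values down to 4."""
    out = []
    for x in range(4):
        start, end = x * 1.25, (x + 1) * 1.25
        total = 0.0
        i = int(start)
        while i < end:
            # fraction of source pixel i covered by output span [start, end)
            w = min(i + 1, end) - max(i, start)
            total += five[i] * w
            i += 1
        out.append(total / 1.25)
    return out

print(downsample_5_to_4([100, 100, 0, 100, 100]))
# -> [100.0, 60.0, 60.0, 100.0]: the hard edge gets blended
```

The blending of neighboring samples at the edges is the supersampling effect: jagged transitions in the 640-wide render come out softened in the 512-wide display.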
>The CPU had a fixed pipeline T&L co-processor (similar to GTE on PS1).
It did? Do you have some specs on it?
>Even those progressive scan PS2 games don't tend to be actually 640x480p, usually something like 512x480p native.
Yeah, I heard that's what Sony recommended.
Gradius V at the very least used 640x448p for sure, though. But that was made by Treasure in their prime.
>It did? Do you have some specs on it?
There's very little info about it other than being a vector unit with a theoretical limit of 1.4 GFLOPS (in practice less of course). At least it was floating point. Gamecube's T&L unit doesn't even do floating point (much like N64). They both crunch extra integers to compensate for possible inaccuracy.
>Gamecube's T&L unit doesn't even do floating point (much like N64). They both crunch extra integers to compensate for possible inaccuracy.
Whoops ignore this part about Gamecube. I was thinking of something else. It definitely does floating point.
N64 doesn't though (except on CPU).
>There's very little info about it other than being a vector unit with a theoretical limit of 1.4 GFLOPS (in practice less of course).
That sounds to me like it is not a separate unit in the CPU, but instead the CPU has special instructions that allow it to do such math. Like the MMX instructions in Pentiums, just better.
>The old Rare guys are pretty cool.
They need to finish their Conker's BFD directors commentary.
>That sounds to me like it is not a separate unit in the CPU, but instead the CPU has special instructions that allow it to do such math. Like the MMX instructions in Pentiums, just better.
Yes, having found the datasheet for the SH-4 it seems that that is correct.
I was under the impression that the SH-4's floating point unit was specifically geared towards T&L, but in fact T&L is done fully in software, simply with some special vector instructions.
So really, it operates in a similar way to how a contemporary Pentium-with-Voodoo-card PC would have functioned (but with better use of special instructions).
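For the curious, the SH-4's headline vector instruction is FTRV, which multiplies a 4-vector by a 4x4 matrix held in a dedicated register bank (one vertex transform per instruction). A rough Python equivalent of what it computes:

```python
# Sketch of what an SH-4 FTRV-style instruction computes in one
# shot: 4x4 matrix times 4-vector, i.e. one vertex transform.
# (On real hardware the matrix lives in the XMTRX register bank.)

def ftrv(m, v):
    """4x4 matrix * 4-vector, row-major."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

# A translation matrix moving the origin to (1, 2, 3):
m = [
    [1, 0, 0, 1],
    [0, 1, 0, 2],
    [0, 0, 1, 3],
    [0, 0, 0, 1],
]
print(ftrv(m, (0, 0, 0, 1)))  # (1, 2, 3, 1)
```

That's the whole trick: still software T&L driven by the CPU, but one instruction doing the work of sixteen multiplies and twelve adds, which is why the SH-4 punched above its weight at geometry.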
it outputs in 240p sometimes.
or at least my PAL N64 outputs in 288p with quite a few games.
Notably Pokemon Stadium 2 switches between 576i and 288p for menus and battles respectively.
The PS1 was an easier system to develop for than the N64 or the Saturn. When you have an easier system to develop for, you're going to have a lot more games, and inevitably, a lot more crap. But, this also means you're going to have a bigger selection of good titles. That's just how it goes.