I remember trying to learn Cell programming in 2006 using IBM’s own SDK (possibly different and less polished compared to whatever Sony shipped to licensed PS3 developers).
I had already spent a few years writing fragment shaders, OpenGL, and CPU vector extension code for 2D graphics acceleration, so I thought I’d have a pretty good handle on how to approach this new model of parallel programming. But trying to do anything with the SDK was just a pain. There were separate incompatible gcc toolchains for the different cores, separate vector extensions, a myriad of programming models with no clear guidance on anything… And the non-gcc tools were some hideous pile of Tcl/TK GUI scripts with a hundred buttons on the screen.
It really made me appreciate how good I’d had it with Xcode and Visual Studio. I gave up on Cell after a day.
ryandrake 9 days ago [-]
Yea, this was the horrible world of embedded programming and working with SoCs before the iPhone SDK finally raised the bar. BSPs composed of barely-working cobbled-together gcc toolchains, jurassic-aged kernels, opaque blobs for flashing the devices, incomplete or nonworking boot loaders, entirely different glue scripts for every tiny chip rev, incomplete documentation. And if you wanted to build your own toolchain? LOL, good luck, because every GNU tool needed to be patched in order to work. It was a total mess. You could tell these companies just made chips and reference systems, and only grudgingly provided a way to develop on them. iPhone and Xcode were such a breath of fresh air. It pulled me out of embedded and I never went back.
FirmwareBurner 9 days ago [-]
>Yea, this was the horrible world of embedded programming and working with SoCs before the iPhone SDK finally raised the bar.
The iPhone SDK only raised the bar for the mobile industry; the rest of the embedded world is still stuck in the stone age.
matheusmoreira 9 days ago [-]
Modern ARM microcontrollers apparently use standard GNU toolchains shipped by my Linux distribution. Developing software for a Cortex M0+ was a really good experience. Lack of a complete device emulator made it hard to debug at times but I dealt with it.
paulryanrogers 9 days ago [-]
Stop! You're giving me flashbacks of developing for Hypercom point-of-sale devices! Never again will I work with alpha software so buggy it only worked from bundled samples! You had to remove the sample's widgets one at a time, testing at each step, then add your own one at a time. Otherwise it would break something and you'd have to start over.
chasil 9 days ago [-]
So this is why so many cash registers are now iPads.
crq-yml 9 days ago [-]
I didn't gain direct experience with Cell, but given that description of the tooling, I'm unconvinced that the issue is fundamental to many-core, or that the author's assertion of non-composability holds up under scrutiny. Composition in "flat" processing architectures is, in principle, exactly what is already seen on a circuit diagram. It recurs in the unit record machines of old, and in modern dataflow systems.
That architecture does have particular weaknesses when it is meant to interface with a random-access-and-deep-callstacks workflow (as would be the case using C++) - and CPUs have accrued complex cache and pipelining systems to cater to that workflow since it does have practical benefit - but the flat approach has also demonstrated success when it's used in stream processing with real-time requirements.
Given that, and the outlier success of some first-party developers, I would lean towards the Cell hardware being catastrophically situated within the software stack, versus being an inherently worse use of transistors.
petermcneeley 8 days ago [-]
The outlier success of some first-party developers indicates that focus and talent were required to demonstrate this exotic hardware's full potential. As they say, this was an "expert-friendly system": expert-friendly because it was complex, and complex because it was heterogeneous.
As for "inherently worse use of transistors" one would have to look at how the transistors could have been used differently. The XBox360 is a different use of transistors.
m000 9 days ago [-]
> It is important to understand why the PS3 failed
That's a weird assertion for a console that sold 87M units, ranks #8 on the all-time list of best-selling consoles, and marginally outsold the Xbox 360 that TFA compares it against.
See: https://en.wikipedia.org/wiki/List_of_best-selling_game_cons...
It’s clear from one of the opening statements that the author considered it a failure for developers, not in the absolute sense you are pointing to. It’s not that far into the article.
> The PS3 failed developers because it was an excessively heterogenous computer; and low level heterogeneous compute resists composability.
dkersten 9 days ago [-]
I’m not even sure that’s entirely true either, though. By the end of the PS3 generation, people had gotten to grips with it and were pushing it far further than first assumed possible. If you watch the GDC talks, it seemed to me that people were happy enough with it by that point (relatively speaking, at least) and were able to squeeze quite a bit of performance out of it. It seems it was hated for the first while of its life because developers hadn’t settled on a good model for programming it, but by the end, task-based concurrency like we have now had started to gain popularity (e.g. see the Naughty Dog engine talk).
Is Cell really so different from compute shaders with something like Vulkan? I feel that if a performance-competitive Cell were made today, it might not receive so much hate, as people today are more prepared for its flavour of programming. Nowadays we have 8 to 16 cores, more on P/E setups, vastly more on workstation/server setups, and we have GPUs and low-level GPU APIs. Cell came out at a time when dual core was the norm and engines still did multithreading by having a graphics thread and a logic thread.
xmprt 9 days ago [-]
Naughty Dog has always been at the forefront of PlayStation development. Crash Bandicoot and Uncharted couldn't have been made if they didn't have a really strong grasp on how to use it. I love rereading this developer "diary" where they talk about some of the challenges with making Crash: https://all-things-andy-gavin.com/video-games/making-crash/
dkersten 9 days ago [-]
Oh yeah, I loved reading that too! They really had to pull some tricks to make everything work so well, but damn did they pull it off!
ElCapitanMarkla 8 days ago [-]
That is a fantastic read. I usually end up stumbling upon and rereading it once a year
MindSpunk 9 days ago [-]
Cell was a failure, made evident by the fact nobody has tried to use it since.
Comparing the SPEs to compute shaders is reasonable but ignores what they were for. Compute shaders are almost exclusively used for graphics in games. Sony was asking people to implement gameplay code on them.
The idea the PS3 was designed around did not match the reality of games. The SPEs were difficult to work with, and taking full advantage of them was very challenging. Games are still very serial programs. The vast majority of the CPU work couldn't be moved to the SPUs as was dreamed.
Very often games were left with a few trivially parallel numerical bits of code on the SPEs, but stuck with the anemic PPE core for everything else.
bdhcuidbebe 5 days ago [-]
Yea, it's not true. 7th gen was the last generation where quirks were commonplace and complete ports/rewrites were still a thing. More recent generations are more straightforward, with simplified cross-console releases.
dgfitz 9 days ago [-]
It just isn’t a solid thesis at the beginning of the article, and in today’s attention-span media consumption narrative, it… serves its purpose?
dcow 9 days ago [-]
The PS3 was a technical failure. It was inferior to its siblings despite having more capable hardware. This was super obvious any time you’d play a game available for both Xbox and PS3. The PS3 version was a game developed for Xbox then auto-ported to run on PS3’s unfamiliar hardware. It’s an entirely fair hypothesis.
Maybe in 15 years someone crazy enough will be delving in and building games that fully utilize every last aspect of the hardware. Like this person does on the N64: https://youtube.com/@kazen64?si=bOSdww58RNlpKCNp
Cloudef 9 days ago [-]
Heck, the PS3 even had trouble with some PS2 remasters because it couldn't do the same graphical effects as the PS2 with its insane fillrate. MGS had frame drops during rain and the rain looked worse, etc.
nottorp 9 days ago [-]
> The PS3 version was a game developed for Xbox then auto-ported to run on PS3’s unfamiliar hardware.
Yes, especially exclusives like Uncharted, Demon's Souls, Heavy Rain, Ni no Kuni, Metal Gear Solid 4. They were definitely developed for Xbox; that version was kept secret and only the PS3 version was published due to Sony bribes.
I'd like to thank the above devs for going through the pain of developing those titles that entertained me back then...
dcow 9 days ago [-]
You joke, but I read stories at the time of developers saying their studio did just that. I shit you not. Because developing a “traditional” PC-like game for PC-like hardware on a PC is what their teams were tooled to do. Studios didn't sit down and train up new teams of devs on how to maximize the Cell architecture. The result is games that are very poorly optimized for the PS3. Even some of the exclusives (though I was only specifically calling out the cross-console games as an obvious example).
There were some gems. I owned a PS3 and had tons of fun. Nothing in this discussion speaks directly to the entertainment value of good Sony exclusives or cross-console ports. Many people don’t care one ounce about a minor performance deficit. Deep breath.
masklinn 9 days ago [-]
> Studios didn't sit down and train up new teams of devs on how to maximize the cell architecture.
Yeah usually only first party studios have the time, money, and access for that (Naughty Dog being a prime example thereof in the Sony universe).
wtallis 9 days ago [-]
Your snark would be inappropriate even if you weren't completely ignoring the sentence immediately preceding the one you quoted.
nottorp 9 days ago [-]
Maybe I'm just pointing out the world is full of crap ports but it's the exclusives that sell consoles, not the hardware specs... did I really have to spell it out?
Lammy 9 days ago [-]
> They were definitely developed for Xbox, that version was kept secret and only the PS3 version was published due to Sony bribes.
You forgot to quote (and maybe read?) this part of the parent comment:
"a game available for both Xbox and PS3."
The "both" part means they aren't talking about exclusives.
jchw 9 days ago [-]
The PS3 maybe wasn't a failure in the long run, but at launch it was a disaster all around. Sony was not making a profit on the PS3, and the initial sales at its initial price were not looking good[1]. With the Wii as its primary competitor, the Wii absolutely smashed the PS3 at launch and for a long while after, and it still maintains the lead. Sony mainly kept the competition close by slashing the price and introducing improved models, but in the long run I think the reason why their sales numbers managed to wind up OK is because they held out for the long haul. The PS3 continued to be the "current-gen" Sony console for a long time. By the time Sony had released the PS4 in late 2013/early 2014, Nintendo had already released its ill-fated Wii U console an entire year earlier in late 2012. I think what helped the PS3 a lot here was the fact that it did have a very compelling library of titles, even if it wasn't a super large one. As far as I know, Metal Gear Solid 4 was only released for PlayStation 3; that stands out to me as a game that would've been a console-seller for many.
So while the PS3 was not ultimately a commercial failure, it was clearly disliked by developers and the launch was certainly a disaster. I think you could argue the PS3 was a failure in many regards, and a success in some other regards. Credit to Sony, they definitely persevered through a tough launch and made it out to the other end. Nintendo wasn't able to pull off the same for the Wii U, even though it also did have some good exclusive games in the library.
While the PS3 has a soft spot in my heart (free online multiplayer!) I can't help but wonder if the subpar launch gave Microsoft Xbox a leg up in the race where otherwise the Xbox 360 might have been the last console in their lineup.
[1]: https://web.archive.org/web/20161104003151/http://www.pcworl...
bombcar 9 days ago [-]
Everyone I knew had a Wii. But they mostly had a 360 as that generation's “real” console. In fact, I can’t recall anyone who had a PS3.
Lots of PS2.
mavamaarten 9 days ago [-]
Probably very dependent on location and your friend group. In my case, everyone I knew had a PS3 and I was pretty much the only one with an Xbox360. Only some had a Wii.
Before that, only PS1's and PS2's were around.
(Belgium)
bombcar 9 days ago [-]
By country, too. The USA was much stronger on the 360 than most other countries.
kbolino 9 days ago [-]
The Sony-Toshiba-IBM alliance had much grander plans for the Cell architecture, which ultimately came to naught. The PS3 wasn't just a console, it was supposed to be a revolution in computing. As a console, it did alright (though it's still handily beaten by its own predecessor and marginally by its own successor), but as an exponent of the Cell architecture that was supposed to be the future, it failed miserably. Sony yanked OtherOS a couple of years into its life, and while a Cell supercomputer was the first to break the petaflop barrier, it was quickly surpassed by x86 and then Arm.
dfxm12 9 days ago [-]
There are a lot of different measures. The Wii (the 7th gen Nintendo console) outsold it considerably, as did the 6th gen PS2 (which far and away beat out all other consoles in its generation).
Going from such market dominance to second place is not good. Not being able to improve upon your position as the industry leader is not good. Failure might be strong, but I certainly wouldn't be happy if I was an exec at Sony at the time.
tekla 9 days ago [-]
Sales wasn't what the article was referring to if you take the context of literally the very first sentence of the article
miltonlost 9 days ago [-]
No wonder tech people need LLMs so much if they are incapable of reading more than 3 sentences and comprehending them.
notatoad 9 days ago [-]
perhaps a better headline would have been "why the PS3 architecture failed". if it was a success, they wouldn't have abandoned it for the next generation.
colejohnson66 9 days ago [-]
OP is talking about developer experience. From right after the image:
> The PS3 failed developers because it was an excessively heterogenous computer; [...]
santoshalper 9 days ago [-]
Where a console rates on the all-time sales leader board is pretty irrelevant, since the industry has grown so much in absolute terms. As when looking at movie box office revenue, you need to look at more than one number if you want to judge the real performance of a console in the market.
Here is a good example: The PS3 sold only slightly more than half as many units as its predecessor, the PS2, did. Most businesses would, in fact, consider it a failure if their 3rd generation product sold much more poorly than the second generation. Sony's rapid move away from the PS3/Cell architecture gives you a pretty good reason to believe they considered it a failure too.
miltonlost 9 days ago [-]
This is a weird assertion that the author meant "failed to sell consoles" when he simply said "the PS3 failed". He later clarified "failed for developers" literally 2 sentences later.
chasil 9 days ago [-]
The approach used by Cell was attempted exactly once in this market.
This does imply an architectural misstep, and one of IBM's last.
AMD vanquished IBM from the next generation of consoles.
bsder 9 days ago [-]
I know a lot of people who only bought the PS3 because it was the cheapest BluRay player for a remarkably long time.
corysama 9 days ago [-]
With an SPU's 256K local memory and DMA, the ideal way to use the SPU was to split the local memory into 6 sections: code, local variables, DMA in, input, output, DMA out. That way you could have async DMA in parallel in both directions while you transform your inputs to your outputs. That meant your working space was even smaller...
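A minimal sketch of what that layout looks like in practice: double-buffered DMA using the MFC intrinsics from the IBM Cell SDK's spu_mfcio.h, with each in/out pair swapping between the "DMA" and "working" roles every iteration. The chunk size and the transform() step are hypothetical stand-ins.

    #include <spu_mfcio.h>
    #include <stdint.h>

    #define CHUNK 4096  /* bytes per DMA chunk (hypothetical size) */
    static char in[2][CHUNK]  __attribute__((aligned(128)));
    static char out[2][CHUNK] __attribute__((aligned(128)));

    static void transform(const char* src, char* dst);  /* the actual work */

    void process(uint64_t ea_in, uint64_t ea_out, int nchunks) {
        if (nchunks <= 0) return;
        int buf = 0;
        mfc_get(in[0], ea_in, CHUNK, 0, 0, 0);   /* prime: fetch chunk 0 on tag 0 */
        for (int i = 0; i < nchunks; i++) {
            int next = buf ^ 1;
            if (i + 1 < nchunks)                 /* start fetching chunk i+1 on the other tag */
                mfc_get(in[next], ea_in + (uint64_t)(i + 1) * CHUNK, CHUNK, next, 0, 0);
            mfc_write_tag_mask(1 << buf);        /* wait for this buffer's inbound DMA,   */
            mfc_read_tag_status_all();           /* and its previous outbound (same tag)  */
            transform(in[buf], out[buf]);
            mfc_put(out[buf], ea_out + (uint64_t)i * CHUNK, CHUNK, buf, 0, 0);
            buf = next;
        }
        mfc_write_tag_mask(3);                   /* drain the last outbound DMAs */
        mfc_read_tag_status_all();
    }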
Async DMA is important because the latency of a DMA operation is 500 cycles! But, then you remember that the latency of the CPU missing cache is also 500 cycles... And, gameplay code misses cache like it was a childhood pet. So, in theory you just need to relax and get it working any way possible and it will still be a huge win. Some people even implemented pointer wrappers with software-managed caches.
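One of those software-managed caches might have looked roughly like this. A hedged sketch of a direct-mapped read-only cache: the mfc_* calls are real Cell SDK intrinsics, everything else is illustrative.

    #include <spu_mfcio.h>
    #include <stdint.h>

    #define LINE   128                    /* software cache line, bytes */
    #define NLINES 64                     /* 8KB total cache            */
    static char     lines[NLINES][LINE] __attribute__((aligned(128)));
    static uint64_t tags[NLINES];         /* EA cached in each slot; assumes EA 0 is never read */

    /* read one byte from main-memory effective address 'ea' */
    static char sw_cache_read(uint64_t ea) {
        uint64_t line_ea = ea & ~(uint64_t)(LINE - 1);
        int slot = (int)((line_ea / LINE) % NLINES);  /* direct-mapped */
        if (tags[slot] != line_ea) {                  /* miss: eat the ~500 cycles */
            mfc_get(lines[slot], line_ea, LINE, 0, 0, 0);
            mfc_write_tag_mask(1 << 0);
            mfc_read_tag_status_all();
            tags[slot] = line_ea;
        }
        return lines[slot][ea & (LINE - 1)];
    }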
500 cycles sounds like a lot. But, remember that the PS2 ran at 300MHz (and had a 50-cycle memory latency) while the PS3 and 360 both ran at 3.2GHz (and both had a memory latency of 500 cycles). Both systems pushed the clock rate much higher than PCs at the time. But, to do so, "niceties" like out-of-order execution were sacrificed. A fixed ping-pong hyperthreading should be good enough to cover up half of the stall latency, right?
Unfortunately, for most games the SPUs ended up needing to be devoted full time to making up for the weakness of the GPU (pretty much a GeForce 7600 GT). Full screen post processing was an obvious target. But, also the vertex shaders of the GPU needed a lot of CPU work to set them up. Moving that work to the SPUs freed up a lot of time for the gameplay code.
bri3d 9 days ago [-]
I think one thing that the linked article (which I think is great and I generally agree with!) misses is that libraries and abstraction can patch over the lack of composability created by heterogeneous systems. We see it everywhere - AI/ML libraries abstracting over some combination of TPU, vector processing, and GPU cores being one obvious modern place.
This happened on the PS3, too, later in its life: Sony released PlayStation Edge and middleware/engine vendors increasingly learned how to use SPU to patch over RSX being slow. At this point developers stopped needing to care so much about the composability issues introduced by heterogeneous computing, since they could use the SPUs as another function processor to offload, for example, geometry processing, without caring about the implementation details so much.
masklinn 9 days ago [-]
> Both systems pushed the clock rate much higher than PCs at the time.
Intel reached 3.2GHz on a production part in June 2003, with the P4 HT 3.2 SL792. At the time the 360 and PS3 were released, Intel's highest clocked part was the P4 EE SL7Z4 at 3.73GHz.
rasz 8 days ago [-]
Not to mention both Intel's 30-stage-deep pipeline and the PPC's in-order design were empty MHz, spent mostly on waiting for cache misses.
01HNNWZ0MV43FF 9 days ago [-]
I'm surprised the SPUs were used for post-processing, because whenever I try to do software rendering I get bottlenecked on fill rate quickly. I believe you, because I've seen it attested in many places, but I'm surprised by it.
corysama 9 days ago [-]
The 1:1 straight-line behavior of fullscreen post processing is much easier to prefetch than triangle rasterization. And, in this case the SPUs and GPU used the same memory. So, no bandwidth advantage to the GPU. The best the GPU could do would be hiding latency better.
dehrmann 9 days ago [-]
The Xbox worked as a proof-of-concept to show that you could build a console with commodity hardware. The Xbox 360 doubled down on this while the PS3 tried to do clever things with an innovative architecture. Between the two, it was clear commodity hardware was the path forward.
mikepavone 9 days ago [-]
> The Xbox 360 doubled down on this while the PS3 tried to do clever things with an innovative architecture.
I don't think this is really an accurate description of the 360 hardware. The CPU was much more conventional than the PS3's, but still custom (derived from the PPE in the Cell, but with an extended version of the VMX extension). The GPU was the first to use a unified shader architecture. Unified memory was also fairly novel in the context of a high performance 3D game machine. The use of eDRAM for the framebuffer is not novel (the Gamecube's Flipper GPU had this previously), but also wasn't something you generally saw in off-the-shelf designs. Meanwhile the PS3 had an actual off-the-shelf GPU.
These days all the consoles have unified shaders and memory, but I think that just speaks to the success of what the 360 pioneered.
Since then, consoles have gotten a lot closer to commodity hardware of course. They're custom parts (well except the original Switch I guess), but the changes from the off the shelf stuff are a lot smaller.
photon_rancher 9 days ago [-]
I mean, commodity hardware usually did OK in game consoles prior to then too. The NES used a modified commodity chip.
fragmede 9 days ago [-]
In the beginning, general purpose computers weren't capable of running graphics like the consoles could. That took dedicated hardware that only the early Atari/NES/Genesis had. That's not to say that the Apple or IBM clones didn't have games, they did, but it just wasn't the same. The differentiation was their hardware, enabling games that couldn't be run on early PCs. Otherwise why buy a console?
So the thinking was that a unique architecture was a console's raison d’être. Of course now we know better, as the latest generation of consoles shows, but that's where the thinking for the PS3's Cell architecture came from.
gmueckl 9 days ago [-]
This leaves out an important step. When 3D graphics acceleration entered the broader consumer/desktop computing market, it was also a successor to the kind of 2D graphics acceleration that consoles had and previous generations of desktop computers generally didn't. So I believe that it's fair to say that specialized console hardware was replaced by general purpose computing hardware because the general purpose hardware had morphed to include a superset of console hardware capabilities.
01HNNWZ0MV43FF 9 days ago [-]
GPUs are just mitochondria that were absorbed into general purpose computers after evolving from early game consoles
gmueckl 9 days ago [-]
GPUs evolved from graphics workstations (Pixar Image Computer, various SGI products...) rather than game consoles. Especially SGI pioneered a lot of the HW accelerated rendering pipeline that trickled into consumer graphics chips.
dehrmann 9 days ago [-]
Have to mention that the N64 was loosely based on SGI tech.
gregw2 9 days ago [-]
Not just loosely, SGI itself made the core graphics chip for N64.
dehrmann 9 days ago [-]
Agreeing with all my siblings' comments, computers took cues from a lot of places and evolved to be general-purpose. Something similar happened on the GPU side, and at some point, the best parts of bespoke graphics hardware got generalized, plus 3D upended the whole space. By the PS3 era, there were multiple GPU vendors and multiple generations of APIs, so everything had settled down and standardized. The era of gaining a competitive advantage through clever hardware was over, and Sony, a hardware company, was still fighting the last war.
fragmede 9 days ago [-]
> Sony, a hardware company, was still fighting the last war.
exactly!
MBCook 9 days ago [-]
That has been, and I’d say still is, a huge problem for them. They are not a software company at all and it hurts many things.
PlayStation is the one exception. But they learned the wrong lesson from the PS2 (exotic hardware is great! No one minds!) and had to get a beating during the PS3 era for the division to get back on track.
treyd 9 days ago [-]
This is the thing that people don't realize about middle-era consoles. It was the shift where commodity PC hardware was competing well with console hardware.
Today in 2025 the only possible advantage is maybe in a specific price category where the volume discount is enough to justify it. In general, consoles just don't make technological sense.
MBCook 9 days ago [-]
Price, UX, and fixed hardware that can be heavily optimized for.
VyseofArcadia 9 days ago [-]
Well, there was the Amiga, but in all fairness it was first conceived as a game console and then worked into a computer.
rasz 8 days ago [-]
>That took dedicated hardware that only the early Atari
2600 was downright pathetic compared to TRS-80 or Apple 2
>/NES
comparable to C-64
>/Genesis
comparable to Amiga 500
thadt 9 days ago [-]
Not a game developer, but I wrote a bunch of code specifically for the CELL processor for grad school at the time (and tested it on my PS3 at home - marking the first and last time I was able to convince my wife I needed a video game system "for real work"). It was fun to play with, but I can empathize with the time cost aspect: scheduling and optimizing DMA and SPE compute tasks just took a good bit of platform specific work.
I suspect a major point killing off special architectures like the PS3 was the desire of game companies to port their games to other platforms such as the PC. Porting to/from the PS3 would be rather painful if you were trying to fully leverage the power and programming model of the CELL CPU.
MBCook 9 days ago [-]
As things got more expensive we really started to see a switch from custom or in-house engines to the ones we’re so familiar with like Unity and Unreal.
Many developers couldn’t afford to keep up if they had to build their own engine, let alone on multiple platforms.
Far cheaper/easier to share the cost with many others through Unreal licenses. Your game is more portable and can use more features than you would ever have had time to add to the engine.
It’s way easier to make multi-platform engines if each one doesn’t need its own ridiculously special way of doing things. And unless that platform is the one that’s driving a huge amount of sales I’m guessing it’s gonna get less attention/optimization.
darknavi 9 days ago [-]
I suspect that as well.
It's not that the architecture was bad, it's that it wasn't easily portable to the other platforms developers wanted to release on, resulting in prohibitively high costs for doing a "full" port.
wmf 9 days ago [-]
Nah, it was bad. It took far too much effort even for PS3-exclusive games.
MBCook 9 days ago [-]
Well that wasn’t supposed to be the architecture either. The whole thing was supposed to be vastly faster and bigger with way more SPUs.
Maybe that would’ve been terrible, maybe not. Kinda sounds like yes in hindsight.
But the SPUs were originally supposed to do the GPU work too, I think. So there’s a reason the GPU doesn’t fit in terribly well: it had to be tacked on at the end so the PS3 had any chance at all. And it couldn’t be well designed/optimized for the rest of the system because they were out of time.
petermcneeley 8 days ago [-]
Bingo!
rokkamokka 9 days ago [-]
> I used to think that PS3 set back Many-Core for decades, now I wonder if it simply killed it forever.
Did general purpose CPUs not kind of subsume this role? Modern CPUs have 16 cores, and server oriented ones can have many, many more than that
bitwarrior 9 days ago [-]
> The PS3 failed developers because it was an excessively heterogenous computer
Which links to the Wiki:
> These systems gain performance or energy efficiency not just by adding the same type of processors, but by adding dissimilar coprocessors
Modern CPUs have many similar cores, not dissimilar cores.
kmeisthax 9 days ago [-]
Mobile CPUs embraced this hardcore; but the problem is that most of those cores don't have the programmer interfaces exposed. The most dissimilarity you get on mobile is big.LITTLE; you might occasionally get scheduled on a weaker core with better power consumption. But this is designed to be software-transparent. In contrast, the device vendor can stuff their chips full of really tiny cores designed to run exactly one program all the time.
For example, Find My's offline finding functionality runs off a coprocessor so tiny it can basically stay on forever. But nobody outside Apple gets to touch those cores. You can't ship an app that uses those cores to run a different (cross-platform) item-finding network; even on Android they're doing all the background stuff on the application processor.
MBCook 9 days ago [-]
AI accelerators are a new popular addition. Media encoder/decoder blocks have been around for a while. Crypto accelerator blocks.
dkersten 9 days ago [-]
Some Intel processors have P/E core splits. So do some Apple processors and mobile processors.
Our normal desktop processors have double the Cell's cores. Workstations and servers have 64 or more cores.
Many core is alive and well.
rokkamokka 9 days ago [-]
Ah, my bad, I didn't understand the definition of many-core
sergers 9 days ago [-]
i was thinking similar lines.
maybe i dont fully understand "many-core", but the definition the article implies aligns with what i think of in the latest qualcomm snapdragon mobile processors, for example, with cores at different frequencies/other differences.
also i dont understand why ps3 is considered a failure, when did it fail?
in NA xbox360 was more popular (i would say because xbox live) but ps3 was not far behind (i owned a ps3 at launch and didnt get a xbox360 till years later).
from a lifetime sales perspective, more ps3s shipped globally than xbox 360s.
MBCook 9 days ago [-]
The incredibly high price of the PS3 at launch cost it a lot of sales, and it took forever to come down. Both of those are direct results of the hardware cost of the Cell and BluRay drive.
Early on, the Xbox also did a better job with game ports. People had very little experience using multicore processors and the Cell was even worse. So often the PlayStation 3 version would have a lower resolution or a worse frame rate or other problems like that.
Xbox Live is also an excellent point. That really helped Microsoft a lot.
All of that meant Microsoft got an early lead and the PlayStation 3 didn’t do anywhere near as well as someone might expect from a follow-up to the PlayStation 2.
As time went on, the benefits of the Blu-ray drive started to factor in some. Every PlayStation had a hard drive, which wasn’t true of the 360. The red ring of death made a lot of customers mad and scared others off from the Xbox. And as Sony released better libraries and third parties just got a better handle on things they started to be able to do a better job on their PS3 versions to where it started to match or exceed the Xbox depending on the game.
By the end I think the PlayStation won in North American sales but it was way way closer than it should have been coming off the knockout success of the PS2.
masklinn 9 days ago [-]
> also i dont understand why ps3 is considered a failure, when did it fail?
> The PS3 failed developers
It failed as an ISA (or collection thereof), and in developer mindshare.
dcow 9 days ago [-]
I would argue that the failure extended to the user-perceptible performance deficit vs the XB360 despite arguably more capable hardware. Released games didn't perform better on the PS3 even if they technically could.
masklinn 9 days ago [-]
That's part of the failure in developer mindshare: leveraging SMT for games in 2005 was difficult enough; heterogeneous multi-ISA hardware, a ring bus, and the peculiarities of the SPEs made the PS3 not really a consideration. Things might have been different if Sony had provided ready-made, more or less plug-and-play SPE applications you could use with just a little tuning for your circumstance (e.g. a physics engine or something), but as far as I know that wasn't the case. I've never heard Sony being praised for its SDKs, while the 360 had straight-up DirectX (with more hardware access).
mattnewport 8 days ago [-]
Big little cores like on mobile or some Intel processors are really not the same thing. The little cores have the same instruction set and address the same memory as the big cores and are pretty transparent to devs apart from some different performance characteristics.
The SPEs were a different instruction set with a different compiler tool chain running separate binaries. You didn't have access to an OS or much of a standard library, you only had 256K of memory shared between code and data. You had to set up DMA transfers to access data from main memory. There was no concept of memory protection so you could easily stomp over code with a write to a bad pointer (addressing wrapped so any pointer value including 0 was valid to write to). Most systems would have to be more or less completely rewritten to take advantage of them.
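To give a flavour of that separate-binary model, here is a hedged sketch of the PPE side launching an SPE program with libspe2. The spu_program handle stands in for the separately compiled SPU ELF the toolchain embeds; error handling is omitted.

    #include <libspe2.h>

    extern spe_program_handle_t spu_program;  /* separately compiled SPU binary */

    int run_spu_job(void* job_args) {
        spe_context_ptr_t ctx = spe_context_create(0, NULL);
        spe_program_load(ctx, &spu_program);
        unsigned int entry = SPE_DEFAULT_ENTRY;
        /* blocks until the SPU binary exits; job_args arrives as its argp */
        spe_context_run(ctx, &entry, 0, job_args, NULL, NULL);
        spe_context_destroy(ctx);
        return 0;
    }

In practice you would run one such call per SPE on its own PPE thread, since spe_context_run blocks.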
accrual 9 days ago [-]
> 256 MB was dedicated to graphics and only had REDACTED Mb/s access from the CPU
I wonder what the REDACTED piece means here, aren't the PS3 hardware specifications pretty open? Per Copetti, the RSX memory had a theoretical bandwidth of 20.8 GB/s, though that doesn't indicate how fast the CPU can access it.
monocasa 9 days ago [-]
I don't know why it's redacted here; maybe he couldn't find a public source.
It is a mind bendingly tiny 16MB/s bandwidth to perform CPU reads from RSX memory.
OptionOfT 9 days ago [-]
Just to make sure, I read that the CPU reads from the RSX at 16 megabytes / sec?
maximilianburke 9 days ago [-]
I can't recall exactly but that sounds right. It was exceptionally slow.
MBCook 9 days ago [-]
That’s why getting DMA right mattered so much.
maximilianburke 9 days ago [-]
I'm not quite sure what this means? DMA is the only mechanism that the SPUs had to access memory outside of their local store; it wasn't a matter of getting it right or wrong. Reading RSX memory by SPU DMA wasn't any faster than reading it from the PPU.
MBCook 9 days ago [-]
I’m sorry I thought you meant the CPU not the GPU.
I didn’t realize DMA didn’t help with GPU memory.
christkv 9 days ago [-]
Sony was funny in this way.
PS1: Easy to develop for and max out.
PS2: Hard to develop for and hard to max out.
PS3: Even harder than PS2.
PS4: Back to easier.
PS5: Just more PS4.
PS5 PRO: Just more PS5.
AdmiralAsshat 9 days ago [-]
It certainly doesn't seem to have impacted adoption, though.
For whatever reasons developers seem loath to talk about how difficult developing for a given console architecture is until the console is dead and buried. I guess the assumption is that the console vendor might retaliate, or the fans might say, "Well all of these other companies are somehow doing it, so you guys must just suck at your jobs."
An early interview with Shinji Mikami is one of the only ones I can recall of a high-profile developer being frank about having difficulties developing for a console[0]:
> IGNinsider: Ahh, smart politics. How do you feel about working on the PlayStation 2? Have you found any strengths in the system by working on Devil May Cry that you hadn't found before?
>
> Mikami: If the programmer is really good, then you can achieve really high quality, but if the programmer isn't that great then it is really hard to work with. We lost three programmers during Devil May Cry because they couldn't keep up.
the ps3 development difficulty was definitely complained about during its usage cycle; the standard ps3 vs xbox360 argument was that the ps3 had far superior hardware, and xbox fans would always counter that no one could make use of that hardware
christkv 9 days ago [-]
I think it's funny because the ease of development was one of the reasons why the original PlayStation had such a wide library of titles. The Saturn and the N64 were hard to get good performance out of due to architectural decisions.
maximilianburke 9 days ago [-]
The PS4 and beyond is entirely creditable to Mark Cerny who spent a lot of time talking to developers who had spent years pulling their hair out with the PS3.
MBCook 9 days ago [-]
Sony needed developers for the PlayStation. So they did a good job.
The PlayStation did so well a lot of people wanted the PlayStation 2. And because it worked as a cheap DVD player it sold extremely well.
Sony learned: hard-to-program, expensive, exotic hardware does great!
PS3 arrives with hardware that’s even more expensive and even harder to program and gets a world of hurt.
So for the PlayStation 4 they tried to figure out what went wrong and realized they needed to make things real easy for developers. Success!
PlayStation 5: that PlayStation 4 thing worked great, let’s keep being nice to developers. Going very well.
The PS2 succeeded _in spite_ of its problems. And Sony didn’t realize that.
dundarious 9 days ago [-]
> Most code and algorithms cannot be trivially ported to the SPE.
Having never worked on SPE coding, but having heard lots about interesting aspects, like manual cache management, I was very interested to read more.
> C++ virtual functions and methods will not work out of the box. C++ encourages dynamic allocation of objects but these can point to anywhere in main memory. You would need to map pointer addresses from PPE to SPE to even attempt running a normal c++ program on the SPE.
Ah. These are schoolboy errors in games programming (comparing even with the previous 2 generations of the same system).
I think the entire industry shifted away from teaching/learning/knowing/implementing those practices de rigueur, so I'm absolutely not criticising the OP -- I was taught the same way around this time.
But my reading of the article is now that it highlights a then-building and now-ubiquitous software industry failing, almost as much as a hardware issue (the PS3 did have issues, even if you were allocating in a structured way and not trying to run virtual functions on SPEs).
Pet_Ant 9 days ago [-]
I hope that as RISC-V gains in support, there is a chance to experiment with a many-core version of it. Something like a hundred QERV cores on a chip. The lack of patents is a key enabler, and support for the ISA on more vanilla chips is the other enabler. This could happen.
> It is important to understand why the PS3 failed.
But did it fail?
PS3 was a very successful 7th gen console, only beaten by the Wii in units sold, but had a longer shelf life, more titles than any other 7th gen console.
raphlinus 9 days ago [-]
Thanks so much, Peter, for writing this up. I think it adds a lot to the record about what exactly happened with the Cell. And, as with Larrabee, I have to wonder, what would an alternative universe look like if Sony had executed well? Or is the idea so ill-fated that no Cell-like many-core design could ever succeed?
nemothekid 9 days ago [-]
I feel like calling the PS3 a licked cookie is unfair.
>The original design was approximately 4 Cell processors with high frequencies. Perhaps massaging this design would have led to very homogenous high performance Many-Core architecture. At more than 1 TFlop of general purpose compute it would have been a beast and not a gnarly beast but a sleek smooth uniform tiger.
That's great and all, but the PS3 (famously) cost FIVE HUNDRED AND NINETY-NINE US DOLLARS (roughly $900 in today's money).
However, one thing I noticed is that multi-core programming in 2006 was absolutely anemic. Granted, I was too young to actually understand what was happening at the time, but a couple years ago I went on a deep dive on the Cell, and one thing I came away with was that proper parallelism was in its infancy for mainstream development. Forget the Cell; it took a long time for game developers to take advantage of quad-core PCs.
Developers were afraid of threads, didn't understand memory barriers, and were cautious of mutexes. Gaben has a famous clip trashing the PS3 because most of Valve's developers at the time did not have experience programming multicore systems. It was common to just have objective-based threads (e.g. a render thread, an AI thread, a physics thread) and pretend coordination didn't exist for large parts of the code. This mostly worked up until you had more cores than threads. This stands in stark contrast to most parallel code today, which does userspace scheduling with tasks or threads.
Even Naughty Dog eventually figured out late in the cycle to best take advantage of SPEs using fibers with a system that looks like modern async reactors (like node or tokio) if you squint really, really hard.
The Cell was just really, really early. Looking back, I don't think the Cell was half-baked. It was the best that could be done at the time. Even if the hardware was fully baked, there was still 5-10 years of software engineering research before most people had the tooling to take advantage of parallel hardware.
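For contrast, a tiny sketch of the task-style decomposition that eventually won out, in modern C++ (the Entity type and chunk size are illustrative stand-ins):

    #include <algorithm>
    #include <cstddef>
    #include <future>
    #include <vector>

    struct Entity { void update() { /* per-entity game logic */ } };

    // Chop one frame's updates into independent tasks rather than
    // dedicating whole threads to "AI" or "physics".
    void update_entities(std::vector<Entity>& entities) {
        const std::size_t n = entities.size(), chunk = 256;
        std::vector<std::future<void>> tasks;
        for (std::size_t i = 0; i < n; i += chunk)
            tasks.push_back(std::async(std::launch::async, [&entities, i, n, chunk] {
                for (std::size_t j = i; j < std::min(i + chunk, n); ++j)
                    entities[j].update();
            }));
        for (auto& t : tasks) t.get();  // join before the next stage of the frame
    }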
wk_end 9 days ago [-]
> Even Naughty Dog eventually figured out late in the cycle to best take advantage of SPEs using fibers with a system that looks like modern async reactors (like node or tokio) if you squint really, really hard.
Well, if anyone was going to figure it out, it would've been Naughty Dog, they've got a long history of being absolute wizards at taming Sony's hardware.
I'm 100% confusing this talk with an earlier talk on the PS3. "Fibers" (aka tasks, coroutines) is the industry-standard name, but I believe their engine did something more primitive on the PS3 from which their PS4 engine evolved.
I'll try to find it, but admittedly this is knowledge that I researched back in ~2011 around the time of Uncharted 3's launch.
dkersten 9 days ago [-]
Naughty Dog’s late PS3 era code was very similar to modern task/job based parallel code. That really was the start of the modern era of multitasking.
Compared to today, the Cell really isn’t so complex — we have just as many or more cores, and we have GPU programming (and low-level APIs with manual data moving and whatnot). It’s just that the Cell came out in a world where dual core had just become the norm and people hadn’t really grown accustomed to it yet. It was ahead of its time. And a new architecture meant immature tooling.
MBCook 9 days ago [-]
Remember that with the PS3 it wasn’t just multicore (which was new to game consoles that generation) but it was also heterogeneous cores. VERY heterogeneous cores.
wmf 9 days ago [-]
Even if you understood parallel programming you still would have been better off with the 360.
I also thought the price was OK considering the Blu-ray support.
kristianp 9 days ago [-]
> PS3 cost ... roughly $900 dollars in todays money).
I'd like to see a list of the top-selling consoles with their inflation-adjusted prices. The ps3 did really well considering the price. I'd say the ps2 hit the sweet spot for price while still offering a performance improvement over the previous generation.
em3rgent0rdr 9 days ago [-]
> multi-core programming in 2006 was absolutely anemic
OpenMP was around back then and was easy.
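For example, something like this already worked in 2006 (a hedged sketch; compile with -fopenmp):

    #include <vector>

    void scale(std::vector<float>& data, float k) {
        #pragma omp parallel for          // iterations split across worker threads
        for (int i = 0; i < (int)data.size(); ++i)
            data[i] *= k;
    }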
haunter 9 days ago [-]
I still vividly remember when they announced the $599 price tag, inflation adjusted that would be almost $1,000 today! It was crazy
999900000999 9 days ago [-]
It was still one of the cheaper Blu-ray players at release.
A lot of people bought them just for that, same thing with DVDs and PS2s.
NitroPython 9 days ago [-]
"I want a good parallel computer" is a good side article.
chadhutchins10 9 days ago [-]
The PS3 failed?
tekla 9 days ago [-]
The article didn't say that.
> The PS3 failed developers because it was an excessively heterogenous computer
Most here are probably too young to realize the PS3 was supposed to be a flagship consumer device to show off the Cell processor, and Sony was pushing hard for the Cell arch to be everywhere: media devices, general purpose computers, next-gen supercomputers.
It died hard when people realized how difficult it was to program for, and I don't think anything else other than the PS3 ever bothered seriously trying again with that arch.
Suppafly 9 days ago [-]
>The article didn't say that.
Yes it did, fairly near the top.
>>It is important to understand why the PS3 failed.
This article is a bunch of nonsense.
miltonlost 9 days ago [-]
Read. More. Than. One. Sentence. Read the article as a whole, not cut into pieces. Context will let you learn what he meant by "failed", since "failed" can mean multiple things. "At what did the PS3 fail?" is how it should be read and comprehended.
trelane 9 days ago [-]
Dunno why you are getting downvoted. You're completely correct. It is literally the third line on the page, including the title! That sentence is, to quote TFA,
> It is important to understand why the PS3 failed.
petermcneeley 9 days ago [-]
You mean the URL, not the title. The title is actually the conclusion of the article. A URL is more like a filename.
trelane 9 days ago [-]
I see the disconnect.
Full disclosure: I have been doing The Internet for well over 30 years. :)
"including the title" was intended to attach to the line count. Some may include the page title (h1 heading in HTML) in the lines at the start of the page. Some may not, because the title isn't necessarily part of the article itself. I was trying to disambiguate.
I was not trying to say that the "PS3 failed" was in the title.
You're right that it's in the URL but I didn't see that until you pointed it out.
petermcneeley 9 days ago [-]
But hopefully also you see the connection.
Being pedantic is not an interesting position.
tekla 9 days ago [-]
To save time I'm going to just link the other comment thread, because it's boring arguing against bad reading comprehension
I had already spent a few years writing fragment shaders, OpenGL, and CPU vector extension code for 2D graphics acceleration, so I thought I’d have a pretty good handle on how to approach this new model of parallel programming. But trying to do anything with the SDK was just a pain. There were separate incompatible gcc toolchains for the different cores, separate vector extensions, a myriad of programming models with no clear guidance on anything… And the non-gcc tools were some hideous pile of Tcl/TK GUI scripts with a hundred buttons on the screen.
It really made me appreciate how good I’d had it with Xcode and Visual Studio. I gave up on Cell after a day.
iPhone SDk only raised the bar for the mobile industry, the rest of embedded world is still stuck in the stone age.
That architecture does have particular weaknesses when it is meant to interface with a random-access-and-deep-callstacks workflow(as would be the case using C++) - and CPUs have accrued complex cache and pipelining systems to cater to that workflow since it does have practical benefit - but the flat approach has also demonstrated success when it's used in stream processing with real-time requirements.
Given that, and the outlier success of some first-party developers, I would lean towards the Cell hardware being catastrophically situated within the software stack, versus being an inherently worse use of transistors.
As for "inherently worse use of transistors" one would have to look at how the transistors could have been used differently. The XBox360 is a different use of transistors.
That's a weird assertion for a console that sold 87M units, ranks #8 in the all-time top-selling consoles list, and marginally outsold Xbox360 which is compared against in TFA.
See: https://en.wikipedia.org/wiki/List_of_best-selling_game_cons...
> The PS3 failed developers because it was an excessively heterogenous computer; and low level heterogeneous compute resists composability.
Is cell really so different from computer shaders with something like Vulkan? I feel if a performance-competitive cell were made today, it might not receive so much hate, as people today are more prepared for its flavour of programming. Nowadays we have 8 to 16 cores, more on P/E setups, vastly more on workstation/server setups, and we have gpu’s and low level gpu APIs. Cell came out in a time when dual core was the norm and engines still did multi threading by having a graphics thread and a logic thread.
Comparing the SPEs to compute shaders is reasonable but ignores what they were for. Compute shaders are almost exclusively used for graphics in games. Sony was asking people to implement gameplay code on them.
The idea the PS3 was designed around did not match the reality of games. They were difficult to work with, and taking full advantage of the SPEs was very challenging. Games are still very serial programs. The vast majority of the CPU work can't be moved to the SPUs like it was dreamed.
Very often games were left with a few trivially parallel numerical bits of code on the SPEs, but stuck with the anemic PPE core for everything else.
Maybe in 15 years someone crazy enough will be delving in and building games that fully utilize every last aspect of the hardware. Like this person does on the N64: https://youtube.com/@kazen64?si=bOSdww58RNlpKCNp
Yes, especially exclusives like Uncharted, Demon Souls, Heavy Rain, Ni No Kuni, Metal Gear solid 4. They were definitely developed for Xbox, that version was kept secret and only the PS3 version was published due to Sony bribes.
I'd like to thank the above devs for going through the pain of developing those titles that entertained me back then...
There were some gems. I owned a PS3 and had tons of fun. Nothing in this discussion speaks directly to the entertainment value of good Sony exclusives or cross-console ports. Many people don’t care one ounce about a minor performance deficit. Deep breath.
Yeah usually only first party studios have the time, money, and access for that (Naughty Dog being a prime example thereof in the Sony universe).
Funnily enough the opposite of this did actually happen at least once: https://archive.org/details/gears-of-war-3-ps3
"a game available for both Xbox and PS3."
The "both" part means they aren't talking about exclusives.
So while PS3 was not ultimately a commercial success, it was clearly disliked by developers and the launch was certainly a disaster. I think you could argue the PS3 was a failure in many regards, and a success in some other regards. Credit to Sony, they definitely persevered through a tough launch and made it out to the other end. Nintendo wasn't able to pull off the same for the Wii U, even though it also did have some good exclusive games in the library.
[1]: https://web.archive.org/web/20161104003151/http://www.pcworl...
Lots of PS2.
Before that, only PS1's and PS2's were around.
(Belgium)
Going from such market dominance to second place is not good. Not being able to improve upon your position as the industry leader is not good. Failure might be strong, but I certainly wouldn't be happy if I was an exec at Sony at the time.
> The PS3 failed developers because it was an excessively heterogenous computer; [...]
Here is a good example: The PS3 sold only slightly more than half as many units as its predecessor, the PS2, did. Most businesses would, in fact, consider it a failure if their 3rd generation product sold much more poorly than the second generation. Sony's rapid move away from the PS3/Cell architecture gives you a pretty good reason to believe they considered it a failure too.
This does imply an architectural misstep, and one of IBM's last.
AMD vanquished IBM from the next generation of consoles.
Async DMA is important because the latency of a DMA operation is 500 cycles! But, then you remember that the latency of the CPU missing cache is also 500 cycles... And, gameplay code misses cache like it was a childhood pet. So, in theory you just need to relax and get it working any way possible and it will still be a huge win. Some people even implemented pointer wrappers with software-managed caches.
500 cycles sounds like a lot. But, remember that the PS2 ran at 300MHz (and had a 50 cycle mem latency) while the PS3 and 360 both ran at 3.2Ghz (and both had a mem latency of 500 cycles). Both systems pushed the clock rate much higher than PCs at the time. But, to do so, "niceties" like out-of-order execution were sacrificed. A fixed ping-pong hyperthreading should be good enough to cover up half of the stall latency, right?
Unfortunately, for most games the SPUs ended up needing to be devoted full time to making up for the weakness of the GPU (pretty much a GeForce 7600 GT). Full screen post processing was an obvious target. But, also the vertex shaders of the GPU needed a lot of CPU work to set them up. Moving that work to the SPUs freed up a lot of time for the gameplay code.
This happened on the PS3, too, later in its life: Sony released PlayStation Edge and middleware/engine vendors increasingly learned how to use SPU to patch over RSX being slow. At this point developers stopped needing to care so much about the composability issues introduced by heterogeneous computing, since they could use the SPUs as another function processor to offload, for example, geometry processing, without caring about the implementation details so much.
Intel reached 3.2GHz on a production part in June 2003, with the P4 HT 3.2 SL792. At the time the 360 and PS3 were released, Intel's highest clocked part was the P4 EE SL7Z4 at 3.73.
I don't think this is really an accurate description of the 360 hardware. The CPU was much more conventional than the PS3, but still custom (derived from the PPE in the cell, but has an extended version of VMX extension). The GPU was the first to use a unified shader architecture. Unified memory was also fairly novel in the context of a high performance 3D game machine. The use of eDRAM for the framebuffer is not novel (the Gamecube's Flipper GPU had this previously), but also wasn't something you generally saw in off-the-shelf designs. Meanwhile the PS3 had an actual off the shelf GPU.
These days all the consoles have unified shaders and memory, but I think that just speaks to the success of what the 360 pioneered.
Since then, consoles have gotten a lot closer to commodity hardware of course. They're custom parts (well except the original Switch I guess), but the changes from the off the shelf stuff are a lot smaller.
So the thinking was a unique architecture is what a console's raison d’être was. Of course now we know better, as the latest generation of consoles shows, butthat's where the thinking for the PS3's cell architecture came from.
exactly!
PlayStation is the one exception. But they learned the wrong lesson from the PS2 (exotic hardware is great! No one minds!) and had to get a beating during the PS3 era for the division to get back on track.
Today in 2025 the only possible advantage is maybe in a specific price category where the volume discount is enough to justify it. In general, consoles just don't make technological sense.
2600 was downright pathetic compared to TRS-80 or Apple 2
>/NES
comparable to C-64
>/Genesis
comparable to Amiga 500
I suspect a major point killing off special architectures like the PS3 was the desire of game companies to port their games to other platforms such as the PC. Porting to/from the PS3 would be rather painful if you were trying to fully leverage the power and programming model of the CELL CPU.
Many developers couldn’t afford to keep up if they had to build their own engine, let alone on multiple platforms.
Far cheaper/easier to share the cost with many others through Unreal licenses. Your fame is more portable and can use more features that you may have ever had time to add to the engine.
It’s way easier to make multi-platform engines if each one doesn’t need its own ridiculously special way of doing things. And unless that platform is the one that’s driving a huge amount of sales I’m guessing it’s gonna get less attention/optimization.
It's not that the architecture was bad, it's that it's not easily compatible to other endpoints developers wanted to release on resulting in prohibitively high costs of doing a "full" port.
Maybe that would’ve been terrible, maybe not. Kinda sounds like yes in hindsight.
But the SPU‘s were originally supposed to do the GPU work too I think. So there’s a reason the GPU doesn’t fit in terribly well, it had to be tacked on at the end so the PS3 had any chance at all. And it couldn’t be well designed/optimized for the rest of the system because they were out of time.
Did general purpose CPUs not kind of subsume this role? Modern CPUs have 16 cores, and server oriented ones can have many, many more than that
Which links to the Wiki:
> These systems gain performance or energy efficiency not just by adding the same type of processors, but by adding dissimilar coprocessors
Modern CPUs have many similar cores, not dissimilar cores.
For example, Find My's offline finding functionality runs off a coprocessor so tiny it can basically stay on forever. But nobody outside Apple gets to touch those cores. You can't ship an app that uses those cores to run a different (cross-platform) item-finding network; even on Android they're doing all the background stuff on the application processor.
Our normal desktop processors have double the cells cores. Workstation and servers have 64 or more cores.
Many core is alive and well.
Maybe I don't fully understand "many-core", but the definition the article implies aligns with what I think of in the latest Qualcomm Snapdragon mobile processors, for example, with cores at different frequencies and other differences.
Also, I don't understand why the PS3 is considered a failure. When did it fail?
In NA the Xbox 360 was more popular (I would say because of Xbox Live), but the PS3 was not far behind (I owned a PS3 at launch and didn't get an Xbox 360 till years later).
Lifetime sales show more PS3s shipped globally than Xboxes.
Early on, the Xbox also did a better job with game ports. People had very little experience using multicore processors, and the Cell was even worse. So often the PlayStation 3 version would have a lower resolution, a worse frame rate, or other problems like that.
Xbox Live is also an excellent point. That really helped Microsoft a lot.
All of that meant Microsoft got an early lead, and the PlayStation 3 didn't do anywhere near as well as someone might expect from a follow-up to the PlayStation 2.
As time went on, the benefits of the Blu-ray drive started to factor in some. Every PlayStation 3 had a hard drive, which wasn't true of the 360. The red ring of death made a lot of customers mad and scared others off from the Xbox. And as Sony released better libraries and third parties got a better handle on things, they started to do a better job on their PS3 versions, to where they began to match or exceed the Xbox depending on the game.
By the end I think the PlayStation 3 won in North American sales, but it was way, way closer than it should have been coming off the knockout success of the PS2.
> The PS3 failed developers
It failed as an ISA (or collection thereof), and in developer mindshare.
The SPEs were a different instruction set with a different compiler toolchain running separate binaries. You didn't have access to an OS or much of a standard library, and you only had 256K of memory shared between code and data. You had to set up DMA transfers to access data from main memory. There was no concept of memory protection, so you could easily stomp over code with a write through a bad pointer (addressing wrapped, so any pointer value, including 0, was valid to write to). Most systems would have to be more or less completely rewritten to take advantage of them.
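To make that concrete, here's a minimal sketch of the SPE side of that workflow, pulling a buffer from main memory into local store using the mfc_* intrinsics from the SDK's spu_mfcio.h (the buffer size and tag choice are illustrative, and this is from memory, so treat it as a sketch rather than reference code):

    #include <spu_mfcio.h>

    /* Local store buffer. DMA source/destination must be 16-byte
       aligned; 128-byte alignment gives the best performance. */
    static char buf[16384] __attribute__((aligned(128)));

    /* spu-gcc's main receives the effective address the PPE
       passed in as argp. */
    int main(unsigned long long spe_id,
             unsigned long long argp,
             unsigned long long envp)
    {
        unsigned int tag = 1;

        /* Queue a DMA "get": copy 16KB from main memory (effective
           address argp) into local store. 16KB is also the maximum
           size of a single transfer. */
        mfc_get(buf, argp, sizeof(buf), tag, 0, 0);

        /* Block until every transfer with this tag completes. */
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();

        /* ... work on buf entirely within the 256K local store,
           then mfc_put() results back out the same way ... */
        return 0;
    }

Note the inversion relative to ordinary CPU code: you never dereference main memory, you schedule copies of it, and all of your code and data has to fit in that 256K alongside the buffers.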
I wonder what the REDACTED piece means here; aren't the PS3 hardware specifications pretty open? Per Copetti, the RSX memory had a theoretical bandwidth of 20.8 GB/s, though that doesn't indicate how fast the CPU can access it.
It is a mind-bendingly tiny 16 MB/s of bandwidth for CPU reads from RSX memory.
I didn’t realize DMA didn’t help with GPU memory.
PS1: Easy to develop for and max out. PS2: Hard to develop for and hard to max out. PS3: Even harder than PS2. PS4: Back to easier. PS5: Just more PS4. PS5 PRO: Just more PS5.
For whatever reasons developers seem loath to talk about how difficult developing for a given console architecture is until the console is dead and buried. I guess the assumption is that the console vendor might retaliate, or the fans might say, "Well all of these other companies are somehow doing it, so you guys must just suck at your jobs."
An early interview with Shinji Mikami is one of the only ones I can recall of a high-profile developer being frank about having difficulties developing for a console[0]:
> IGNinsider: Ahh, smart politics. How do you feel about working on the PlayStation 2? Have you found any strengths in the system by working on Devil May Cry that you hadn't found before?
>
> Mikami: If the programmer is really good, then you can achieve really high quality, but if the programmer isn't that great then it is really hard to work with. We lost three programmers during Devil May Cry because they couldn't keep up.
[0] https://www.ign.com/articles/2001/05/31/interview-with-shinj...
The PlayStation did so well a lot of people wanted the PlayStation 2. And because it worked as a cheap DVD player it sold extremely well.
Sony learned: hard-to-program, expensive, exotic hardware does great!
PS3 arrives with hardware that’s even more expensive and even harder to program and gets a world of hurt.
So for the PlayStation 4 they tried to figure out what went wrong and realized they needed to make things real easy for developers. Success!
PlayStation 5: that PlayStation 4 thing worked great, let's keep being nice to developers. Going very well.
The PS2 succeeded _in spite_ of its problems. And Sony didn't realize that.
Having never worked on SPE coding, but having heard lots about interesting aspects, like manual cache management, I was very interested to read more.
> C++ virtual functions and methods will not work out of the box. C++ encourages dynamic allocation of objects but these can point to anywhere in main memory. You would need to map pointer addresses from PPE to SPE to even attempt running a normal c++ program on the SPE.
Ah. These are schoolboy errors in games programming (comparing even with the previous 2 generations of the same system).
I think the entire industry shifted away from teaching/learning/knowing/implementing those practices de rigueur, so I'm absolutely not criticising the OP -- I was taught the same way around this time.
But my reading of the article is now that it highlights a then-building and now-ubiquitous software industry failing, almost as much as a hardware issue (the PS3 did have issues, even if you were allocating in a structured way and not trying to run virtual functions on SPEs).
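To make the pointer problem concrete, here's a hypothetical sketch (plain C rather than C++, but the vtable case fails the same way): any pointer embedded in data you DMA over still holds a main-memory effective address, which means nothing inside the SPE's local store.

    /* An object graph built on the PPE; C++ objects with vtables
       or heap members embed pointers the same way. */
    struct particle {
        float pos[4];
        struct particle *next;  /* address in PPE main memory */
    };

    /* SPE side, after DMA-ing a batch of particles into
       local store: */
    void update(struct particle *p)
    {
        /* p is a valid local-store address, but p->next still
           holds the PPE heap address it was built with. On the
           SPE that value just wraps into the 256K local store
           and reads (or clobbers) something unrelated, with no
           fault raised. */
        struct particle *q = p->next;   /* wrong on the SPE */
        (void)q;

        /* The structured approach: store an effective address
           instead, and explicitly DMA the pointee across. */
    }

Hence that era's emphasis on flat, index-based, structure-of-arrays data layouts in engine code: data designed to be moved, not chased.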
https://github.com/olofk/qerv
The only practical many-core I know of was the SPARC T1000 series https://en.wikipedia.org/wiki/SPARC_T_series
But did it fail?
The PS3 was a very successful 7th-gen console, only beaten by the Wii in units sold, but it had a longer shelf life and more titles than any other 7th-gen console.
>The original design was approximately 4 Cell processors with high frequencies. Perhaps massaging this design would have led to very homogenous high performance Many-Core architecture. At more than 1 TFlop of general purpose compute it would have been a beast and not a gnarly beast but a sleek smooth uniform tiger.
That's great and all, but the PS3 cost (famously) FIVE HUNDRED AND NINETY NINE US DOLLARS (roughly $900 in today's money).
However, one thing I noticed is that multi-core programming in 2006 was absolutely anemic. Granted, I was too young to actually understand what was happening at the time, but a couple of years ago I did a deep dive on the Cell, and one thing I came away with was that proper parallelism was in its infancy for mainstream development. Forget the Cell; it took a long time for game developers to take advantage of quad-core PCs.
Developers were afraid of threads, didn't understand memory barriers, and were wary of mutexes. Gaben has a famous clip trashing the PS3 because most of Valve's developers at the time did not have experience programming multicore systems. It was common to just have objective-based threads (e.g. a render thread, an AI thread, a physics thread) and pretend coordination didn't exist for large parts of the code. This mostly worked up until you had more cores than threads. It stands in stark contrast to most parallel code today, which does userspace scheduling of tasks (see the sketch below).
Even Naughty Dog only figured out late in the cycle how best to take advantage of the SPEs, using fibers with a system that looks like modern async reactors (like Node or Tokio) if you squint really, really hard.
The Cell was just really, really early. Looking back, I don't think the Cell was half-baked; it was the best that could be done at the time. Even if the hardware had been fully baked, there was still 5-10 years of software engineering research to go before most people had the tooling to take advantage of parallel hardware.
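For contrast with the render/AI/physics-thread style, here's a minimal sketch of the task-queue model that eventually won out, where any worker picks up whatever job is next so the design scales with core count (plain C with pthreads; all names are illustrative, and real job systems add overflow handling, priorities, and lock-free queues):

    #include <pthread.h>
    #include <stdio.h>

    #define MAX_JOBS 256
    #define NWORKERS 4

    typedef void (*job_fn)(void *);
    struct job { job_fn fn; void *arg; };

    static struct job queue[MAX_JOBS];
    static int head, tail, done;  /* toy queue: no overflow check */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t wake = PTHREAD_COND_INITIALIZER;

    /* Any thread may submit work of any kind. */
    void submit(job_fn fn, void *arg)
    {
        pthread_mutex_lock(&lock);
        queue[tail++ % MAX_JOBS] = (struct job){ fn, arg };
        pthread_cond_signal(&wake);
        pthread_mutex_unlock(&lock);
    }

    /* Workers don't care whether a job is physics, AI, or
       rendering; every core stays busy no matter how the
       frame's work is mixed. */
    static void *worker(void *unused)
    {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (head == tail && !done)
                pthread_cond_wait(&wake, &lock);
            if (head == tail) {  /* done and fully drained */
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            struct job j = queue[head++ % MAX_JOBS];
            pthread_mutex_unlock(&lock);
            j.fn(j.arg);  /* run the job outside the lock */
        }
    }

    static void demo(void *arg) { printf("job %ld\n", (long)arg); }

    int main(void)
    {
        pthread_t w[NWORKERS];
        for (int i = 0; i < NWORKERS; i++)
            pthread_create(&w[i], NULL, worker, NULL);
        for (long i = 0; i < 16; i++)
            submit(demo, (void *)i);

        /* Shut down: wake everyone and let them drain the queue. */
        pthread_mutex_lock(&lock);
        done = 1;
        pthread_cond_broadcast(&wake);
        pthread_mutex_unlock(&lock);
        for (int i = 0; i < NWORKERS; i++)
            pthread_join(w[i], NULL);
        return 0;
    }

If you squint, the Naughty Dog fiber system mentioned above is this same shape with the scheduling moved into userspace, so a blocked job parks its fiber instead of an entire thread.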
Well, if anyone was going to figure it out, it would've been Naughty Dog, they've got a long history of being absolute wizards at taming Sony's hardware.
Do you have any links to further details about ND's achievements on the PS3? I found a talk that looked like it might cover the issue, but it seems like it's about the PS4. https://gdcvault.com/play/1022186/Parallelizing-the-Naughty-...
I'll try to find it, but admittedly this is knowledge that I researched back in ~2011 around the time of Uncharted 3's launch.
Compared to today, the Cell really isn't so complex: we have just as many cores or more, and we have GPU programming (with low-level APIs, manual data movement, and so on). It's just that the Cell came out in a world where dual-core had only just become the norm and people hadn't yet grown accustomed to it. It was ahead of its time. And a new architecture meant immature tooling.
I also thought the price was OK considering the Blu-ray support.
I'd like to see a list of the top-selling consoles with their inflation-adjusted prices. The PS3 did really well considering the price. I'd say the PS2 hit the sweet spot for price while still offering a performance improvement over the previous generation.
OpenMP was around back then and was easy.
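True, at least for loop-level data parallelism on homogeneous shared-memory cores; something like this was already routine in 2006 (a minimal sketch):

    #include <omp.h>

    void scale(float *out, const float *in, int n, float k)
    {
        /* Iterations are split across the available cores;
           compile with -fopenmp (gcc) and it just works. */
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            out[i] = in[i] * k;
    }

The catch on the PS3 is that this only helps where every core shares the ISA and the address space, i.e. the PPE's hardware threads; it does nothing for farming work out to the SPEs.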
A lot of people bought them just for that; same thing with DVDs and PS2s.
> The PS3 failed developers because it was an excessively heterogenous computer
Most here are probably too young to realize the PS3 was supposed to be a flagship consumer device to show off the Cell processor; Sony was pushing hard for the Cell architecture to be everywhere: media devices, general-purpose computers, next-gen supercomputers.
It died hard when people realized how difficult it was to program for, and I don't think anything other than the PS3 ever seriously tried that architecture again.
Yes it did, fairly near the top.
>>It is important to understand why the PS3 failed.
This article is a bunch of nonsense.
> It is important to understand why the PS3 failed.
Full disclosure: I have been doing The Internet for well over 30 years. :)
"including the title" was intended to attach to the line count. Some may include the page title (h1 heading in HTML) in the lines at the start of the page. Some may not, because the title isn't necessarily part of the article itself. I was trying to disambiguate.
I was not trying to say that the "PS3 failed" was in the title.
You're right that it's in the URL but I didn't see that until you pointed it out.
Being pedantic is not an interesting position.
https://news.ycombinator.com/item?id=43656279#43656640