Building the System/360 Mainframe Nearly Destroyed IBM (spectrum.ieee.org)
herodotus 3 hours ago [-]
My Master's supervisor, at Wits university in Johannesburg, worked on the architecture of the 360 after graduating from the PhD program at Harvard. I remember him telling us how they went about deciding whether or not 32 bits would be a sufficient size for "most" floating point numbers. They were very systematic about it, scouring journals, talking with physicists and mathematicians and so on.
Synaesthesia 12 minutes ago [-]
And this is all before integrated circuits, so the logic is still physically very large, built from many discrete modules on boards mounted on a backplane.

https://en.wikipedia.org/wiki/IBM_System/360#Basic_hardware_...

https://en.wikipedia.org/wiki/Solid_Logic_Technology

froh 5 hours ago [-]
TIL that not only was the software side chaotic (it served as the backdrop for Fred Brooks's "The Mythical Man-Month"), but the hardware side almost failed too.

the article here ends around 1971 --- the mainframe would later save IBM again, twice: once when they replaced aluminum with copper in interconnects, and then when some crazy IBM Fellow had a team port Linux to s390. Which marked the birth of "enterprise Linux", i.e. Linux running the data centre, for real.

mananaysiempre 3 hours ago [-]
> [Porting Linux to s390] marked the birth of "enterprise Linux", i.e. Linux running the data centre

Did it though? Or was it the gradual phasing out of mainframe-class hardware in favour of PC-compatible servers and the death of commercial Unices?

rbanffy 3 hours ago [-]
> Or was it the gradual phasing out of mainframe-class hardware in favour of PC-compatible servers

Proprietary Unix is still around. Solaris, HP-UX and AIX still make money for their owners and there are lots of places running those on brand-new metal. You are right, however, that Linux displaced most of the proprietary Unixes, as well as Windows and whatever was left of the minicomputer business that wasn't first killed by the unixes. I'm not sure when exactly people started talking about "Enterprise Linux".

kevin_thibedeau 2 hours ago [-]
Redhat was doing enterprise Linux well before IBM was involved. It was the rational platform for non-legacy .com 1.0 businesses.
rbanffy 30 minutes ago [-]
Back then I went with Debian, but I agree - the early scale-out crowd went mostly with Red Hat. Back then there were a lot of companies still doing scale-up on more exotic hardware with OSs like AIX and Solaris.
talkingtab 2 hours ago [-]
This whole thing is very cool and worth reading.

BUT. I worked at a place that used IBM 360s. We ran jobs for engineers, a lot of Fortran along with assembly code. We had so much going on we could not code up and run things fast enough. The engineers and scientists got frustrated.

Then one day an engineer brought in an Apple II from home and ran the programs on that.

The earth shook. The very ground beneath us moved. Tectonic plates shifted. The world was never the same again! I think it was Visicalc.

Later there were other things. Soul Of A New Machine. The Mac.

I wonder how the compute power of a current high-end smartphone compares with an IBM 360? I know the graphics chip is better.

PaulHoule 20 minutes ago [-]
If I had to compare computers based on one number it would be the amount of RAM. The 360 had a 24-bit address space which could fit 16MB of RAM, although only the largest installations, like the one at NASA, were that big. iPhone 16s have 8GB of RAM, so you're talking 512 times the memory capacity, never mind that my desktop PCs are all loaded with 4-8x that of the phone and you can definitely get a big server with a few TB.
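A quick sanity check on those numbers (a sketch in Python; the 8GB iPhone figure is the one quoted above, and binary units are assumed):

```python
# Maximum memory directly addressable with the S/360's 24-bit addresses
s360_max_ram = 2 ** 24                # bytes
print(s360_max_ram // 1024 ** 2)      # 16 (MB)

# iPhone 16 RAM, per the figure above (assumed binary units)
iphone_ram = 8 * 1024 ** 3            # bytes
print(iphone_ram // s360_max_ram)     # 512
```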

An IBM 360/20, on the small side, ranged from 4kB to 32kB, which was similar to home computers circa 1980, before it was routine to have a complete 64kB address space.

Where the 360 crushed home computers was mass storage: 9-track tapes could store 80MB, compared to floppy disks that stored less than 200kB. Large storage relative to memory meant a lot of focus on external-memory algorithms. There was also already a culture of data processing on punched cards that carried over to the 360 (e.g. terminals have 80 columns because punched cards had 80 columns).
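The same kind of rough ratio for the storage gap (a sketch; the 80MB and 200kB figures are the ones quoted above, decimal units assumed):

```python
tape_capacity = 80 * 1000 ** 2       # ~80 MB per 9-track tape
floppy_capacity = 200 * 1000         # ~200 kB per floppy

# One tape held on the order of 400 floppies' worth of data
print(tape_capacity // floppy_capacity)  # 400
```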

winrid 1 hour ago [-]
A $50 smartphone is many orders of magnitude faster.
EncomLab 5 hours ago [-]
Alan Kay has promulgated many famous truths about computer science - one of them being that among all fields of study, it has the least regard for the people and discoveries that brought it to where it is today. Maybe it's just my own sense of history as I move into my 4th and likely last decade of working with computers, but I find this to be both true and lamentable.

This was a great article - thanks for sharing!

bluGill 3 hours ago [-]
How many of the people who made the steam engine possible do we remember? James Watt, of course, but many, many people made contributions in the materials science needed to make them useful. Not to mention many advances in valves. No doubt lots of other areas as well, but I'm no expert on the steam engine.
EncomLab 2 hours ago [-]
Not sure what you mean by this - it's not as if steam engines are a widespread technology today, and certainly no university is teaching "steam science", while nearly every school is teaching "computer science".

Perhaps this is just the attitude that drives Mr. Kay's point home - do individuals who are interested in CS place so little value on who and what came before them?

scrlk 4 hours ago [-]
The IBM Centennial Film has a short interview with Fred Brooks about System/360: https://youtu.be/VQ0PBve6Alk?t=84

Set to some nice Philip Glass music to boot.

noworld 6 hours ago [-]
The successor IBM Mainframes are still alive... for the time being.

https://www.redbooks.ibm.com/redbooks/pdfs/sg248329.pdf

froh 5 hours ago [-]
oh, they'll be around for a while yet.

they also moved on three more CPU generations since that redbook, to z17.

I think it's Linux on Z that makes it sexy and keeps it young, in addition to a number of crazy features, like a hypervisor that can share CPUs between tenants, hardware that supports live migration of running processes between sites (via fibre-optic interconnect), and the option to hot-swap any part of a running machine.

It's doing a number of things in hardware and hypervisor that need lots of brain power to emulate on commodity hardware.

_and_ it's designed for throughput, from the ground up.

Depending on your workload there may be very good economical reasons to consider a mainframe instead of a number of rack-frames.

toast0 7 minutes ago [-]
> Depending on your workload there may be very good economical reasons to consider a mainframe instead of a number of rack-frames.

This may be true, but because there's basically no on-ramp to running on a mainframe, there's no way anybody is going to try it unless they're already on a mainframe. Or maybe unless they really need something that only a mainframe can provide. But most companies can live with some downtime, and once you can live with some downtime, you have options for migration between sites, and options for migrating loads so you can swap parts on a stopped machine. Splurging on network infrastructure with multi-chassis redundancy is an easier step to take to get to a more reliable system than building against a totally different system architecture.

jareds 3 hours ago [-]
Do you have resources that provide info on companies using Linux on Z and the benefits of this versus commodity hardware? I used to work for a Mainframe ISV but the majority of our software ran on z/OS. I only saw customers using Linux on Z to begrudgingly run software that wouldn't efficiently run on z/OS, mainly Java applications when customers didn't want to deal with the complexity of specialty engines. I realize because our software focused on z/OS I had limited visibility into the full operations at our customers.
jasode 4 hours ago [-]
>Depending on your workload there may be very good economical reasons to consider a mainframe instead of a number of rack-frames.

For legacy companies yes but it would be very hard for new YC companies or existing non-mainframe companies to create a spreadsheet showing how buying a new IBM Z mainframe would cost less than the latest commodity x86 or ARM servers in the cloud or on-premise.

The IBM pricing for mainframes makes sense for legacy companies like banks and airlines with a million lines of old COBOL code that want to keep it all running with the latest chip technology. (The mainframes from a few years ago are coming off the lease and they need to upgrade to whatever new mainframe IBM is offering and sign another lease & service contract.) So, IBM mainframe prices are very expensive -- but legacy companies will pay it because migrating the code away from mainframes can be even more expensive.

It's similar to expensive enterprise pricing of Oracle, Embarcadero Delphi, Broadcom etc that takes advantage of existing customers already locked into their products. Virtually no new tech startup with a greenfield project is going to pay Delphi $1599-per-seat for each of their developers. Only existing customers stuck with their investment in Delphi code are going to pay that. (https://www.embarcadero.com/app-development-tools-store/delp...)

But some companies do endure the costs of migration to get out from IBM lock-in. There are lots of case studies of companies shifting their workload from mainframes to AWS/GCP/Azure. I can't think of a notable company that did the reverse. Even a mainframe hardware vendor like Fujitsu quit making mainframes and shifted to x86 running an emulation of their mainframe os.

Yes, IBM mainframe can run other workloads besides COBOL like Java and C/C++ but no company that's not already using mainframes would buy & deploy to IBM's Z hardware for that purpose.

winrid 59 minutes ago [-]
Building a fast reliable fault tolerant system is easier on mainframes than the cloud you're familiar with.

Imagine having transactions ACROSS services. Do you know how much bullshit and over engineering that gets rid of? A lot.

Spooky23 2 hours ago [-]
It’s a risk issue. 5 year high risk projects aren’t appealing to CIOs with an average tenure around 18-36 months. Even if it works, why should the next asshole get the credit?
KerrAvon 2 hours ago [-]
I think the point is that IBM’s market is shrinking, and they can’t acquire new customers. They will eventually need to stop making mainframes because there will be a crossover between the cost for new mainframes for remaining customers vs transition to commodity hardware cost.
rbanffy 5 hours ago [-]
> I think it's Linux on Z that makes it sexy and keeps it young

They feel fantastic when running Linux, but, if you don't need all the reliability features that come with the platform, commodity hardware might be a better choice for the kind of workload that has evolved on Linux.

> Depending on your workload there may be very good economical reasons to consider a mainframe instead of a number of rack-frames.

Absolutely - it makes a lot of the administrative toil disappear. I know clusters are sexy, but getting the job done is always better.

trollbridge 5 hours ago [-]
Other than IBM’s absurdly high pricing, they’re cheaper to run in almost every way than x86 machines (including cloud). I haven’t done the math to compare with aarch64/ARM.

But most people don’t want to deal with the hassle of dealing with IBM.

MrBuddyCasino 5 hours ago [-]
I suspect the main reason isn't just strictly economical. If you are Google or some such you can probably compensate by building smart software around commodity hardware, but most companies simply can't. Even if they have boatloads of money (banking, insurance) and can hire expensive talent, they simply can't successfully complete such an undertaking because they don't have it in their DNA, and management isn't incentivised to take on such a risky project.

In this case they will just use a mainframe, even if it isn't cheaper in the long run.

cmrdporcupine 2 hours ago [-]
Thing is Google does not use commodity hardware for their DCs as far as I can tell. They got famous for that back in the early 2000s but I think they abandoned that approach a long time ago.
MrBuddyCasino 53 minutes ago [-]
They may design their own hardware, but their approach to scaling and fault tolerance is still the same: not a small number of fast, very expensive, enterprise-grade machines that are unlikely to fail, but a huge fleet of servers that scale horizontally and can tolerate the loss of a machine.

Which is the Mainframe vs commodity server dichotomy.

speed_spread 3 hours ago [-]
A mainframe is the biggest single system image you can get commercially. It's the easiest, most reliable way to scale a classical transactional workload.
rbanffy 2 hours ago [-]
> A mainframe is the biggest single system image you can get commercially

It depends. As we have seen the other day, HPE has a machine with more than 1024 logical cores, and they have machines available to order that can grow up to 16 sockets and 960 cores on a single image of up to 32TB of RAM. Their Superdome Flex goes up to 896 cores and 48TB of RAM.

I believe IBM's POWER line also has machines with more memory and more processing power, but, of course, that's not the whole story with mainframes. You count the CPUs that run application code, but there are loads of other processors in there doing a lot of the heavy lifting so that the CPUs can keep running application code at 100% capacity with zero performance impact.

> It's the easiest, most reliable way to scale a classical transactional workload.

And that's where they really excel. Nobody is going to buy a z17 to do weather models or AI training.

NelsonMinar 53 minutes ago [-]
My partner makes a living writing z/OS assembly language, and has been for many years now. The platform is still going strong as a business. The main problem they face is that all the folks who know how to program these things are retiring (or dropping dead at their keypunches). It's very hard to convince new people to learn how to operate these systems.
ChicagoDave 1 hour ago [-]
My sister worked at IBM and one of my favorite stories is when Lotus tried to pressure IBM to spend millions on licensing, a few engineers went off and built their own spreadsheet application and IBM told Lotus to piss off.
svilen_dobrev 2 hours ago [-]
I have also found these interesting - a kind of selection/analysis/visualisation of Lou Gerstner's memoirs (part 2 is around S/360):

https://juliusgamanyi.com/2018/12/28/wardley-maps-an-illustr...

https://juliusgamanyi.com/2019/06/18/wardley-maps-an-illustr...

https://juliusgamanyi.com/2020/06/25/wardley-maps-illustrate...

Now, reading the article, this "[rivalry].. So intense was it that sometimes it seemed to exceed the rivalry with external competitors." reminded me of an old story about Motorola, where the rivalry went as far as departments reverse-engineering each other's chips, acquired via third parties..

www.chicagomag.com/Chicago-Magazine/September-2014/What-Happened-to-Motorola/

cafard 4 hours ago [-]
Thomas Watson, Jr.'s A Business and Its Beliefs was published in 1963 and makes no allusion to the S/360 work. It makes for curious reading today, for unrelated reasons.
kwanbix 2 hours ago [-]
As an ex-IBMer, I miss the IBM of old. So much innovation. Now they are just a consulting company. So sad to see it go that way.
winrid 58 minutes ago [-]
Indeed. Their current goal as communicated to shareholders is literally to move all engineering offshore.
FpUser 6 hours ago [-]
This was a great read
ListeningPie 6 hours ago [-]
Your comment was reason enough for me to read it, 8/10