GregDavidson 4 hours ago

The UCSD "Computer Scientists" were a small group of undergraduates working in Ken Bowles' lab. We were supposedly following Professor Bowles' directions, but he was a fairly conservative physicist and we had lots of radical ideas - fortunately, he was tolerant.

The p-code was not just machine-independent - by careful design it was approximately 1/4 the size of native code on those early 8- and 16-bit microprocessors, allowing us to almost quadruple the amount of code we could fit in 64K - minus the interpreter, which was 8K of machine code, and minus another 8K on PDP-11s for I/O space. We would also use native code for hotspots without appreciably expanding code size. This key idea is what allowed us to have a high-level OS and development environment on those dinky machines when everyone else was compromising quality to get things to fit.

Alas, CopyLeft had not yet been invented; the UC sold the P-System and we lost legal access to the code we'd written.

wduquette 3 days ago

The UCSD p-System was amazing. I used it on a Heathkit-branded PDP-11, the Apple II, and an HP-9000 workstation; and though the author doesn't mention it, the first version of Borland's Turbo Pascal for CP/M and DOS had a UI that was clearly influenced by the p-System's UI.

The coolest thing about UCSD Pascal when I first encountered it was that it supported "full screen" programs, notably the system's text editor, via the `gotoxy(x, y)` intrinsic. This procedure moved the cursor to the specified character cell on the terminal. Prior to this I'd only used line-oriented editors.
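A rough modern equivalent of that intrinsic, if you want to try the effect in a terminal today, is an ANSI cursor-positioning escape. This little sketch is my own illustration, not the p-System's implementation (which went through per-terminal configuration instead):

```python
# Sketch of a gotoxy(x, y) equivalent using ANSI escape sequences.
# The real UCSD intrinsic worked through per-terminal setup; this is
# just an illustration of the idea.
def gotoxy(x, y):
    # ANSI cursor positioning is 1-based and row-first: ESC [ row ; col H
    return f"\x1b[{y + 1};{x + 1}H"

# Move the cursor to the top-left character cell:
print(gotoxy(0, 0), end="")
```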

  • mbessey 3 days ago

    I did mention the Turbo Pascal connection briefly, and I'll probably make a more in-depth comparison in a later post on just the IDE.

    I used a fairly early version of Turbo Pascal for DOS for several years after High School. I can still remember the absolute terror of realizing you'd pressed "R" without saving first.

    • wduquette 2 days ago

      My bad; I missed the Turbo Pascal reference.

      I first heard of Turbo Pascal in a magazine called Profiles, published by Kaypro for owners of their computers; I'd recently gotten a Kaypro 4, which ran CP/M-80, my first computer of my very own, and I was pining for Apple Pascal/UCSD Pascal. I read the ad (and maybe a review?); it was $49.95, and I ordered it immediately. Nor was I disappointed.

    • dumdedum123 2 days ago

      Oh the memories! You are exactly right. I remember this as well.

    • sitkack 3 days ago

      I never used it, are you saying you could Run the current program and it might accidentally bring your entire system down without having saved the program?

      Seems like at least a two file circular buffer with autosave wouldn't take up too much space, or maybe streaming diffs into a compressed buffer (even on a 286, this shouldn't be too much trouble).

      • mbessey 2 days ago

        Yes, that exactly. Part of what made Turbo Pascal so fast was that it kept your entire program, and the compiler, in memory.

        You had options on the main menu to "compile" or "run" (which included compiling), but neither saved your edits first. You could save first, but on a floppy-based system, that could take a while.

        I want to say that behavior changed in Turbo Pascal version 2, or 3?

        • dumdedum123 2 days ago

          At least until 3 which is what I used.

      • cardiffspaceman 2 days ago

        But control-Kdsr saves your work to the device it came from and runs the program. Approximately the WordStar command set with additions for the task at hand.

      • kragen 2 days ago

        I think that is what he is saying, though I can't remember the TP command set well enough.

        Turbo Pascal wasn't written on a 286; it was written for CP/M, where I think it required 48KiB of RAM. A "fairly early version of Turbo Pascal for DOS" might have required 64KiB?

        You can't really stream things onto a floppy disk (remember that early home computers and the IBM PC didn't have hard disks; they didn't become standard equipment until the late 80s). You have to write a whole sector at a time, and seeking the disk to the appropriate track can take a second or two; rotating the disk to the right sector takes a significant fraction of a second on top of that. Journaling your edits to a journal file was a feature that EDT on VAX/VMS had around that time, but there wasn't really a practical way to do that on a home computer.

        • sitkack 2 days ago

          Yeah, I see that. I had an Amiga and mostly used the HD, so I don't really remember how slow the floppy drives were. Maybe for systems with tape drives we could live code and stream a journal to audio tape? A log of all the edit commands should be doable, maybe even as DTMF tones.

          That would be funny if early OSes had an 8-track (endless loop) as the circular journal. I thought that was how the Voyager probes worked, but the 8-track DTR on the Voyager probes did not have an endless loop. https://hackaday.com/2018/11/29/interstellar-8-track-the-low...

          Did you see this?

          Show HN: Torque – A lightweight meta-assembler for any processor (benbridle.com)

          https://news.ycombinator.com/item?id=43698801

          It is a Forth-inspired programmable assembler, based on https://wiki.xxiivv.com/site/uxn.html

          • kragen 2 days ago

            I didn't, thanks!

        • prosaic-hacker 2 days ago

          Early-80s VMS had a Keep/Purge system for file history. Every time you edited a file, its version number was bumped by 1. There was a command, "purge", that I think set the keep count, which defaulted to 3.

          This would have been within the capabilities of contemporary DOS and of disks of 160-360 KB, although slow (5-15 seconds).

          When Dave Cutler moved from DEC to MS I expected the DOS console under Windows to get that same feature. Disappointed. A gajillion lost hours could have been saved.

          • sitkack 2 days ago

            We had a VMS machine in school and I regret not spending more time learning from it.

            It would be awesome to have application level granularity for a time traveling file system, the undo/redo mechanism could be built into the OS.

            Reading up on https://en.wikipedia.org/wiki/Fossil_(file_system)

            • kragen 4 hours ago

              Incidentally, I was just reading https://cseweb.ucsd.edu/classes/wi19/cse221-a/papers/bobrow7... about TENEX, which offered roughly the same version numbering facility as VMS, though without the hierarchical filesystem directories VMS and Unix got from Multics. I don't know if TENEX got the idea from an earlier system. They don't mention one.

              It's kind of amazing that this one paper introduced command-line completion, copy-on-write pages, load averages, and CAM TLBs.

            • kragen 21 hours ago

              The way I remember it, there was a SET command to set how many versions of a file should be kept, and PURGE would delete all but the most recent version. You could see the date of each version in DIR (they were listed on separate lines) but there wasn't a convenient way to open "FOO.FOR as of yesterday morning". You would have to figure out what version number to ask for and open "FOO.FOR;53" or whatever.

              I agree that implementing this functionality in MS-DOS would have been relatively straightforward and acceptably efficient.

              It'd be great to have long-lived transactions in a filesystem, permitting higher-level undo and redo.
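              A toy sketch of that scheme (invented code, just to show how little machinery it needs): each save writes NAME;n with the next version number, and purge keeps only the newest few, like VMS's default of 3.

```python
# Toy sketch of VMS-style file versioning: FOO.FOR;1, FOO.FOR;2, ...
# Each save creates a new version; purge deletes all but the newest
# `keep` versions. Invented code, not VMS's actual implementation.
import os
import re
import tempfile

def list_versions(dirpath, name):
    """Return sorted version numbers of NAME;n files in dirpath."""
    pat = re.compile(re.escape(name) + r";(\d+)$")
    return sorted(int(m.group(1)) for fn in os.listdir(dirpath)
                  if (m := pat.match(fn)))

def save(dirpath, name, data):
    """Write a new version with the next version number."""
    versions = list_versions(dirpath, name)
    ver = (versions[-1] if versions else 0) + 1
    with open(os.path.join(dirpath, f"{name};{ver}"), "w") as f:
        f.write(data)
    return ver

def purge(dirpath, name, keep=3):
    """Delete all but the newest `keep` versions, like PURGE."""
    for v in list_versions(dirpath, name)[:-keep]:
        os.remove(os.path.join(dirpath, f"{name};{v}"))

# Demo in a scratch directory:
d = tempfile.mkdtemp()
for i in range(5):
    save(d, "FOO.FOR", f"edit {i}")
purge(d, "FOO.FOR", keep=3)
```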

  • musicale 2 days ago

    > Heathkit-branded PDP-11

    The idea that you could save money by soldering together your own PDP-11 system from parts, and that there was a company that actually sold the kits (as well as assembled versions), is terrific.

    And today (assuming you can find a vintage DCJ11 CPU or equivalent) you still can build your own hardware PDP-11 via PDP-11/Hack and other designs! (Though personally I'll probably go for an FPGA version.)

    • wduquette 2 days ago

      I watched my dad build the PDP-11, terminal, and paper tape reader/punch. Eventually we got a dual-8” floppy drive; he might have built that, too, I don’t remember.

dlinder 2 days ago

Around 1995, our high school "Pascal I" and "Pascal II" classes were taught in a forgotten Apple //e lab in the Math wing of the school. The PC and Mac labs were occupied by typing, word processing, and desktop publishing classes. I think every other kid in class groaned, but to a hamfest scrounger of PDPs, Vaxen, and weird UNIX workstations, UCSD p-System Pascal on Apple hardware was weirdly intriguing, the cherry on top being that the whole lab was served by a Corvus hard disk shared over, I think, an "Omninet" network. We'd all come in, turn on the lights, turn on the computers, and then have the lecture portion of class while this poor early NAS would serve Pascal to 20-odd machines simultaneously. I think we saved our work on floppy disks, though maybe that was a backup, as I think I recall turning in our work by saving to the Corvus? Even at the time, it all had a very "you are living the early experimental days" feeling to it.

  • icedchai 2 days ago

    That brings back memories. My high school also had a Corvus. You could definitely save files to it. I remember writing some Basic programs, and they would show up on a ProDOS "device" (or maybe it was a volume). That was the first time I saw any type of network.

stevekemp 3 days ago

I "recently" wrote a CP/M emulator, and I have a lot of love for the kinda vintage software out there that still runs on it.

https://github.com/skx/cpmulator/

Over the past few days I've seen posts on Hacker News discussing 6502 assembly, people returning to the Infocom games, and similar things. There's a lot of interest out there in this retro stuff, even now.

  • mbessey 3 days ago

    Surely some of it is just nostalgia for a "simpler" time, but I think there is a legitimate reason to preserve and celebrate these older systems, too.

    It's essentially impossible for a single person to build something as complex as a modern PC "from scratch", or indeed to build an operating system that compares to Windows, Linux, or MacOS.

    These old microcomputer systems are simple enough for one person or a small team to understand and build, and they are/were capable of doing "useful work", too, without being as over-abstracted as some "teaching systems" are.

    I think that for me, part of the point of digging into something like the p-System is to show some of the brilliant (and stupid) ideas that went into building something as ambitious as a "universal operating system" in the mid-1970s.

    • mst 2 days ago

      Having cut my teeth on early Archimedes machines, I have a deep fondness for arm2's 16 instructions and the (lost during a house move, I suspect) assembly book I had that gave me enough of a description of the internals of the chip that I could desk check my assembly in my head with reasonable confidence that I was mentally emulating what the chip was actually doing rather than just what outputs I'd get for a given set of inputs.

      Having to remember where I'd put the relevant chunk of assembler any time I needed a division routine was, admittedly, less fun, but the memories remain fond nevertheless :)

    • WalterBright 3 days ago

      I sometimes think about that. Consider the early versions of MS-DOS. A modern programmer could crank that out with little difficulty in a short time.

      • kragen 2 days ago

        I think Tim Paterson did crank it out with little difficulty in a short time? He even called it "Quick and Dirty Operating System".

        • WalterBright 2 days ago

          Which makes one wonder, why weren't there others (like Gary Kildall)?

          • kragen 19 hours ago

            I'm not sure I understand what you mean.

            You probably remember Gary did start selling an MS-DOS clone (DR-DOS) after a few years, when it became clear CP/M-86 was dead. IIRC that's what inspired Microsoft to start working on MS-DOS again after several years of letting it languish. They also put anti-DR-DOS code into Windows so you couldn't start it up on DR-DOS.

            And, as you know, there were a number of other bare-bones "operating systems" like MS-DOS and CP/M in those days: HDOS, TRS-DOS, ProDOS, etc. But once everyone was writing their apps for MS-DOS, there was little point in bringing out a new OS that wasn't compatible with it unless it was dramatically better in some way.

            So, why weren't there other members of what set?

            • WalterBright 16 hours ago

              > there was little point in bringing out a new OS that wasn't compatible with it unless it was dramatically better in some way.

              Make a free open source one.

              • kragen 15 hours ago

                We do have FreeDOS now! Someone could have written it in 01981, but, as I understand it, the ideological motivation for such activities wouldn't be articulated until Stallman founded GNU years later.

          • musicale 2 days ago

            QDOS had the advantage of being able to reimplement the CP/M-86 design rather than starting from scratch.

            There were lots of disk operating systems created for 8 and 16-bit machines, as well as a number of BASIC + DOS type systems. But CP/M is the one 8-bit OS to rule them all - even running on an Apple II or C64 with a Z-80 CPU card or cartridge.

            • WalterBright 16 hours ago

              > QDOS had the advantage of being able to reimplement the CP/M-86 design rather than starting from scratch.

              CP/M was little different from the PDP-11 operating system, which it used as a model.

              CP/M was not as innovative as often thought.

              Both CP/M-86 and MSDOS were just an interrupt table and some implementation routines. The 8086 chip was designed around that interrupt table, so of course any OS would use it.

              • kragen 14 hours ago

                I assume you're talking about RT-11? Do you want to elaborate on the similarities? Although I've never used RT-11 (just CP/M, HDOS, MS-DOS, and VMS), I think they may be more superficial than you're suggesting.

                Looking at https://bitsavers.org/pdf/dec/pdp11/rt11/v1_Sep73/DEC-11-ORT... (RT-11 System Reference Manual, DEC-11-ORUGA-A-D, Sept. 1973, Chapter 8, Programmed Requests) I see printing of ASCIZ strings, 16 numbered I/O channels for open files, stream (wordwise) rather than purely blockwise access to those files (though the start position is specified as a block number), an open set of device names, RADIX-50 filenames, the ability to "swap" the "user service routines" into memory temporarily so they don't have to be resident the whole time your program is running, "tentative files" that automatically replace a permanent file if successfully closed, and asynchronous I/O (.READ and .WRITE as opposed to .READW and .WRITW or .READC and .WRITC); all of these would have been improvements over the design CP/M actually used.

                On the other hand, it says RT-11 only supported contiguous storage of files (like the p-System), a CRLF is automatically appended to any string you print, and the filenames are 6 characters rather than 8, which are points where CP/M wins.

                The whole FCB thing, which is about 80% of CP/M BDOS, seems to have been absent in the RT-11 system call interface. I'm not sure whether it's better or worse (it's substantially more painful to use, but permits your program to allocate space for the number of open files it's actually going to use) but it's certainly a very different approach. RT-11 has .SAVESTATUS and .REOPEN to work around the 16-file limitation when necessary.

                Because you can only read or write starting at a block boundary in RT-11, it seems like it usually wouldn't make sense to read less than a block. But the inability to read more than a block was a real bottleneck for I/O in CP/M, as Tim Paterson explains in the blog post I linked from https://news.ycombinator.com/item?id=43729165:

                > At least part of the reason CP/M was so much slower was because of its poor interface to the low-level “device driver” software. CP/M called this the BIOS (for Basic Input/Output System). Reading a single disk sector required five separate requests, and only one sector could be requested at a time. (The five requests were Select Disk, Set Track, Set Sector, Set Memory Address, and finally Read Sector. I don’t know if all five were needed for every Read if, say, the disk or memory address were the same.)

                (Actually, this is the BIOS interface; I think the BDOS interface was more reasonable, but still only able to read one 128-byte record at a time.)

                Even the "Keyboard Monitor" described in Chapter 2 sounds very different from the CP/M command processor, for example, using "." as its prompt, supporting user-defined device names and command abbreviation, being able to make octal dumps of RAM and change its contents byte by byte, requiring an explicit "run" command to run programs, no way to pass command-line arguments to programs, and echoing character deletion in a teletype-friendly fashion\noihsaf\format. Most of the control keys are the same, I guess? And the editor sounds pretty similar to CP/M's benighted ED?

                • WalterBright 14 hours ago

                  Things like the TYPE command, the DEL command, the 8.3 case-insensitive filenames (6.3 for RT-11), the / for switches, the drive: prefix, CRLF, etc. Anyone using RT-11 could pick up MSDOS in about 5 minutes. I know I did (I had an H-11, and bought an IBM PC).

                  I bought a hard disk drive for my H-11, wire-wrapped an interface board for it, and wrote the device driver for it. It was a fun project, and didn't take much time. It was straightforward. I even got RT-11 to bootstrap off of it.

                  Sorry, I don't think any of that stuff is a work of genius.

                  My profile pic on twitter is of the machine:

                  https://x.com/WalterBright

                  from before I added the HDD.

                  It's also been 40 some years since I touched an 11, so my memory of the details needs a refresh :-/

                  • kragen 10 hours ago

                    Some of the things you're talking about are features MS-DOS had in common with RT-11 but where CP/M was totally different; specifically, DEL was called ERA on CP/M, and CP/M didn't have switches. (Except PIP, which, bizarrely, wrapped its switches in square brackets: PIP A:=B:*.COM[W]. See https://ia902808.us.archive.org/23/items/osborne-cpm-users-g...) MS-DOS got drive letters from CP/M; on RT-11, as you might remember, instead of A:, B:, C:, etc., you had SY0:, SY1:, and DK:. (HDOS copied that, as well as /switches.) I'm not sure where the 8.3 filenames are from, but CP/M and MS-DOS had them, and, as you say, RT-11 didn't, using 6.3 instead.

                    So, of the six similarities you listed between CP/M and RT-11, four were actually differences; only two were actually similarities (the TYPE command and the use of CRLF), with a third debatable one (8.3 is like 6.3 in that a three-character file type code forms part of the filename in some contexts).

                    If CP/M had used RADIX-50 like RT-11 did, it could have had case-insensitive 9.3 filenames in 8 bytes instead of 8.3 filenames in 11 bytes. I think that would have been a big improvement.
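                    For illustration, a sketch of that packing (my own code, using the PDP-11 RADIX-50 character order): 40 symbols means three characters per 16-bit word, so a 9.3 filename's 12 characters fit in four words.

```python
# Sketch of RADIX-50 packing: 40 symbols, three per 16-bit word
# (40^3 = 64000 < 65536), so 12 characters fit in 8 bytes, versus
# the 11 bytes CP/M spent storing 8.3 as raw characters.
CHARSET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"  # PDP-11 order

def rad50_word(chars):
    """Pack exactly three characters into one 16-bit word."""
    w = 0
    for ch in chars.upper():
        w = w * 40 + CHARSET.index(ch)
    return w

def pack_9_3(name, ext):
    """Pack a 9-character name and 3-character type into 8 bytes."""
    name, ext = name.ljust(9)[:9], ext.ljust(3)[:3]
    words = [rad50_word(name[i:i + 3]) for i in (0, 3, 6)]
    words.append(rad50_word(ext))
    return b"".join(w.to_bytes(2, "little") for w in words)

assert len(pack_9_3("HELLO", "PAS")) == 8  # 12 characters in 8 bytes
```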

                    So, I don't think any of CP/M's deviations from RT-11 are a "work of genius", but it wasn't just a copy of RT-11, "little different", as you say. It clearly deviated from RT-11 in a lot of ways, to an extent that suggests drawing from some other source. Maybe RSX-11, dunno.

                    The page you link to just says "Sign in to Twitter". For the sake of courtesy, I'd rather not go into how I feel about that invitation.

                    • WalterBright 8 hours ago

                        The differences, such as A: vs SY0:, are differences only in detail. The unix command line is fundamentally different, not just different in detail. BTW, RT-11 used PIP.

                      > The page you link to just says "Sign in to Twitter". For the sake of courtesy, I'd rather not go into how I feel about that invitation.

                      It goes to my profile page. Of course, I am logged in to twitter. I had no idea that it was necessary to sign in to twitter to see my profile page. There was no nefarious intent. I am not aware of any benefit that may accrue to me from you signing up for a twitter account.

                      • kragen 7 hours ago

                        I agree that Unix was fundamentally different in many ways, but CP/M wasn't a copy of Unix either; if anything, RT-11 was slightly more Unix-like than CP/M was. Because CP/M was evidently worse than RT-11 in many apparently unnecessary ways, I suspect that it was drawing from some other source.

                        I didn't suspect any nefarious intent, but if I didn't tell you it had happened, you would never have known. My apologies if it sounded like I was blaming you for it.

                        • WalterBright 6 hours ago

                          I don't see any heritage of unix in CP/M, but I do see a heritage from DEC. Not an exact copy, of course.

                          > if I didn't tell you it had happened, you would never have known

                          That's right, and now I know. Thanks!

                          > My apologies if it sounded like I was blaming you for it.

                          Thank you. Apology accepted!

            • kragen 21 hours ago

              I broadly agree, but I would quibble on the "-86" part; CP/M-86 uses a different interrupt than MS-DOS, so I suspect that the model for QDOS was CP/M-80. I'm not even sure CP/M-86 had been released when Paterson wrote QDOS.

              • kragen 17 hours ago

                Paterson claims CP/M-86 wasn't released yet in http://dosmandrivel.blogspot.com/2007/09/design-of-dos.html?...:

                > We knew Digital Research was working on a 16-bit OS, CP/M-86. At one point we were expecting it to be available at the end of 1979. Had it made its debut at any time before DOS was working, the DOS project would have been dropped. SCP wanted to be a hardware company, not a software company.

    • kragen 3 days ago

      Probably what you want to check out is Oberon, which is a modern PC built basically from scratch, along with an operating system that compares to Windows, Linux, or MacOS, built originally not by a single person but by maybe a dozen people. It's capable enough that it was the daily driver for numerous students during the 80s; the earliest versions of it were built in-house by necessity because graphical workstations weren't a product you could buy yet. Wirth's RISC CPU architecture avoids all the braindamage in things like the Z80 and the 80386. I think that, with their example to work from, a single person could build such a thing.

      Oscar Toledo G. also wrote a similar graphical operating system in the 01990s and early 02000s, working on the computers his family designed and built (though using off-the-shelf CPUs). You can see a screenshot of the browser at http://www.biyubi.com/art30.html and read some of his reflections on the C compiler he wrote for the Transputer in his recent blog post at https://nanochess.org/transputer_operating_system.html.

      There's a lacuna in the recursivity of Wirth's system: although he provides synthesizable source code for the processor (in Verilog, I think) there's no logic synthesis software in Oberon so that you can rebuild the FPGA configuration. Instead you have to use, IIRC, Xilinx's software, which won't even run under Oberon. Since then, though, Claire Wolf has written yosys, so the situation is improving on that front.

      CP/M is interesting because it's close to being the smallest system where self-hosted development is bearable; the 8080 is just powerful enough that you can write a usable assembler and WYSIWYG text editor for it. But I don't think that makes it a good example to follow. We saw this weekend that Olof Kindgren's SeRV implementation of RISC-V can be squoze into 5900 transistors (in one-atom-thick molybdenum disulfide, no less) https://arstechnica.com/science/2025/04/researchers-build-a-... https://news.ycombinator.com/item?id=43621378 which is about equivalent to the 8080 and less than the Z80. And Graham Smecher's "Minimax" https://github.com/gsmecher/minimax is only two or three times the size of SeRV and over an order of magnitude faster.

      There's no reason to repeat the mistakes Intel made in the 01970s today. We know how to do better!

      • musicale 2 days ago

        > There's no reason to repeat the mistakes Intel made in the 01970s today. We know how to do better!

        CP/M, WordStar, and Turbo Pascal were/are pretty good though!

        As you suggest, someone really should port an open source FPGA toolchain to Oberon to honor Prof. Wirth's great work.

        • kragen 20 hours ago

          I agree about WordStar and TP. You can kind of justify all of CP/M's problems by reference to the limits of the machines it had to run on (for example, they often had no real-time clocks), but I still think you could do better in many ways. For example:

          - The command processor didn't have to be so limited, as amply demonstrated by ZCPR, or so hard to use, as demonstrated by the p-System.

          - Record-oriented file access was probably a mistake. The 128-byte record size meant that you still had to use 2-byte record numbers to get files of over 32K, and that writing a single record required wastefully reading a whole 512-byte sector in order to not lose the other three records in the sector. Byte-based file access would have been far better for the usual case, even at the expense of needing 24-bit seek offsets for very large files (over 64K). This would have to be built on sector-based access, but the intermediate layer of record-based access is purely dead weight for most applications. Sector-based access (or access in larger blocks as in Forth) would allow you to use 1-byte sector numbers until the advent of double-density disks.

          - Using different BIOS calls to write to the terminal, the printer, and the paper tape punch was obviously a mistake, and one that made it very difficult to extend the set of available devices. I believe HDOS did a better job here. Moreover, if you adopted byte-based file access, you could use a single BDOS call to write to a file, the terminal, the printer, or the punch, and analogously for reading. This also would have made it easier to support multiple terminals.

          - The user interface of ED was stuck in the teletype era. But almost nobody ran CP/M on a teletype, because teletypes cost more than CP/M crates. Before long most CP/M machines had a monitor built in, like the Kaypro, Osborne, and H-89. Even BASIC-80's unusably bad line editor gave you a live display of the line you were editing, and WordStar demonstrated that it was possible to do much better on the same hardware.

          - The "user" facility in its filesystem was useless.
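          To make the record-vs-sector point concrete, here's an invented sketch (not CP/M code) of what writing one 128-byte record onto a disk with 512-byte sectors costs:

```python
# Illustrative sketch: putting one 128-byte record into a 512-byte
# sector forces a read-modify-write so the other three records in
# that sector aren't lost. Invented code, not CP/M's.
SECTOR, RECORD = 512, 128

def write_record(disk, record_no, data):
    """disk is a list of sector-sized bytes objects."""
    assert len(data) == RECORD
    sector_no, offset = divmod(record_no * RECORD, SECTOR)
    sector = bytearray(disk[sector_no])      # read the whole sector first
    sector[offset:offset + RECORD] = data    # patch in the one record
    disk[sector_no] = bytes(sector)          # write the whole sector back
```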

          So, I'm not a huge fan of CP/M. I think virtually all of its major design decisions were mistakes, except for the BIOS/BDOS split, though not such serious mistakes as to make it completely unusable. I'm interested to hear why you disagree so strongly.

          • stevekemp 2 hours ago

            Interesting that you picked out the use of different API calls for writing to terminal, printer, and the tape. There was at least an attempt at unifying that with the "IOByte" configuration.

            The idea was that depending on the state of the IOByte the actual destination of "stuff" could vary.

            Of course in the CP/M emulator I wrote/maintain I ignore that byte, because it turns out everybody else did too. (Kinda like user-numbers/areas, most people ignored them.)
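            For anyone curious, the IOByte is just four 2-bit fields, one per logical device; here's a quick sketch of decoding it (field layout per the CP/M 2.2 convention; the physical device behind each value varied by machine):

```python
# Sketch of decoding CP/M's IOBYTE (stored at address 0003h): four
# 2-bit fields, each selecting one of four physical devices for a
# logical device. Field positions per the CP/M 2.2 convention.
FIELDS = {"CON:": 0, "RDR:": 2, "PUN:": 4, "LST:": 6}

def decode_iobyte(iobyte):
    """Map each logical device to its selected physical device number."""
    return {name: (iobyte >> shift) & 0b11
            for name, shift in FIELDS.items()}
```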

          • kragen 17 hours ago

            I just found Tim Paterson's article at http://dosmandrivel.blogspot.com/2007/09/design-of-dos.html?... which says my comment above got an important thing wrong: the sector size on 8" disks was typically 128 bytes, so CP/M was reading per sector. He mentions that North Star DOS used 256-byte sectors. I'm pretty sure the logical HDOS sectors on my H-89 5¼" single-density floppies were 512 bytes, but I've never interacted with the low-level format of the disks. They were 100K per side, 10 sectors per track (with 11 holes punched in the disk to indicate their positions), which I guess would imply 20 tracks, which sounds too low! Maybe the physical sectors were 256 bytes.

            https://heathkit.garlanger.com/diskformats/HDOS_Disk.pdf confirms: 256 bytes, 10 sectors, 40 tracks.

            Paterson's article explains why he copied the FAT filesystem from Microsoft BASIC but extended the cluster numbers to 12 bits. He also has some pretty damning criticisms of both the CP/M filesystem design and its BIOS interface.

  • musicale 2 days ago

    I like how your emulator (like RunCPM) can work with native directories and files. It's much more convenient than messing around with disk images.

    • stevekemp 20 hours ago

      Thanks! One of my biggest frustrations with the retro scene is having to deal with old compression formats and disk archives, so that was very much a design choice.

      Many of the recent/modern emulation projects work the same way. In addition to RunCPM there's also the excellent rust-based iz-cpm project which I enjoyed studying at times.

creeble 3 days ago

VersaCAD was the only commercial program I can remember for p-System. It ran from floppies, or a hard drive that could be set to boot it.

It was a great CAD program, in many ways ahead of AutoCAD in its time. But AutoCAD was written in C, which proved far more popular (and, ultimately, more portable) than UCSD-p.

  • cduzz 2 days ago

    The first Wizardry[1] was built on the P system.

    My dad spent a small fortune buying an IBM PC with 544k of memory and an external Davong hard drive that emulated an enormous floppy drive. He also put UCSD P system on this beast, along with a tecmar graphics master...

    I used the p-System's editor to write school papers for a long time. It was some weird modal editor; he switched to DOS and Turbo Pascal after a while...

    I got to use this computer for games when he wasn't using it and found that the _wizardry_ save game disks were formatted in UCSD P system format and I could even noodle around with the save games (mostly resulting in the game crashing).

    [1]https://en.wikipedia.org/wiki/Wizardry:_Proving_Grounds_of_t...

musicale 2 days ago

> Get Apple Pascal up and running in some kind of emulator on my Mac, so I can experience it again

I wonder if Lisa Pascal will run in a Lisa emulator...

> Build a p-machine emulator, in Rust

Probably a p-code interpreter and/or p-system VM! (Analogous to the JVM but for Pascal/p-system rather than Java and its bytecode. p-code translator/JIT compiler probably left as an exercise for the reader.) I'm surprised that nobody seems to have written one in JavaScript and/or webassembly... the latter basically being p-code for the 2020s.
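A p-code-style interpreter really is just a dispatch loop over a stack machine. As a hedge: the opcodes in this toy sketch are invented for illustration, not the real UCSD p-machine set.

```python
# Toy stack-machine dispatch loop in the spirit of a p-code
# interpreter. Invented opcodes, not the UCSD p-machine's.
def run(code):
    stack, pc = [], 0
    while pc < len(code):
        op = code[pc]
        pc += 1
        if op == "lit":            # push the following literal
            stack.append(code[pc])
            pc += 1
        elif op == "add":          # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "halt":         # stop, returning top of stack
            return stack.pop()

assert run(["lit", 2, "lit", 3, "add", "halt"]) == 5
```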

  • mbessey 2 days ago

    I haven't seen a web-based p-System either, which was a little surprising to me. You can run either the Apple or CP/M versions by emulating the entire computer, though.

    That is probably why nobody's felt the need to make a p-System for the web.

SomeHacker44 2 days ago

I used this in/around 1982 or 83 on an Apple ][+. I remember hacking Wizardry, my favorite game, and discovering it seemed to run on Apple Pascal as well. Such fun times. I love this hacking project of the OP.

kragen 3 days ago

I used the p-System on a Heathkit H-89.

I think the overall approach of future-proofing your software by compiling it to a simple, portable virtual machine is valid. Since the p-System, in addition to the JVM and Zork Z-machine mentioned in this post, we've seen Smalltalk-80, PostScript, Open Firmware aka OpenBoot, Glulx, the AS/400, the Open Software Foundation's ANDF (the architecture-neutral distribution format), Google's NaCl and PNaCl, Microsoft's CIL, JS as a compilation target, WebAssembly, uxn, and the revival of old video game consoles in emulation as a stable software target.

A problem with this approach is that most of these portable platform layers are still far too unstable for reliable archival; even video game emulators face a constant struggle to maintain compatibility as they are updated to keep up with whatever platform they're running on. Platforms like the JVM, which make more concessions to efficiency than MAME, have even more difficulty, so the JVM's slogan of "write once, run anywhere" was widely mocked as "write once, debug everywhere". But it's a good aspiration. I'd like to see it realized in a practical way.

My memory of the p-System is that it was almost unusably slow, a problem made worse by its filesystem being so simple it didn't support fragmentation, so sometimes you had to defragment your floppy disk in order to write new files onto it. It's true that its UI was screen-oriented, as wduquette said, and it was driven by a Lotus-1-2-3-like menu system, which enhanced its usability quite a lot.

Being a pure bytecode interpreter was a serious handicap, especially on the sub-1-MIPS machines we were running it on. EUMEL managed to make a go of it. I never got a chance to use EUMEL on an actual Z80, but I hear it was usably fast; I suspect the EUMEL virtual-machine instruction set (which included string operations) and operating environment went a long way towards compensating for the slowness of bytecode interpretation, much as Numpy does on CPython today.

I suspect you could have done a better job with a bytecode more like Dalvik, designed for efficient JIT compilation by leaving less work for the JIT compiler. But Deutsch and Schiffman didn't publish the first JIT-compilation paper until a few years after the p-System was released. (Schiffman told me a self-deprecating joke about this which I guess I can't really repeat.)

Long Tien Nguyen and Alan Kay published a paper on designing a very simple virtual machine for such digital preservation 10 years ago: https://tinlizzie.org/VPRIPapers/tr2015004_cuneiform.pdf

I think these ideas point the way to achieving the kind of future-proofness that the p-System was shooting for.

  • mbessey 3 days ago

    Performance of the p-System is definitely an issue on the Apple II, especially in the "OS" interface and editor, which is all interpreted. But running applications built on it wasn't half-bad.

    It's also important to remember that to a large extent, Apple Pascal on the Apple II and other late 1970s home computers wasn't competing with sophisticated native-code compiler suites, but with interpreted BASIC and with assembly language.

    Pascal was vastly more productive than writing in Assembler, and much faster in execution than Apple BASIC. It even had reasonable support for integrating assembly routines for places where you really needed the speed.

    The p-System was A LOT more usable on the HP Motorola 68k workstations I used it on. Those were more than adequately fast for the sort of software we were writing for them in 1985.

    Thanks for the link to cuneiform, I think I read that paper once, long ago. Will definitely check it out.

    • kragen 3 days ago

      The interpreted BASIC on most home computers was Microsoft BASIC-80, which, as I remember it, was also painfully slow. There were lots of programs in it, and it was good enough for some games, but for the most part "real software" for those computers was written in assembly language. Even Turbo Pascal was written in assembly language, not Pascal.

      I think now we know how to do better.

  • TheOtherHobbes 2 days ago

    Computers are more like an assembly of subsystems than a single thing, and you can just about get away with agnostic byte code as long as you ignore most of the subsystems.

    So cross-platform byte code is sort of viable on mid-80s text terminal systems with limited memory and addressing. But as soon as you start adding graphics cards, video acceleration, sound, and AI accelerators, you need to add abstraction layers which will be limited and inefficient compared to the hardware.

    And if the hardware isn't available you can either say 'This won't run at all' or emulate it in software, which will be even slower.

    • kragen 2 days ago

      There's some truth to that, but I think you're overstating the case. The vast majority of software we run on a day-to-day basis doesn't use any of the stuff you mentioned, so nothing is stopping it from moving to platform-agnostic bytecode, except that that bytecode doesn't exist yet.

      For example, 2-D graphics acceleration hardware (whether in the form of character generators or in the form of blitters and line drawing) was really important for usability up to the 01990s. This is a major reason the X-Windows protocol is so big and complicated: it needed a way to expose the acceleration capabilities of the hardware to applications so they could draw with it. This was, as you said, "limited and inefficient and limited compared to the hardware". But basically, as CPUs got faster, we gave up on all that stuff around the turn of the millennium, and now most 2-D applications really just want to fill up pixel buffers and swap the displayed buffer between screen refreshes. It's a very, very simple interface (you might say "inefficient abstraction layer").

      Something similar happened with sound. In the 80s and 90s our sound cards did square and sawtooth waves, LFSR noise generation, envelopes, FM synthesis, wavetable synthesis, etc. Different sound cards had different instruments! Now all I want to do with my sound card is send it a sequence of samples, maybe get back a sequence of samples from the microphone, maybe choose from among multiple outputs. Another very, very simple interface.

      3-D games do still use 3-D acceleration, of course. That's not a simple interface. They also depend pretty heavily on SIMD instructions. The same is true of video codecs.

      But my SSH client, my mail server, my IRC client, my text editor, my compiler, my filesystem, my Game of Life simulator, my system logger, my Sudoku game, my circuit design program (KiCad), my PDF viewer, my audio editor (Audacity), and so on — those aren't using "graphics cards, video acceleration, sound, and AI accelerators", except through the very, very simple interfaces we're talking about above. Most of them don't even use floating-point math! They could easily be in platform-agnostic bytecode because they already do ignore most of the subsystems in my computer.

      • wahern 2 days ago

        > This is a major reason the X-Windows protocol is so big and complicated

        X was trying to, in a sense, remote hardware acceleration. Wayland doesn't bother at all: clients render their windows locally and share (or send) a pre-rendered graphic. But if you use an older X app across the network, such as one that uses server-side fonts, the experience is often much smoother, IME, than the Wayland-universe alternatives.[1] Once upon a time even web browsers, like ancient versions of Netscape, were shockingly responsive over the network, even with mixed text and graphics; almost indistinguishable from local (and this at a time when X11 on a 486 was a smoother experience than Windows). The popular toolkits now render the window on the client even when using X, so those capabilities are largely unused today.

        [1] In that case, the composition and rendering of all the widgets, text, and images within a window is truly local, i.e. on your local X server.

  • ahefner 2 days ago

    Not ideal, but MS-DOS seems to me like the most practical universal software platform. DOSBOX isn't going anywhere.

    • kragen 2 days ago

      It is of some practical use, but there are a lot of slightly incompatible versions of the IBM PC and of MS-DOG, so it doesn't offer the kind of strong reproducibility that I'm looking for.

    • 3036e4 2 days ago

      A DOS-executable is far more write-once-run-anywhere now than Java ever was, and the best thing is that no one is going to ever deprecate any API in DOS. There is no software rot in a dead OS.

      • kragen 2 days ago

        You're damning it with faint praise, though.

kwertyoowiyop 3 days ago

Just think, Pascal on an Apple II cost about $1,800 in today’s dollars.

  • timbit42 2 days ago

    Was that before or including the extra hardware (RAM) to run it?

whartung 2 days ago

From the Terak Museum[0], via the Terak thread[1], there was this anecdote:

  > What does the Terak have to do with the Macintosh and MacPaint? The Macintosh's operating system was bootstrapped on an Apple Lisa computer. The Lisa's OS was written on the Lisa using a port of the UCSD Pascal compiler and P-System. The Lisa's port of the P-System was prepared on an Apple II, which had its own version of the P-System that was developed by Bill Atkinson, the Apple programmer who later wrote MacPaint. Atkinson ported the P-System to the Apple II while visiting UCSD, who helped Apple with the port using a Terak. Some people think he got the idea for MacPaint from the paint programs he saw in use on the graphics-intensive, square-pixel Terak. Thanks in part to Gary Capell (gary@cs.su.oz.au) for parts of this story.
It's a great anecdote. And it also makes one think "They wanted to use Pascal so badly, they were willing to use UCSD Pascal for it."

UCSD Pascal was a wonder and a pioneer. Unfortunately, it arrived in the early era of microcomputers, when microcomputers were, frankly, horrible. File this anecdote under "it's amazing we managed to get any software written at all" back then.

In the P-System, you had the core VM, and everything else was compiled into that P-code: the shell, the compiler, the file utilities, everything. Again, it's a marvel. It's (almost) self-hosted (did it come with an assembler? I don't recall). But if you had a VM running, everything else was self-hosted. The compiler compiled itself, being written in UCSD Pascal. A marvel, yes; speedy, not so much. It certainly qualified as "better than nothing".

It had a lousy file system. Files had to be contiguous, which makes it difficult to write to more than one file at a time. Compacting the disk was a routine chore. The editor was also very interesting. Also, text files naturally compressed leading whitespace -- important with Pascal source code, where whitespace is probably 20-30% of the space on disk.
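
As I recall the text-file format, that whitespace compression was a simple run-length scheme for leading blanks: a line could begin with a DLE byte (0x10) followed by a count byte holding 32 plus the number of spaces. A hedged decoder sketch in Rust (details worth double-checking against the p-System documentation before relying on them):

```rust
// Decoder sketch for the UCSD text-file leading-blank compression.
// Assumed format: an optional DLE byte (0x10) followed by a count
// byte encoding (32 + number of leading spaces); the rest of the
// line is stored literally. Hedged reconstruction, not gospel.
const DLE: u8 = 0x10;

fn expand_line(raw: &[u8]) -> String {
    if raw.len() >= 2 && raw[0] == DLE {
        let blanks = raw[1].saturating_sub(32) as usize;
        let mut s = " ".repeat(blanks);
        s.push_str(&String::from_utf8_lossy(&raw[2..]));
        s
    } else {
        String::from_utf8_lossy(raw).into_owned()
    }
}

fn main() {
    // A line indented by four spaces: DLE, 32 + 4, then the text.
    let raw = [DLE, 36, b'B', b'E', b'G', b'I', b'N'];
    println!("{:?}", expand_line(&raw));
}
```

With deeply indented Pascal, one two-byte prefix replacing a dozen spaces per line adds up quickly on a 140K floppy.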

On the other hand, the runtime was quite sophisticated. P-Code was position-independent, so the system could read, run, and flush code at will. The code was segmented into chunks, with "overlays" being the norm. But as a user of the code, it was mostly invisible to you (you know, the way RPC is invisible). If you look at the original Macintosh memory and resource managers, and how they stored applications in segments, you can see the lineage straight from what UCSD was doing back in 1977. And, of course, UCSD Pascal had the novel feature of being an actual, usable Pascal for demonstrable system-level programming, and for large-system design through the use of Units and such. Novel at the time.
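
That segment machinery can be modeled as a demand-loaded code cache: because p-code is position-independent, a flushed segment can simply be re-read from disk later and dropped anywhere in memory. A toy Rust model (all the names and the "disk" representation here are invented for illustration, not the actual p-machine structures):

```rust
use std::collections::HashMap;

// Toy model of p-System segment loading: code segments live on disk
// and are faulted into memory on first call; position-independent
// code means a flushed segment can be transparently reloaded.
struct SegmentCache {
    disk: HashMap<u16, Vec<u8>>,     // segment number -> p-code image
    resident: HashMap<u16, Vec<u8>>, // segments currently in memory
}

impl SegmentCache {
    fn call(&mut self, seg: u16) -> &[u8] {
        if !self.resident.contains_key(&seg) {
            // "Segment fault": read the image in from disk.
            let image = self.disk.get(&seg).expect("unknown segment").clone();
            self.resident.insert(seg, image);
        }
        &self.resident[&seg]
    }

    fn flush(&mut self, seg: u16) {
        // Safe to discard: code is read-only and reloadable.
        self.resident.remove(&seg);
    }
}

fn main() {
    let mut cache = SegmentCache {
        disk: HashMap::from([(1u16, vec![0xA1u8, 0xB2])]),
        resident: HashMap::new(),
    };
    let first = cache.call(1).to_vec();  // faulted in from "disk"
    cache.flush(1);                      // discarded under memory pressure
    let second = cache.call(1).to_vec(); // transparently reloaded
    assert_eq!(first, second);
    println!("segment reloaded identically");
}
```

The Macintosh Segment Loader's LoadSeg/UnloadSeg dance follows essentially this shape.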

The real shame of the UCSD ecosystem, especially today, is that UCSD licensed it to SofTech (something like that), who came out with P-System IV (P-III was a unique port to, I think, a Sage 68K machine). The ship had sailed for the "but it's not DOS" kind of system that SofTech was trying to squeeze UCSD back into. But it had some cool features; notably, it supported co-routines.

But, while P-System 1.5 and P-System II are all flying free around the interwebs, P-IV is not.

It IS well documented, but whoever owns SofTech today hasn't released the legacy stuff to the world.

Also, as a shout-out to the blog author: do a search for, I think, the game SunDog. This was written in P-System IV, and they have a p-machine in C (or C++) that you can look at.

The part of the P-Machine I haven't quite grokked is the way it handles stack frames. Because Pascal allows nested functions (and scopes), it had primitives to access "variable 3, 4 stack frames up" kinds of things, so it's a bit of a maze (plus the first-class support for the segments in the runtime as well). I was looking at trying to port it to the 65816. The P-System would naturally work well with a 128K 65816 using their data-bank model. You could have the runtime in its own 64K bank, the P-Code in its own, and then 64K of data RAM in a third. I thought that would be a neat '816 project.
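
The "N frames up" addressing is the classic static-link scheme for nested procedures: each frame records the frame of its lexically enclosing procedure, and a variable access walks that chain before indexing into locals. A toy Rust sketch (the frame layout and names are invented for illustration, not the actual p-machine layout):

```rust
// Static-link traversal for nested-procedure variable access, i.e.
// "load the variable at offset O, D lexical levels up". The layout
// here is illustrative; the real p-machine frame format differs.
struct Frame {
    static_link: Option<usize>, // index of the enclosing scope's frame
    locals: Vec<i64>,
}

// Follow `levels` static links from `frame`, then read `offset`.
fn load(frames: &[Frame], frame: usize, levels: usize, offset: usize) -> i64 {
    let mut f = frame;
    for _ in 0..levels {
        f = frames[f].static_link.expect("walked past outermost scope");
    }
    frames[f].locals[offset]
}

fn main() {
    // Outer procedure's frame (index 0) holds x = 42; an inner
    // procedure's frame (index 1) links back to it lexically.
    let frames = vec![
        Frame { static_link: None, locals: vec![42] },
        Frame { static_link: Some(0), locals: vec![] },
    ];
    // From the inner frame, "variable 0, one level up" finds x.
    println!("{}", load(&frames, 1, 1, 0));
}
```

Note the static link follows lexical nesting, not the call chain, which is exactly why it reads like a maze next to the ordinary dynamic links.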

[0] https://www.threedee.com/jcm/terak/index.html [1] https://news.ycombinator.com/item?id=43708726

  • mbessey 2 days ago

    I will definitely check out SunDog, if I can find it. I haven't yet decided whether I'm making a VM for version II or version IV, or both. I want to be able to run code from Apple Pascal directly, so will likely start from II.5.

    An interesting piece of trivia about the Apple III version of Apple Pascal that I learned from this discussion is that it apparently puts p-code and data in their own 64k segments, like you were talking about for your 65816 version.

    • whartung 2 days ago

      A nit about Apple Pascal: it did rely on some custom machine-language routines, but that may have been just for graphics.

      That’s interesting about the Apple ///. Honestly don’t know much about that machine.

      If nothing else, if you can hunt down SunDog, it can show the p-machine in something higher-level than 6502 or Z80. But the IV machine is pretty different from the 1.5-2 machines.

      One problem I had getting started was just trying to figure out how to read the floppy images that are available. I obviously didn't try very hard, but it was enough to take the wind out of my sails at the time. Udo Munk has some nice images on his z80pack site.

Mn7cB_3kL 3 days ago

[dead]

  • nxobject 2 days ago

    Apropos of that: I know QEMU has an extensive hardware emulation library, but it shouldn't be taken for granted – Apple M-series support isn't quite there (here's a console-only solution [1]), and it would be a significant platform to lose emulation for.

    [1] https://github.com/cylance/macos-arm64-emulation

  • wahern 3 days ago

    What's the relevance of Docker here? It's not mentioned in the article, and more generally I can't think of cases where Docker would help with backward compatibility, except perhaps making it easier to, e.g., handle old code with hardcoded paths (i.e. a fancier chroot).

    • kragen 2 days ago

      Like any chroot, Docker images include all your library dependencies, which keeps your code from being broken by library upgrades — or from having its security vulnerabilities closed.

csdvrx 3 days ago

The dream has been realized with the release of cosmopolitan.

There's no reason we couldn't have a cross-platform minimal set of common utilities.

  • wahern 3 days ago

    I think Cosmopolitan support on OpenBSD is broken (since 7.5?). And IIRC it also conflicts with WINE on Linux without explicit binfmt support. It's amazing and laudable that Cosmopolitan managed to thread the needle the way that it did, but it's also entirely unsurprising that it's been broken; and even if fixed would be broken again.

  • frumplestlatz 2 days ago

    Unless something has changed since I last looked at it, cosmopolitan depends directly on the host's raw syscall interface everywhere but Windows (on Windows, it correctly dispatches syscalls through supported userspace libraries, e.g. `kernel32.dll`).

    This is unsupported, undocumented, and unstable on every target cosmopolitan supports other than Linux — macOS and (Free|Net|Open)BSD define their syscall ABI as private and subject to arbitrary change, and they do change it. The only supported syscall interface is via their userspace libraries, and binaries that target the syscall ABI directly will fail on future releases.

    Furthermore, while cosmopolitan binaries are multi-arch (amd64/arm64 currently) and multi-OS (Linux/Windows/macOS/...), they are still specific to that fixed set of architectures and OSes. Once support for a new target has been added to cosmopolitan, existing binaries must be rebuilt to include it; existing binaries cannot run simply by porting a common runtime.

    On top of all that, the APE executable format relies on ill-defined fallback heuristics — which may or may not be implemented by a given shell — for executable files whose `exec()` fails with ENOEXEC but which look like they might be '#!'-less shell scripts. Unsurprisingly, this is unreliable, depends on the user's shell, and means that programmatically executing an APE executable using `exec`, `posix_spawn()`, etc., will simply fail.
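
    That failure mode is easy to reproduce without APE at all: exec() of any '#!'-less script fails with ENOEXEC, and only a shell's retry heuristic hides it. A Linux-only sketch in Rust (the file path is arbitrary, and errno 8 is Linux's ENOEXEC):

```rust
// Demonstrates the ENOEXEC problem described above, on Linux:
// spawning a file the kernel doesn't recognize (here, a shell script
// with no '#!' line) fails with ENOEXEC. It is the *shell* that
// papers over this by re-running the file as a script; a program
// calling exec()/spawn directly just sees the raw failure.
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::process::Command;

fn main() {
    let path = "/tmp/no_shebang_demo"; // arbitrary scratch path
    fs::write(path, "echo hello\n").unwrap(); // valid sh, but no '#!'
    fs::set_permissions(path, fs::Permissions::from_mode(0o755)).unwrap();

    // Direct spawn fails with ENOEXEC (errno 8 on Linux).
    let err = Command::new(path).spawn().unwrap_err();
    assert_eq!(err.raw_os_error(), Some(8));

    // The shell-style fallback: explicitly re-run the file via sh.
    let out = Command::new("/bin/sh").arg(path).output().unwrap();
    assert_eq!(out.stdout, b"hello\n".to_vec());
    println!("ENOEXEC reproduced; sh fallback worked");
}
```

    An APE launcher, or any program that wants to run one, has to implement that retry itself.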

    Cosmopolitan is a neat hack, but it's not a viable multiplatform executable format, runtime system, or distribution mechanism. Something like WASM + WASI seems much more likely to fulfill this function in the future.

    • mbessey 2 days ago

      Yes, it's a cool hack, but doesn't really even approach the same use cases as something like p-code.

    • csdvrx 2 days ago

      > Cosmopolitan is a neat hack, but it's not a viable multiplatform executable format, runtime system, or distribution mechanism.

      It's the best we have to ensure programs will keep running. The multiple payloads are like a Rosetta Stone.

      If I had one wish, I would make WINE support sixel output + VNC to a localhost port + WebGL or equivalent, to extend this redundancy to GUIs.

      > Something like WASM + WASI seems much more likely to fulfill this function in the future.

      Time will tell, but given two binaries from the same time period (Sun Java vs. Windows i386), the one that was "much more likely to fulfill this function in the future" is much harder to use.

      p-code is a neat technology, but it failed at what it was supposed to achieve.

  • dlachausse 3 days ago

    Cosmopolitan is very impressive indeed, although it is a very different approach than byte code systems like the p-System, Java, and .NET. Each approach has its merits.

  • detourdog 3 days ago

    I think it’s interesting when one considers the Macintosh and Pascal, and NeXT and Objective-C.