
The original IBM PC used a 5.25" floppy disk format of two sides, 40 tracks, 9 sectors per track, 512 bytes per sector, for 360K per disk.

As I understand it, a significant amount of disk space went to spacing between sectors, and I'm wondering why that half-kilobyte sector size stayed so consistent between disk formats and operating systems; a reasonable optimization would seem to be to use fewer, larger sectors, thereby reducing the overhead. But then, there was some onboard intelligence in most floppy disk controllers. (The Apple II controller was famous for not having such.)

So for a specific question:

On the original IBM PC floppy disk controller, was the sector size under software control? Could an operating system choose to use e.g. 5 kilobytes per sector, in the hope of squeezing 5K instead of 4.5K into one track?

rwallace
  • From this page it looks like the original IBM PC FDC used sector sizes that were powers of two starting at 128 (128 shifted left by an 8-bit count). A lot of the 8-bit microcomputers used 256-byte sectors before the world settled on 512-byte sectors for a while. I wonder which, if any, systems used 128-byte sectors? – hippietrail Jun 26 '20 at 04:24
  • @hippietrail CP/M used 128-byte sectors. – dirkt Jun 26 '20 at 05:38
  • Do you understand the role of the floppy disk controller in this? – Thorbjørn Ravn Andersen Jun 26 '20 at 13:34
  • @dirkt CP/M 2.2 did, and the BIOS writers had to handle the mapping to 512-byte sectors on non-single-density drives. (Disassembled one.) CP/M 3 contained logic to handle this, but that was not the version emulated by QDOS, which became MS-DOS. – Thorbjørn Ravn Andersen Jun 26 '20 at 13:35

1 Answer


Yes, the sector size is software-controlled, to a certain extent. Every FDC command involving sectors or tracks takes the sector size as a parameter. The size is specified as a bit shift applied to 128, so sector sizes are of the form 128 × 2^n (usable values go from 128 to 4096 on the original PC; there isn’t enough time to fit an 8KiB sector in a track using the IBM PC’s double-density 5.25” drives). Sector sizes are assumed to be the same throughout a track.
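To make that encoding concrete, here is a minimal sketch (plain C for illustration, not actual FDC driver code) of how the size code N passed in a read/write/format command maps to bytes per sector:

    #include <stdio.h>

    /* The FDC's "N" parameter selects the sector size as 128 shifted left
     * by N; on the original PC only codes 0 to 5 (128 to 4096 bytes) are
     * usable, since larger sectors don't fit in a track. */
    int main(void)
    {
        for (unsigned n = 0; n <= 5; n++)
            printf("N=%u -> %4u bytes per sector\n", n, 128u << n);
        return 0;
    }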

As Tim Paterson explains, the IBM PC can write at most 6250 bytes per track in total (including gaps, which don’t store data). The headers and gaps add an overhead of 114 bytes per sector, so a track can safely store nine 512-byte sectors on all drives, but not quite ten. (Since the track is circular, counting ten sectors and only nine gaps doesn’t work: a gap is also needed between the last and the first sector.) However, it would technically have been possible to use five 1024-byte sectors, for a total capacity of 200KiB per side, 400KiB per disk. So while this addresses your question, it doesn’t explain why 512-byte sectors were chosen; 1024-byte sectors were used on other computers, and on IBM’s own 8” disks.
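As a back-of-the-envelope check of those figures, here is a small sketch using the 6250-byte raw track length and the 114-byte per-sector overhead quoted above (an approximation; real gap sizes vary slightly with the sector size):

    #include <stdio.h>

    /* How many sectors of each size fit in one 6250-byte raw track,
     * assuming roughly 114 bytes of header and gap overhead per sector,
     * and what that yields per 40-track side. */
    int main(void)
    {
        const unsigned raw_track = 6250, overhead = 114, tracks = 40;

        for (unsigned size = 128; size <= 4096; size *= 2) {
            unsigned per_track = raw_track / (size + overhead);
            unsigned kib_per_side = per_track * size * tracks / 1024;
            printf("%4u-byte sectors: %2u per track, %3u KiB per side\n",
                   size, per_track, kib_per_side);
        }
        return 0;
    }

This reproduces the figures above: nine 512-byte sectors (180KiB per side) versus five 1024-byte sectors (200KiB per side).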

Later disk formats such as XDF and 2M used varying sector sizes to increase storage capacity, but they are more CPU-intensive and not appropriate for a 4.77MHz 8088. Some copy-protection techniques also relied on non-standard sector sizes.

See also What is between the sectors of floppy disks? for details of floppy disk layouts (in particular, a track’s capacity needs to be considered in terms of flux reversals per unit of time, not bytes).

Using larger sectors increases the memory requirements when reading from or writing to disks; DOS used two buffers by default on the PC, which would have meant going from 1KiB of buffer space to 2KiB. Larger sectors also reduce the granularity of data manipulation, and 1KiB was a significant amount of data back then; two 512-byte buffers would have been more useful than one 1024-byte buffer.

As pointed out by supercat, duplicating equipment could have written disks with tighter tolerances, which would have allowed higher capacities for read-only distribution media. This was taken advantage of later on, with Distribution Media Format disks and the aforementioned XDF on 3.5” disks, but was apparently not considered for 5.25” disks.

In a post on PC disk sector sizes and booting on OS/2 Museum, Michal Necasek points out that the PC BIOS can only boot from a disk with a 512-byte boot sector, because it uses the default disk parameter table, which specifies a 512-byte sector. The boot sector can then change the DPT to use a different sector size, so the rest of the disk doesn’t have to use 512-byte sectors; but the boot sector itself must be 512 bytes.
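For illustration, the table in question is the 11-byte diskette parameter table whose address is held in the INT 1Eh vector; its fourth byte is the sector-size code, in the same 128 × 2^n encoding the FDC uses. A sketch of its layout (the field names here are descriptive, not official):

    #include <stdint.h>

    /* Sketch of the BIOS diskette parameter table pointed to by the
     * INT 1Eh vector (at 0000:0078).  Byte 3 holds the sector-size code
     * (0 = 128, 1 = 256, 2 = 512, 3 = 1024 bytes); the BIOS default of 2
     * is why the boot sector itself must be 512 bytes long.  A boot
     * sector wanting a different sector size would point INT 1Eh at a
     * copy of this table with a different code. */
    struct diskette_param_table {
        uint8_t specify1;        /* step rate / head unload time */
        uint8_t specify2;        /* head load time / DMA mode */
        uint8_t motor_off_ticks; /* delay before the motor is switched off */
        uint8_t sector_size;     /* 128 << n bytes per sector */
        uint8_t last_sector;     /* sectors per track */
        uint8_t gap_length;      /* gap 3 length for reads and writes */
        uint8_t data_length;     /* only used when sector_size is 0 */
        uint8_t format_gap;      /* gap 3 length when formatting */
        uint8_t format_fill;     /* fill byte written when formatting */
        uint8_t head_settle_ms;  /* head settle time */
        uint8_t motor_start;     /* motor start-up time, in 1/8 s units */
    };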

Stephen Kitt
  • That clarifies it, thanks! But looking through those links, from mentions of being able to put 10 and sometimes 11 512-byte sectors on a track, it seems to me that 5 1024-byte sectors would have allowed an adequate safety margin? – rwallace Jun 26 '20 at 11:23
  • 10 512-byte sectors don’t quite fit on the original PC drive (see Tim Paterson’s blog post on floppy disk formats for details), but you’re right, 5 1024-byte sectors would fit. I suspect another factor here is the amount of data being read into memory; 1024 bytes is a lot in a 64KiB system... – Stephen Kitt Jun 26 '20 at 12:18
  • Could this have something to do with the floppy controller and video controller performing DMA to the same memory ? Using Tim Paterson's article for the bit rate, 512 B is about 16ms, 1024 B would be 33ms which is the same time as a 60 Hz field. CP/M machines generally used serial terminals, so separate display memory. – grahamj42 Jun 26 '20 at 15:05
  • @StephenKitt: Using larger sectors would necessitate the use of larger buffers. In a 48K system, three buffers [which I think was the minimum for usable operation] would take 1.5K with a sector size of 512, but would take 3K with a sector size of 1024. – supercat Jun 26 '20 at 15:08
  • @supercat thanks, that’s what I was driving at in my comment above, but I wasn’t sure of the buffer requirements (as in, the number of buffers). – Stephen Kitt Jun 26 '20 at 15:11
  • @supercat checking my trusty MS-DOS Encyclopedia reveals that the default was 2 buffers on the PC, 3 on the AT, and 10 recommended for fixed-disk systems. – Stephen Kitt Jun 26 '20 at 15:18
  • @StephenKitt: I forgot that floppies used FAT12, which would allow the FAT for a 160-cluster disk to fit on one sector, making single-file operations workable with two buffers (one for the FAT and one for the data). Still, for any purpose involving simultaneous reading from one drive and writing to another, having at least four buffers would be much better (one for source FAT and data; one for destination FAT and data). – supercat Jun 26 '20 at 15:36
  • @StephenKitt: Actually, until now, I'd sorta wondered why DOS used FAT12, since the storage savings didn't seem terribly significant, but the difference between having a FAT take 480 bytes of data versus 640 is really a difference between having it take 512 versus 1024. – supercat Jun 26 '20 at 15:58
  • @supercat indeed; I used to stop optimising programs when I crossed a cluster boundary, but I hadn’t made the FAT12 connection either... – Stephen Kitt Jun 26 '20 at 16:00
  • @StephenKitt IIRC 10 sectors fit on the track ... We formatted 10 sectors * 512 bytes * 2 sides * 42 tracks all the time... the sync gaps were smaller than usual but the FDC could handle it. – Spektre Jul 04 '20 at 19:01
  • @Spektre yes, it’s certainly possible, either by reducing the gaps by one byte each or by ignoring the 10-byte gap reduction between the last and first sectors; how acceptable that is depends on how much of a stickler for “official specs” (whatever that is in this case) one is ;-). – Stephen Kitt Jul 04 '20 at 19:15
  • @StephenKitt: Reducing the inter-sector gap increases the likelihood that writing to a sector will corrupt the following one. If data is successfully written and read back using a smaller gap, the smaller gap should not affect reliability unless or until it is written again. Use of smaller gap may thus be appropriate for things like write-once distribution media; I'm curious why that wasn't exploited back in the day. – supercat Dec 10 '20 at 22:20
  • @supercat perhaps because distribution media was rarely write-once, back when the PC was being designed and for quite a long time after that, even when hard drives were in common use... Distribution disks weren’t write-protected and it was quite common to write files to them. – Stephen Kitt Dec 11 '20 at 05:39
  • @StephenKitt: A lot of software was shipped on write-protected disks. While it may sometimes have been desirable to be able to erase parts of a program one didn't need and reuse those parts of the disk, I don't recall having treated any of the software I bought in such fashion. If one wanted to use the unused parts of the distribution disk for other things, packing the used portion as tightly as possible would free up space in the unused portion. – supercat Dec 11 '20 at 06:45
  • @supercat on write-protected 5.25” disks? I’ve got loads of write-protected 3.5” disks (with no tab in the hole), but very, very few write-protected 5.25” disks (out of hundreds of originals). When I used PCs with no hard drive, I made backups of the software I used, and used those, but many people didn’t, and some software didn’t let you anyway. I’m thinking in terms of the original question above: when designing the PC, I don’t think write-once formats would have been much of a factor. – Stephen Kitt Dec 11 '20 at 07:05
  • @StephenKitt: Many of the commercial programs I've bought on 5.25" disks ship on disks that don't have a notch at all (commercial duplicating equipment can ignore the absence of a write-enable slot). – supercat Dec 11 '20 at 15:45
  • @supercat I sit corrected, and I’ve added a brief mention of all the above to the answer. – Stephen Kitt Dec 11 '20 at 15:54
  • Many copy-protected games for the Apple II series used custom disk formats, and some of them packed about 12% more data per track than usual by eliminating all but one of the gaps on each track (a scheme which was referred to as RWTS18, since it stored 18x256 bytes per track, while the normal RWTS16 stored 16x256 bytes). I don't know the maximum amount of data any commercially-released software squeezed on a disk, but at least when writing and reading back on my own machine, a write-once format could push 200K onto a single-sided floppy (vs 140K using RWTS16) and using custom hardware... – supercat Dec 11 '20 at 16:53
  • ...for writing would probably allow that to be pushed to 240K per disk side for a distribution format that the Apple II would be able to read but not write (which would have been pretty awesome for a copy-protected game). The PC drive and interface are alas far less flexible, so I don't think anything near that capacity boost would have been achievable on disks that had to be readable on that platform despite their higher "starting" capacity (180K/side, versus 140K for RWTS16 or ~160K for RWTS18). – supercat Dec 11 '20 at 16:59