Ironically, using an antique computer is often the fastest way to interface with antique hardware; even when a modern computer can be made to work, it will often be slower.
On a PC running DOS or an ancient version of Windows, software that wants to manipulate the signals on the printer port can do so at any time, by loading a couple of registers and executing a single "OUT DX,AL" instruction. If a printer port is attached to, say, a ZIP drive, nothing will prevent other software from twiddling the port's bits inappropriately (e.g. because it wrongly believes there's a Lifesize Sound Expander plugged into the printer port and is trying to feed it audio), but software can drive the port as fast as the device on the other end can handle it.
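As a rough illustration of how little stands between a DOS program and the wire, here is a sketch of a classic Centronics handshake in C, assuming Borland-style outportb()/inportb() from <dos.h> and the customary LPT1 base address of 0x378 (the real base can be read from the BIOS data area). Each of those calls compiles down to essentially a single OUT or IN instruction:

    /* Minimal DOS-era port banging sketch (Borland/Turbo C style).
       Addresses and handshake follow the standard PC parallel port:
       data at base, status at base+1, control at base+2. */
    #include <dos.h>

    #define LPT_BASE 0x378          /* data register D0-D7          */
    #define LPT_STAT (LPT_BASE + 1) /* status register (inputs)     */
    #define LPT_CTRL (LPT_BASE + 2) /* control register (handshake) */

    void lpt_send_byte(unsigned char b)
    {
        /* BUSY is bit 7 of the status register, inverted by the
           hardware: the bit reads 1 when the device is ready. */
        while (!(inportb(LPT_STAT) & 0x80))
            ;
        outportb(LPT_BASE, b);    /* put the byte on the data lines */
        outportb(LPT_CTRL, 0x0D); /* assert /STROBE (bit 0, inverted) */
        outportb(LPT_CTRL, 0x0C); /* release /STROBE                  */
    }

    int main(void)
    {
        lpt_send_byte('A');  /* one byte, one handshake, no OS involved */
        return 0;
    }

Nothing arbitrates access here: the loop spins as fast as the CPU allows, and the only pacing comes from the device itself.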
On a more modern OS, all attempts to access I/O devices have to go through system logic that must decide whether the program trying to access the port is allowed to do so; a user-mode program can't execute OUT at all, but must ask the kernel (via a driver or system call) to perform the access on its behalf, paying for a user/kernel transition and a permission check every time. It's possible that improvements in hardware could overcome the orders-of-magnitude speed penalty of validating each and every individual I/O operation, but most operating systems aren't designed to perform tasks 100% "smoothly". If the device at the far end of a connection takes one microsecond to process each byte and signal that it's ready for the next, a system that reliably sends out a byte every microsecond will beat one that needs only 0.1 microseconds for most bytes but occasionally stalls for a millisecond: a single millisecond stall costs as much time as ten thousand of the fast transfers, so one stall per thousand bytes already pushes the average above 1.1 microseconds per byte, and a timing-sensitive device may simply give up mid-transfer when the stall hits.
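For contrast, here is a sketch of the mediated path, assuming Linux's /dev/port device (which itself requires root or CAP_SYS_RAWIO to open). Every byte becomes a system call that the kernel individually validates before performing the OUT on the program's behalf:

    /* Sketch of per-byte, kernel-mediated port access on Linux via
       /dev/port, where the file offset selects the port address.
       0x378 is again assumed as the LPT1 data register. */
    #include <fcntl.h>
    #include <unistd.h>

    int lpt_send_byte(int fd, unsigned char b)
    {
        /* Each write is a separate syscall: user/kernel transition,
           permission check, then the actual OUT. */
        if (pwrite(fd, &b, 1, 0x378) != 1)
            return -1;
        return 0;
    }

    int main(void)
    {
        int fd = open("/dev/port", O_RDWR); /* needs CAP_SYS_RAWIO */
        if (fd < 0)
            return 1;
        lpt_send_byte(fd, 0x55);
        close(fd);
        return 0;
    }

Even setting aside the per-call overhead, the program is at the scheduler's mercy between any two of those calls, which is exactly where the occasional millisecond stall comes from.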