I may have spoken too soon. Thinking I won, I removed all my debugging code and the bug came back. Even with the updated DOCTYPE. Doh.
So egg on my face, but for posterity I thought I'd post an update so someone doesn't curse this non-fix 5 years down the road. The bug only appeared to be solved because my test input must have had an emoji or something in it: the bug doesn't show up when the input contains un-smart-able (long story) unicode. At least that's my best guess.
I solved it (again?) by reading the js library code, where it turns out a rule was straight up changing a space to two spaces. Uh, ok. That code path is supposed to change non-ascii unicode spaces (i.e. anything other than 0x20) into one ascii space (for a short space) or two or more ascii spaces (for a long space). But somehow a literal 0x20 ended up in the rule. Or at least FF's Ctrl-F search for " " matched the character. I changed the 0x20 in the rule to 0xa0 and now everything works. And why a single 0xa0 should ever be turned into 2 spaces in the first place is beyond me.
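I don't have the library's actual table in front of me, but here's a minimal sketch of what the corruption amounted to (the rule names and the two-space output are my reconstruction, not the library's real code):

```javascript
// Buggy: a literal 0x20 sat where 0xa0 belonged, so every plain ascii
// space got doubled -- which is exactly the symptom I was chasing.
const buggy = s => s.replace(/\u0020/g, "  ");

// After the one-character fix: match the NBSP (0xa0) instead,
// leaving ordinary ascii spaces alone (still oddly emitting two spaces).
const fixed = s => s.replace(/\u00a0/g, "  ");

buggy("a b");       // "a  b" -- ascii space doubled
fixed("a b");       // "a b"  -- untouched
fixed("a\u00a0b");  // "a  b" -- only the NBSP is rewritten
```

The whole diff between broken and working is one character, and it's an invisible one.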
So it looks like something corrupted the js source and changed a 0xa0 into a 0x20. That may have been FF itself, maybe when I saved the js source? If I copy an NBSP from a unicode sample web page and paste it into my form textarea, it always seems to arrive as a 0x20! If I type it in place with CTRL-SHIFT then it properly shows up as a 0xa0. If I paste it in from a nano editor where I know for sure it's a 0xa0, it also works. I found some ancient Bugzilla reports about FF doing bad things with NBSPs when copying and pasting... maybe there are still some bugs in there.
Anyhow, it's not a DOCTYPE problem: it's the wrong unicode character in the source file. And since it's just a bloody empty space character, you can't really see it when debugging unless you spit out hex codes somehow. Fun!
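For anyone hitting the same wall, this is the kind of helper I mean by "spitting out hex codes" (a throwaway debugging sketch, not anything from the library):

```javascript
// Make invisible characters visible: print each character of a string
// as its unicode code point in zero-padded hex.
function hexDump(s) {
  return [...s]
    .map(c => c.codePointAt(0).toString(16).padStart(4, "0"))
    .join(" ");
}

hexDump("a b");       // "0061 0020 0062"
hexDump("a\u00a0b");  // "0061 00a0 0062" -- the NBSP finally shows itself
```

Running a suspect string through something like this is the only way I know to tell 0x20 and 0xa0 apart by eye.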
P.S. Quirks mode being off is cosmetically messing up some of my tables... so I guess it really does do something after all...