Yeah, although then you create a whole cottage industry of people on the mailing list arguing about which value they feel is best for completely ignorant reasons.
LOL. You're probably right. Sounds like the voice of experience.
What we're talking about here is (apparently) some buggy serial hardware or drivers that bring their own characteristics.
A buggy dongle that sends characters too fast? Hmm. Maybe the serial dongle isn't preserving application timing because of buffering? Now we're back to lowering the latency timer. For example: the application sends a record, waits for processing, and then sends another record. If the dongle/driver buffers across the two records and then blasts out the entire block, it can nullify the record-processing timeout. Lowering the latency timer might help here, since it reduces that buffering.
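(For reference, on Linux the ftdi_sio driver exposes that timer per port through sysfs. A rough sketch, assuming the dongle enumerated as ttyUSB0 and we have permission to write the file; set_latency_timer is just a made-up helper name:)

    # Rough sketch: lower the FTDI latency timer on Linux (ftdi_sio driver).
    # Assumes the adapter showed up as ttyUSB0 and the sysfs file is writable
    # (root, or a udev rule granting access).
    LATENCY = "/sys/bus/usb-serial/devices/ttyUSB0/latency_timer"

    def set_latency_timer(ms=1):
        # The driver default is 16 ms; a smaller value makes the chip flush
        # its buffer to the host sooner, at the cost of more USB transfers.
        with open(LATENCY, "w") as f:
            f.write(str(ms))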
In my experience, FTDI chipsets have been the most reliable in many ways; they responded well to a change in the latency parameter I mentioned earlier. Prolific USB dongles come in second to FTDI.
I'm fairly sure some of the USB dongle experience comes down to the OS driver as much as the hardware. The Linux Prolific driver has a quirk or two. I've had a lot of problems with the generic Windows usbserial driver; it seemed to do a lot of internal buffering that I couldn't tune out the usual way.
The thing I want us to avoid is forcing all users of the driver to send one byte at a time with a lengthy delay just because some people have broken serial hardware.
I agree completely. That's one of the things I was trying to stress.
-Nathan
On Fri, Jun 16, 2017 at 3:26 PM, Dan Smith dsmith@danplanet.com wrote:
I do like your idea of creating a throttled serial driver so long as users can still opt out of it. Maybe they could even tune the throttling with a parameter? This might help some HW without strewing sleep() calls all over the place. I want the max speed I can use, though. It takes too long to read/write some radios as it is. Don't throttle everyone.
Yeah, although then you create a whole cottage industry of people on the mailing list arguing about which value they feel is best for completely ignorant reasons. Seems better to have the driver expose a value that should be used in compatibility mode, and if one isn't exposed, calculate a default based on the baudrate.
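(Something along these lines is what I'm picturing; just a rough sketch on top of pyserial with made-up names, not actual CHIRP code:)

    # Rough sketch, not real CHIRP code: a pyserial wrapper that paces writes.
    # Only used when a user opts into compatibility mode; everyone else keeps
    # the stock serial class at full speed. If a driver doesn't declare its
    # own pacing value, fall back to roughly one byte-time at the current
    # baudrate (10 bits per byte at 8N1).
    import time

    import serial

    class ThrottledSerial(serial.Serial):
        pace = None  # seconds to pause after each byte; None = derive it

        def write(self, data):
            delay = self.pace if self.pace is not None else 10.0 / self.baudrate
            for i in range(len(data)):
                super(ThrottledSerial, self).write(data[i:i + 1])
                time.sleep(delay)
            return len(data)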
Throttling based purely on baudrate might be a bad idea in some cases. What if a radio needs a relatively long time to "digest" a packet/block/line of information? Maybe it reads an entire block and then commits it to EEPROM, for example. Or maybe it decrypts the block (slowly). If the time needed to process a block is high compared to the time between symbols, you wouldn't want to impose the block timeout on each symbol, or the throughput would be much lower than needed; maybe painfully so.
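To put rough numbers on it (made-up figures, purely to illustrate): at 9600 baud 8N1 a byte takes about a millisecond, so a 16-byte record goes out in roughly 17 ms. If the radio then needs, say, 50 ms to commit that record, pausing 50 ms per record costs about 67 ms per record, but pausing 50 ms per byte balloons that to over 800 ms per record.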
Right, but those are things we _should_ bake into the driver as they're characteristics of the radio itself. Indeed several drivers have this sort of thing and it's fine.
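(i.e. something like the pattern below, with hypothetical names; pause once per record rather than slowing every byte down:)

    # Sketch of that per-radio pattern (hypothetical names, not a real driver):
    # send each record at full line speed, then pause while the radio digests it.
    import time

    BLOCK_DELAY = 0.05  # assumed per-record processing time, in seconds

    def send_block(pipe, block):
        pipe.write(block)          # whole record at line speed
        time.sleep(BLOCK_DELAY)    # give the radio time to commit it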
What we're talking about here is (apparently) some buggy serial hardware or drivers that bring their own characteristics.
My experience is that most embedded devices don't use XON/XOFF. Even the simplest microcontrollers can usually keep up with 9600 baud until they try to do something like decryption or writing to EEPROM. Maybe I'm wrong here - I've never written firmware for a radio.
Yeah, I don't think I've seen any that do software flow either, but with only two or three serial lines, it's the only way to provide back pressure. Clearly anything should be able to keep up with 9600 baud, but we do have some that can't, although it's mostly at the record level and not just straight line speed. The thing I want us to avoid is forcing all users of the driver to send one byte at a time with a lengthy delay just because some people have broken serial hardware.
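(For what it's worth, if we ever do meet a radio that honors XON/XOFF, pyserial can turn it on at open time; a minimal sketch, with the port name just an example:)

    # Minimal sketch: open the port with software (XON/XOFF) flow control.
    # Only useful if the far end actually sends XON/XOFF, which most radios
    # apparently don't.
    import serial

    pipe = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1, xonxoff=True)
    pipe.write(b"\x00" * 16)  # the OS pauses output if the radio sends XOFF
    pipe.close()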
--Dan