Saving RAM and using DMA by directly dereferencing Serial1 rx buffer pointer as a packet

Scenario: I want to transfer huge data structures over Serial1 on a Boron/Photon2. I am looking for ways to decrease my RAM footprint. The Boron/Photon2 is the host. The baud rate is 230400 on RS-485 hardware.

Right now I have to allocate about 512 bytes of RAM to the rxBuffer via hal_usart_buffer_config_t acquireSerial1Buffer(); otherwise I end up dropping bytes here and there during long data transfers (think 14 kB transfers).

So I end up with:

512 byte Serial1 rx buffer

14 kB storage buffer to Serial1.read() bytes into

My workflow, once I have the whole 14 kB packet, is to make a pointer of my packet type, point it at the storage buffer, and then dereference away.

Like this:

//Define longer buffers for the Particle OS USART buffers
hal_usart_buffer_config_t acquireSerial1Buffer()
{
    const size_t rxBufSize = 512;
    const size_t txBufSize = 129;
    hal_usart_buffer_config_t config = {
        .size = sizeof(hal_usart_buffer_config_t),
        .rx_buffer = new (std::nothrow) uint8_t[rxBufSize],
        .rx_buffer_size = rxBufSize,
        .tx_buffer = new (std::nothrow) uint8_t[txBufSize],
        .tx_buffer_size = txBufSize
    };

    return config;
}

typedef struct {
    uint32_t num_1;
    //... other members
    uint32_t num_N;
} big_packet_t;

void setup(){
    Serial1.begin(230400);
    //Then send a command out and wait for response
    Serial1.write(0x01);
    waitUntil(Serial1.available);
    //Then read the response into the storage buffer until the whole 14 kB packet has arrived
    static uint8_t temp_buf[14*1024]; //static: far too large for the stack
    uint8_t* temp_buf_ptr = temp_buf;
    while (temp_buf_ptr < temp_buf + sizeof(temp_buf)) {
        if (Serial1.available()) {
            *temp_buf_ptr++ = Serial1.read();
        }
    }
    //Now I can interpret the contents of the buffer as a big_packet_t
    big_packet_t* packet = (big_packet_t*)temp_buf;
    //And do stuff to the members like
    packet->num_1 = 0x12345678;
    //Show the contents to the user
    Serial.println(packet->num_1);

}
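One assumption baked into that final cast is that temp_buf is suitably aligned for big_packet_t and that the compiler inserts no padding between the members. A sketch of making both assumptions explicit (GCC attribute syntax, which Device OS builds accept):

//Sketch: make the assumptions behind (big_packet_t*)temp_buf explicit.
//The packed attribute guarantees no compiler-inserted padding, so the struct
//layout matches the bytes that arrived on the wire, and alignas(4) keeps the
//buffer aligned for the uint32_t members.
typedef struct __attribute__((packed)) {
    uint32_t num_1;
    //... other members
    uint32_t num_N;
} big_packet_t;

alignas(4) static uint8_t temp_buf[14*1024];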

I'm thinking that I could save 512 bytes of RAM if I could simply treat the buffer that Particle OS is already using to manage Serial1 RX as the packet and dereference it directly.

Something like this:


//Define longer buffers for the Particle OS USART buffers
const size_t serial1_rxBufSize = MAX_LINKNET_PACKET_SIZE_BYTES;
uint8_t serial1_rxBuf[serial1_rxBufSize]; //defined at file scope so setup() can dereference it directly
const size_t serial1_txBufSize = 128;
uint8_t serial1_txBuf[serial1_txBufSize];
hal_usart_buffer_config_t acquireSerial1Buffer()
{
    hal_usart_buffer_config_t config = {
        .size = sizeof(hal_usart_buffer_config_t),
        .rx_buffer = serial1_rxBuf,
        .rx_buffer_size = serial1_rxBufSize,
        .tx_buffer = serial1_txBuf,
        .tx_buffer_size = serial1_txBufSize
    };

    return config;
}

typedef struct {
    uint32_t num_1;
    //... other members
    uint32_t num_N;
} big_packet_t;

void setup(){
    Serial1.begin(230400);
    //Then send a command out and wait for response
    Serial1.write(0x01);
    while (Serial1.available() < 14*1024) {} // Wait for the whole 14 kB packet to arrive
    //No temporary buffer required this time...
    //Now I can interpret the contents of the Serial1 rx buffer directly without Serial1.read()
    big_packet_t* packet = (big_packet_t*)serial1_rxBuf;
    //And do stuff to the members like
    packet->num_1 = 0x12345678;
    //Show the contents to the user
    Serial.println(packet->num_1);
}

The head-scratcher for me is figuring out how to force Particle OS to reset the write pointer to the beginning of my user-supplied RX buffer after each transaction.

That is the only way I can reliably dereference the packet in place, right? Otherwise the Serial1 driver will treat the rxBuf I supply as a circular buffer, and the beginning of the packet could end up anywhere in it. Right?
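Just to spell out why that would break the zero-copy idea: if the driver's write index can be anywhere, recovering the packet means unwrapping it with modular arithmetic and copying it out anyway, something like the sketch below (writeIndex is hypothetical, purely for illustration), which costs exactly the extra RAM I'm trying to avoid.

//Illustration only: with a circular buffer, the packet's first byte sits
//sizeof(big_packet_t) bytes "behind" the driver's write index, modulo the
//buffer size, and may wrap around the end of the array.
size_t writeIndex = 0; //hypothetical: wherever the driver's write pointer happens to be
size_t start = (writeIndex + serial1_rxBufSize - (sizeof(big_packet_t) % serial1_rxBufSize)) % serial1_rxBufSize;

big_packet_t packet; //a full copy - the extra RAM this was supposed to save
uint8_t* dst = (uint8_t*)&packet;
for (size_t i = 0; i < sizeof(big_packet_t); i++) {
    dst[i] = serial1_rxBuf[(start + i) % serial1_rxBufSize]; //unwrap the ring
}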

I have thought of using

Serial1.end();
Serial1.begin(230400);

after each transaction. However, that causes a hardware glitch that gets transmitted onto our RS-485 network.

The scope grab below shows the Photon2 TX pin:

This glitch screws up the next transaction because it gets interpreted as a start bit by the receiver.

I then played around and tried to see whether using pinMode(INPUT_PULLUP) would prevent the glitch when Serial1.end() is called. That didn't work either. Here is a unit test I made to check whether the network glitch is still present with the input pull-up enabled:

//UNIT TESTING FUNCTIONS
bool linknet_unit_test_serial1_reinit_effect_on_linknet(void) {
	bool initialCheck = false;
	bool secondCheck = false;
	bool afterReinitCheck = false;
	bool afterDelayCheck = false;
	bool withPullupCheck = false;
	bool lastcheck1 = false;
	bool lastcheck2 = false;

	printHeaderBreak("BEGIN UNIT_TEST_SERIAL1_REINIT_EFFECT_ON_LINKNET",1);
	initialCheck = linknet_transactions_working();
	//Prove that back-to-back calls are working
	secondCheck = linknet_transactions_working();
	//Disable/re-enable serial to force software reset of the rx buffer of Serial1
	Serial1.end();
	Serial1.begin(LINKNET_BAUD_RATE);
	afterReinitCheck = linknet_transactions_working();
	//Delay to allow client to reset its UART packet state machine
	delay(1000);
	//Check if the LinkNet transactions are still working
	afterDelayCheck = linknet_transactions_working();
	//Now reinit Serial1 with a pullup on the TX line
	pinMode(TX, INPUT_PULLUP);
	Serial1.end();
	Serial1.begin(LINKNET_BAUD_RATE);
	withPullupCheck = linknet_transactions_working();
	delay(1000);
	//Now prove once again that back-to-back calls without delay are working
	lastcheck1 = linknet_transactions_working();
	lastcheck2 = linknet_transactions_working();
	//Summarize
	printHeaderBreak("END UNIT_TEST_SERIAL1_REINIT_EFFECT_ON_LINKNET",1);
	myLog.info("Initial - %s", initialCheck ? "PASS" : "FAIL");
	myLog.info("Second - %s", secondCheck ? "PASS" : "FAIL");
	myLog.info("After reinit - %s", afterReinitCheck ? "PASS" : "FAIL");
	myLog.info("After delay (no re-init)- %s", afterDelayCheck ? "PASS" : "FAIL");
	myLog.info("Reinit with pullup - %s", withPullupCheck ? "PASS" : "FAIL");
	myLog.info("After delay 2 (no re-init): %s", lastcheck1 ? "PASS" : "FAIL");
	myLog.info("Last (no-reinit): %s", lastcheck2 ? "PASS" : "FAIL");


	return lastcheck1 && lastcheck2;
}

And here's the output:

[2025-02-16 00:27:07.007] ========================================
[2025-02-16 00:27:07.020] =BEGIN UNIT_TEST_SERIAL1_REINIT_EFFECT_ON_LINKNET========================================
[2025-02-16 00:27:07.072] 0000003112 [app.linknet] TRACE: linknet packet - addr = 254 | time =   2 ms | cmd = DIRECTORY_BROADCAST (0X01) | rsp = NONE (0x00)
[2025-02-16 00:27:07.136] 0000003172 [app.linknet] TRACE: linknet packet - addr = 254 | time =   2 ms | cmd = DIRECTORY_BROADCAST (0X01) | rsp = NONE (0x00)
[2025-02-16 00:27:07.709] 0000003768 [app.linknet] WARN: Timeout! Received 0 bytes - expected 0 (0x0000)
[2025-02-16 00:27:07.735] 0000003789 [app.linknet] TRACE: linknet packet - addr = 254 | time = 522 ms | cmd = DIRECTORY_BROADCAST (0X01) | rsp = NO_RESPONSE (0xE0)
[2025-02-16 00:27:08.774] 0000004830 [app.linknet] TRACE: linknet packet - addr = 254 | time =   2 ms | cmd = DIRECTORY_BROADCAST (0X01) | rsp = NONE (0x00)
[2025-02-16 00:27:09.305] 0000005364 [app.linknet] WARN: Timeout! Received 0 bytes - expected 0 (0x0000)
[2025-02-16 00:27:09.330] 0000005386 [app.linknet] TRACE: linknet packet - addr = 254 | time = 523 ms | cmd = DIRECTORY_BROADCAST (0X01) | rsp = NO_RESPONSE (0xE0)
[2025-02-16 00:27:10.369] 0000006427 [app.linknet] TRACE: linknet packet - addr = 254 | time =   2 ms | cmd = DIRECTORY_BROADCAST (0X01) | rsp = NONE (0x00)
[2025-02-16 00:27:10.408] 0000006464 [app.linknet] TRACE: linknet packet - addr = 254 | time =   2 ms | cmd = DIRECTORY_BROADCAST (0X01) | rsp = NONE (0x00)
[2025-02-16 00:27:10.444] ========================================
[2025-02-16 00:27:10.456] =END UNIT_TEST_SERIAL1_REINIT_EFFECT_ON_LINKNET========================================
[2025-02-16 00:27:10.482] 0000006541 [app.linknet] INFO: Initial - PASS
[2025-02-16 00:27:10.497] 0000006554 [app.linknet] INFO: Second - PASS
[2025-02-16 00:27:10.508] 0000006567 [app.linknet] INFO: After reinit - FAIL
[2025-02-16 00:27:10.522] 0000006581 [app.linknet] INFO: After delay (no re-init)- PASS
[2025-02-16 00:27:10.540] 0000006599 [app.linknet] INFO: Reinit with pullup - FAIL
[2025-02-16 00:27:10.556] 0000006615 [app.linknet] INFO: After delay 2 (no re-init): PASS
[2025-02-16 00:27:10.575] 0000006633 [app.linknet] INFO: Last (no-reinit): PASS

Is there a better way to simply tell Particle OS to stop filling the RX buffer, point its write pointer back to the beginning of rxBuf, and start filling from there again, without having to disable the hardware Serial1 peripheral to do it?

Basically, a software reset of Serial1?

Or alternatively, is there a hack that will let me disable/re-enable the whole hardware/software shebang in between transactions without the hardware glitch? (I assume this would involve an external pull-up resistor.)
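One firmware-only idea along those lines (untested, and it may well still need the external pull-up, since the pin can float briefly between end() and pinMode()) would be to park TX as a GPIO output at the UART idle level around the re-init:

//Speculative sketch: hold the TX line at its idle (HIGH) level across the
//teardown/re-init so the bus never sees a falling edge that looks like a
//start bit. Assumes Serial1.end() releases the pin and begin() reclaims it.
Serial1.flush();                    //let any in-flight transmit finish first
Serial1.end();                      //release the UART peripheral
pinMode(TX, OUTPUT);                //immediately take the pin back as a GPIO...
digitalWrite(TX, HIGH);             //...and drive it to the UART idle level
Serial1.begin(LINKNET_BAUD_RATE);   //begin() takes the pin back from GPIO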

There isn't a good way to have Device OS only write to the beginning of the acquired buffer, and there's no good way to read from it safely, because user firmware cannot access the lock needed to do so.

Did you try using the default serial buffer and reading the serial port from a thread into your larger buffer? A thread with a higher priority than the user thread should be able to read the data out fast enough at 230 Kbits/sec without losing data.
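A minimal sketch of that approach (names, the priority bump, and the stack size are illustrative and untested): a higher-priority thread that drains Serial1 into the big application buffer as bytes arrive, so the small stock RX buffer never overflows:

#include "Particle.h"

//Sketch: drain Serial1 from a thread that outranks the application thread.
static uint8_t packetBuf[14*1024];     //large application-side buffer
static volatile size_t packetLen = 0;  //bytes collected so far

static void serial1Reader(void) {
    while (true) {
        while (Serial1.available() > 0 && packetLen < sizeof(packetBuf)) {
            packetBuf[packetLen++] = (uint8_t)Serial1.read();
        }
        delay(1); //sleep a tick so lower-priority threads still get CPU time
    }
}

void setup() {
    Serial1.begin(230400);
    //One priority notch above the application thread; a modest stack since the
    //reader loop does so little (it could probably be smaller still).
    new Thread("ser1rx", serial1Reader, OS_THREAD_PRIORITY_DEFAULT + 1, 512);
}

void loop() {
}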

My main goal right now is to reduce RAM usage, so I'm thinking a user thread is probably not a good solution for me. Better to just use the 512 byte RX buffer, I guess?

Doesn't a thread take something like 1-2 kB of heap RAM?

You can set the stack size, though to save only 512 bytes you're probably better off just using a buffer of that size. That said, a thread that only reads from serial and saves into a buffer can probably work with a very small stack, maybe 256 bytes, possibly even less.