In the current version 1 scheme, the MQTT publish cycle was simple and fast, but it put some burden on the developer to handle the publish-while-receiving re-entrancy issue. Please refer to this page for a presentation of the main drawback.
The implementation was built this way for numerous reasons, chief among them memory usage: by invoking the `messageReceived` callback right when it has received the packet (and before it overwrites it with a new one in the publish cycle), the client avoids copying the packet and can deal with it on a low-memory system.

The drawbacks are the re-entrancy limitation (it's not possible to publish from inside the `messageReceived` callback) and the global action lock that serializes all of the client's operations.
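To make the zero-copy design concrete, here is a minimal sketch (all names are hypothetical, not the library's actual API) of a client that hands the packet to the callback directly from its single receive buffer:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Hypothetical sketch (not the library's actual API): a single receive
// buffer shared by the whole client, handed to the callback in place.
struct TinyClient {
    uint8_t recvBuffer[512];                                   // the only packet storage
    void (*messageReceived)(const uint8_t*, size_t) = nullptr;

    void onPacketArrived(const uint8_t* data, size_t length) {
        if (length > sizeof(recvBuffer)) return;               // sketch: oversized packets rejected
        memcpy(recvBuffer, data, length);                      // network layer fills the buffer
        // The callback reads the packet directly from recvBuffer: no copy is
        // made, but the buffer will be reused for the very next packet.
        if (messageReceived) messageReceived(recvBuffer, length);
    }
};
```

The memory saving is real, but it's also the root of the re-entrancy issue: publishing from inside `messageReceived` would recycle the very buffer the callback is still reading.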
In the MQTT v5 standard, we have this guarantee: a client can set the `Receive Maximum` CONNECT's property to 1, so no new QoS packet can happen until the current one is acknowledged completely.

To loosen the locking-constraint drawback, we can remove the global action lock and instead move to a local socket lock (so we prevent mixing bytes in the socket buffer if it is used from multiple threads). With a socket lock, a complete packet is received or sent, but never half a packet.
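As a sketch of that local socket lock (assuming a hypothetical `LockedSocket` type, not the library's actual code), the lock scope covers exactly one complete packet:

```cpp
#include <cstddef>
#include <cstdint>
#include <mutex>

// Hypothetical sketch of the local socket lock (illustrative names only).
struct LockedSocket {
    std::mutex lock;  // protects the socket, not the whole client

    int rawSend(const uint8_t*, size_t n) { return (int)n; }  // stub for the real blocking send

    // The lock is held for exactly one complete packet, so two threads can
    // never interleave their bytes in the socket stream.
    int sendPacket(const uint8_t* packet, size_t length) {
        std::lock_guard<std::mutex> scope(lock);
        return rawSend(packet, length);
    }
};
```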
We must also prevent having two publish actions with QoS running at the same time (and, respectively, receiving two publishes with QoS at the same time). Doing so is only possible if we are able to guarantee progress in all tasks of the client (for example, in a publish cycle a publish lock is used, so a second publish would be serialized after the former is acknowledged).
Thus, a specific buffer sized to hold a temporary `PUBACK`/`PUBREC`/`PUBREL`/`PUBCOMP` packet is required too.
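Since `PUBACK`, `PUBREC`, `PUBREL` and `PUBCOMP` have a small bounded size (fixed header, packet ID, reason code), this temporary buffer can stay tiny. A hedged sketch with illustrative names:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: a tiny dedicated buffer for acknowledgment packets, so
// PUBACK/PUBREC/PUBREL/PUBCOMP never compete with the main receive buffer.
// Without properties these packets fit in a handful of bytes (fixed header,
// packet ID, reason code); a bit of slack is kept for small properties.
struct AckBuffer {
    static const size_t Capacity = 16;
    uint8_t data[Capacity];
    size_t  used = 0;
};
```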
In that case, we could get the following behavior for the communication:
When the client entered the publish cycle for packet ID 1, it was expecting a `PUBACK` from the broker, but instead got a new `PUBLISH` message. Typically, this can trigger the following actions in the code:

1. The user calls the `publish` method.
2. The client builds the `PUBLISH` packet and sends it.
3. The client enters the `publishCycle` method with the sending flag set.
4. It calls the `publishReceive` method that's fetching only the packet type.
5. The broker sends a new `PUBLISH` packet (instead of the expected acknowledgment) that's stored in the current recvBuffer.
6. The broker might send another `PUBLISH` packet here that'll be lost by the client, since there's no place to store it (it'll be dropped).
7. Until the sent `PUBLISH` packet is acknowledged, the client goes back to step 5 above, unless:
   1. the broker sends a `DISCONNECT` packet or a network disconnection signal is received; in that case, an error is returned from the `publish` method so the user can deal with the error the way she intended;
   2. the expected acknowledgment arrives; the client then exits `publishCycle`, releases the lock and returns from the `publish` method, leaving any received `PUBLISH` packet in the recvBuffer.
8. On the next `eventLoop` call, if there is a publish packet in the recvBuffer (from step 7.2), it'll trigger the usual `messageReceived` callback.

Notice that the `messageReceived` callback is only triggered from `eventLoop`, even if the packet is already downloaded.
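A condensed sketch of that blocking cycle (hypothetical names and stubs, not the real client code) makes the problem at step 6 visible, as there is exactly one slot for an incoming packet:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of the v1 blocking publish cycle (not the real API).
enum PacketType { PacketPubAck, PacketPublish, PacketDisconnect };

struct V1Client {
    uint8_t recvBuffer[512];
    bool    recvBufferUsed = false;

    // Stubs standing in for the network layer of the sketch.
    PacketType publishReceive() { return PacketPubAck; }  // fetches only the packet type
    void readPacketInto(uint8_t*) {}                      // downloads the rest of the packet

    // Called with the action lock held; blocks until the acknowledgment arrives.
    int publishCycle() {
        for (;;) {
            PacketType type = publishReceive();
            if (type == PacketPubAck) return 0;       // step 7.2: unlock and return
            if (type == PacketDisconnect) return -1;  // step 7.1: error reaches publish()
            if (type == PacketPublish) {
                if (recvBufferUsed) continue;         // step 6: second PUBLISH is dropped
                readPacketInto(recvBuffer);           // step 5: kept for the next eventLoop
                recvBufferUsed = true;                // messageReceived fires there, not here
            }
        }
    }
};
```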
Let's consider another possible implementation, one that's probably closer to what the MQTT standard intended.

The client will be constructed with a user-defined `Receive Maximum` CONNECT's property (defaulting to 1), later called RcvMax. The higher the property value, the higher the memory-buffer requirements in the client: the property implies storing that number of `PUBLISH` packets in a buffer, in case they need to be retransmitted.
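As an illustrative sketch (names are hypothetical), the RcvMax-sized bookkeeping described above could look like this:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of the RcvMax-sized bookkeeping (illustrative layout).
template <size_t RcvMax>
struct QoSState {
    uint16_t qos1Ids[RcvMax];     // packet IDs of in-flight QoS 1 publishes
    uint16_t qos2Ids[RcvMax];     // packet IDs of in-flight QoS 2 publishes
    bool     pubRecSeen[RcvMax];  // QoS 2 only: was the PUBREC already received?
};

QoSState<1> defaultState;         // RcvMax defaults to 1: a single in-flight packet
```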
Currently, the client simply errors out when it is disconnected (or when the network connection is dropped); it is up to the application to reconnect if that's what's required. If the client were to keep doing this, no packet buffer would be required, but it would be non-compliant with the standard. In the new implementation proposed below, the client will auto-reconnect, so it must store the connect properties and attributes (additional storage requirements).
Let's see what would happen in the different phases of the communication:

1. In the `publish` method, the client checks the QoS level for the packet.
2. If it's a QoS 0 `PUBLISH` packet, the client takes the socket's lock, immediately publishes it and returns.
3. For a higher QoS level, the client doesn't block anymore: it relies on the next `eventLoop` calls to let the publish cycle proceed (it's now possible to do so in the caller code, at the cost of increased stack usage).
4. If it's a QoS 1 or QoS 2 `PUBLISH` packet, the client also stores the packet in a specific buffer (via a callback), and stores the packet ID in a (RcvMax-sized) buffer (one packet-ID buffer per QoS level). The buffer for the QoS 2 level also contains RcvMax booleans telling whether the `PUBREC` packet was received (see below).
5. In `eventLoop`, the client can then receive a `PUBACK`, a `PUBREC`, a `PUBREL`, a `PUBCOMP` or an unrelated packet (like `PUBLISH` or `DISCONNECT` or `PINGRESP`...).
6. Upon acknowledgment, the stored `PUBLISH` packet can be released (on `PUBACK` or `PUBREC`) and the packet ID can be released too (on `PUBACK` or `PUBCOMP`).
7. An incoming `PUBLISH` packet will trigger calling the `messageReceived` callback as usual.
8. The acknowledgment packet is then built (`PUBACK` for QoS 1, or `PUBREC` and `PUBCOMP` for QoS 2) and sent with the socket locked.
9. The received packet ID is released once the `PUBACK` is sent, or once the `PUBCOMP` is sent for QoS 2.
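A hedged sketch of the acknowledgment dispatch such an `eventLoop` could perform (illustrative names; reason codes and properties are omitted), matching steps 6 and 9 above:

```cpp
#include <cstdint>

// Hypothetical sketch of the acknowledgment dispatch in eventLoop.
enum PacketType { PubAck, PubRec, PubRel, PubComp };

struct Client {
    // Stubs standing in for the storage callbacks and the locked socket send.
    void releaseStoredPublish(uint16_t /*id*/) {}   // stored PUBLISH no longer needed
    void releasePacketId(uint16_t /*id*/) {}        // ID becomes reusable
    void sendWithSocketLocked(PacketType, uint16_t /*id*/) {}

    void onAckReceived(PacketType type, uint16_t id) {
        switch (type) {
        case PubAck:                                // QoS 1, sender side: all done
            releaseStoredPublish(id);
            releasePacketId(id);
            break;
        case PubRec:                                // QoS 2, sender side: packet can go,
            releaseStoredPublish(id);               // but the ID lives until PUBCOMP
            sendWithSocketLocked(PubRel, id);
            break;
        case PubComp:                               // QoS 2, sender side: ID can go
            releasePacketId(id);
            break;
        case PubRel:                                // QoS 2, receiver side: answer and
            sendWithSocketLocked(PubComp, id);      // release the ID once PUBCOMP is sent
            releasePacketId(id);
            break;
        }
    }
};
```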
That implementation presents multiple advantages and a few drawbacks. First, it doesn't suffer from the re-entrancy issue (it's possible to publish anytime, even inside the `messageReceived` callback), and there's no possible deadlock (locking is minimal and progress is always made). Callbacks are always called with no lock taken.
For QoS 0, the latency is minimal. For a higher QoS, the latency is minimal for publishing or receiving (since any packet's action is done immediately), but the QoS overhead is delayed (it might take longer, since one must wait for the `messageReceived` callback's execution to continue). This means that if there are multiple serialized `PUBLISH` packets with a high QoS on the line, the second packet will suffer from a longer latency. Finally, it can support publishing with multiple packets pending on the wire.
The main drawback of this implementation is the requirement for additional memory buffers:

- Storing the `CONNECT` parameters for automatic reconnection (which might be a security issue, since they'd be kept in memory). Instead, the implementation will provide a `connectionLost` callback where the user can decide to connect again by providing the necessary parameters. This implies re-entrancy for any method that can handle a network failure.
- Storing the in-flight `PUBLISH` packets. Since the client can't know the maximum `PUBLISH` packet size, creating a packet buffer for this beforehand would be a waste of resources. Instead, the implementation will provide `savePacketBuffer`, `releasePacketBuffer` and `recallPacketBuffer` callbacks where the user code will decide what to do with the packet. A possible solution would be to save the packet by allocating memory for it and deleting it when released (with the big drawback of the memory fragmentation this implies). Or the user could save the source of the packet (the data that was used to generate it, indexed by the packet ID) instead (same drawback, but likely better memory usage). Or the user could copy the packet into a fixed-size circular buffer (this would only work if the packet size is somewhat small). A sketch of the first approach is given at the end of this page.

This implementation was chosen for version 2 of the library.
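For instance, the first approach above (allocate on save, free on release) could be implemented by user code roughly like this (a sketch only; the callback signatures are illustrative, not the library's exact API):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <map>

// Illustrative user-side packet storage: one heap allocation per packet ID.
// Simple, but with the memory fragmentation drawback noted above.
struct HeapPacketStore {
    struct Saved { uint8_t* data; size_t length; };
    std::map<uint16_t, Saved> saved;

    bool savePacketBuffer(uint16_t id, const uint8_t* packet, size_t length) {
        uint8_t* copy = (uint8_t*)malloc(length);
        if (!copy) return false;                  // low-memory system: refuse the publish
        memcpy(copy, packet, length);
        saved[id] = Saved{ copy, length };
        return true;
    }
    const uint8_t* recallPacketBuffer(uint16_t id, size_t& length) {
        auto it = saved.find(id);                 // used when retransmission is needed
        if (it == saved.end()) return nullptr;
        length = it->second.length;
        return it->second.data;
    }
    void releasePacketBuffer(uint16_t id) {       // acknowledgment received: free it
        auto it = saved.find(id);
        if (it == saved.end()) return;
        free(it->second.data);
        saved.erase(it);
    }
};
```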