To answer the general question first: I can confirm that, when filtering is possible, it is performed on a client-by-client basis.
Hence, under particular race conditions, identical subscriptions may receive data in different ways.

That said, the case you show is not the typical race condition that can arise from the underlying frequency limit.
In this specific case, the Server had enough time to send the event, yet it didn't.

My only explanation is that the connection could not operate at full speed.
If previous writes on the connection could not be flushed quickly, the subsequent write may have blocked, and the next event may have been held back long enough to be superseded (covered) by the following one.
I also see that the affected session was reopened a few minutes earlier, after a failed attempt to recover a previous session, which must have had connectivity issues.
So the communication may be disturbed in some way.

As an experiment, you could try enlarging the send buffer used by the Server; by default it is small, because in normal usage scenarios filtering is encouraged.
For instance, you can set
<sendbuf>5000</sendbuf>
and see if there is any improvement.
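For reference, here is a minimal sketch of where the element would go, assuming the standard lightstreamer_conf.xml layout; the exact position may differ in your Server version, so please follow the commented template already present in your configuration file:

<lightstreamer_conf>
    ...
    <!-- Size of the send buffer used for client connections;
         enlarged here as an experiment. -->
    <sendbuf>5000</sendbuf>
    ...
</lightstreamer_conf>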

Unfortunately, with COMMAND mode you cannot leverage the client's setRequestedBufferSize setting, as the buffer in COMMAND mode always holds a single update per key.

BTW, can you confirm that the Metadata Adapter doesn't introduce any bandwidth limit?
This is possible if your adapter configuration was inspired by our demo examples.
You may send us your adapters.xml for a check.
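For reference, in the demo configurations such a limit typically comes from a Metadata Adapter parameter; the excerpt below is only a hypothetical sketch of what it may look like in your adapters.xml (the adapter id, class, and value shown are just illustrative demo defaults):

<adapters_conf id="DEMO">
    <metadata_provider>
        <adapter_class>com.lightstreamer.adapters.metadata.LiteralBasedProvider</adapter_class>
        <!-- If inherited from the demos, this parameter caps the bandwidth
             allowed to each user; remove or raise it to rule out the limit. -->
        <param name="max_bandwidth">40</param>
    </metadata_provider>
    ...
</adapters_conf>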