If I understand correctly, you would like to try this in order to make the update packets flow faster and to prevent race conditions that could lead to conflation.
BTW, did you try the experiment with...
Type: Posts; User: DarioCrivelli
For the general question posed, I can confirm that, when filtering is possible, it is done on a client-by-client basis.
This, in the case of particular race conditions, accounts for different identical...
In a general scenario, separate items perform better, because it may happen that only part of the 100 items receives an update while the others don't.
In this case, only the updates for items that really...
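The counting argument above can be sketched in plain Python (this is illustrative pseudo-modelling, not Lightstreamer code; the function names are invented for the example). It contrasts 100 separate items, where only changed entries fire an update, with a single item carrying the whole collection in one payload, where any change re-sends everything:

```python
# Hypothetical sketch: separate items vs. one aggregated item.
# Assumption: the aggregated variant packs the whole collection into a
# single payload, so any change re-sends the full state.

def updates_for_separate_items(snapshot, changes):
    """One item per entry: an update fires only for entries whose value changed."""
    return {name: value for name, value in changes.items()
            if snapshot.get(name) != value}

def update_for_single_item(snapshot, changes):
    """One aggregated item: any change yields one update carrying the whole state."""
    merged = {**snapshot, **changes}
    return merged if merged != snapshot else None

snapshot = {f"item{i}": 0 for i in range(100)}
changes = {"item3": 7, "item42": 9}
# Separate items: 2 small updates; aggregated item: one 100-entry payload.
```

With 100 entries and only 2 changes, the separate-items variant produces 2 small updates, while the aggregated variant carries all 100 entries again.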
The log shows that there is an underlying frequency limit of 3 updates per second, due to the license in use.
This accounts for the suppression of some updates when two or more are produced in short...
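A minimal sketch of how such a per-item frequency cap suppresses intermediate updates (plain Python, not Lightstreamer internals; the conflation model here is an assumption for illustration): updates that arrive before the minimum gap has elapsed replace the pending one rather than being queued, so only the latest value survives.

```python
# Hypothetical model of frequency-limited (filtered) dispatching.

def conflate(updates, max_per_second):
    """updates: list of (timestamp_seconds, value) in time order.
    Returns the values actually delivered under the cap: an update
    arriving before the minimum gap has elapsed overwrites the
    pending one instead of being queued (conflation)."""
    min_gap = 1.0 / max_per_second
    delivered = []
    last_sent_at = None
    pending = None  # (earliest_delivery_time, value)
    for ts, value in updates:
        # flush a pending value once its delivery slot has been reached
        if pending is not None and ts >= pending[0]:
            delivered.append(pending[1])
            last_sent_at = pending[0]
            pending = None
        if last_sent_at is None or ts - last_sent_at >= min_gap:
            delivered.append(value)
            last_sent_at = ts
        else:
            # too soon: overwrite any pending value (the older one is lost)
            pending = (last_sent_at + min_gap, value)
    if pending is not None:
        delivered.append(pending[1])
    return delivered
```

With a 3-updates-per-second cap, three updates produced within 0.2 s collapse to two deliveries: the middle value is suppressed.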
Preventing filtering can have an effect when the overall update flow to a client is so large (or the client/network so slow) that not all available updates can be processed by the...
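The consequence of preventing filtering can be sketched with simple arithmetic (an illustrative model, not Lightstreamer behavior): with unfiltered dispatching every update must be queued for the client, so a sustained production rate above the consumption rate makes the backlog grow instead of being conflated away.

```python
# Hypothetical sketch: backlog growth under unfiltered dispatching.

def backlog_after(produced_per_sec, consumed_per_sec, seconds):
    """Queued-but-undelivered updates after `seconds` of sustained load,
    assuming no update may be dropped (no filtering)."""
    return max(0, (produced_per_sec - consumed_per_sec) * seconds)
```

For example, a client consuming 30 updates/s from a 100 updates/s flow accumulates 700 queued updates after 10 seconds; with filtering allowed, those would simply be conflated.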
Sorry; if my reference to "two sessions" raises doubts, then please disregard it and send what you have available.
What's important is that the subscriptions are included in the log, so that we can...
The log snippet includes 5 sessions, whereas I understood that two sessions were enough to reproduce the issue.
This complicates the analysis. Could you please produce a log with only two sessions?...
Yes, the client application can set a maximum frequency on the updates received, on a per-item basis.
The Metadata Adapter can also set such a maximum frequency on a per-item basis, which may further...
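Since both sides can impose a cap, the frequency actually applied to an item is the tighter of the two. A minimal sketch of that combination rule (the function name is invented for illustration; this is not an actual Lightstreamer API call):

```python
# Hypothetical sketch: combining the client-requested and
# Metadata-Adapter-imposed frequency caps for one item.

def effective_max_frequency(client_requested, adapter_limit):
    """Both values in updates/second; None means 'unlimited'.
    The lower (stricter) of the two caps wins."""
    caps = [f for f in (client_requested, adapter_limit) if f is not None]
    return min(caps) if caps else None
```

So a client requesting 10 updates/s on an item capped at 3 updates/s by the Metadata Adapter still receives at most 3 updates/s.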
The only scenario that I can recall is when the JSON data in the COMMAND item is much heavier than the field-based data in the MERGE item and the client is very slow at handling the updates.
Apart...