Good. In fact, in our internal tests with the JavaScript Client Library, a test page running locally that just read the updates (with no DOM operations) could sustain many thousands of updates per second in Chrome.

By the way, if I understand correctly, in your tests you wanted to enforce filtering on events that could not be forwarded immediately, and this is why you set <max_buffer_size> to 0.
Actually, this is not the canonical way to achieve that, because the setting affects all items.
As an alternative, you could use MERGE or DISTINCT mode for your item instead of RAW mode and set the buffer size to 0 for that specific item (if the item is in MERGE mode, this is already the default).
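On the client side, the per-item buffer size can be requested at subscription time. A sketch, assuming the JavaScript Client Library API; the item and field names are hypothetical:

```javascript
// Sketch assuming the Lightstreamer JavaScript Client Library;
// item and field names ("item1", "price", "time") are hypothetical.
var subscription = new Subscription(
  "MERGE",            // a filterable mode, unlike RAW
  ["item1"],
  ["price", "time"]
);
// Keep at most one pending update buffered per item, so that bursts
// are conflated rather than queued (already the default in MERGE mode).
subscription.setRequestedBufferSize(1);
client.subscribe(subscription);
```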

In fact, in a test like yours, setting the buffer size to 0 prevents updates from piling up in the Server while it waits for the client and/or the network to consume the updates already sent.
Note that in this case, queueing can also occur at the TCP buffer level. Lightstreamer does its best to reduce enqueueing in the TCP buffers, but it does not have full control over them, particularly on the browser side.
With polling, this internal queueing is eliminated.
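To illustrate the difference, here is a small standalone sketch (plain JavaScript, not Lightstreamer code) of the two buffering policies: an unlimited buffer, as in RAW mode, queues every update, while a conflating buffer of size 1, as in MERGE mode, only retains the latest state:

```javascript
// Unlimited buffer (RAW-like): every update is queued for delivery.
function queueUnlimited(buffer, update) {
  buffer.push(update);
}

// Conflating buffer of size 1 (MERGE-like): a pending update is
// merged with the next one, so only the latest values survive.
function queueConflating(buffer, update) {
  if (buffer.length > 0) {
    buffer[0] = { ...buffer[0], ...update };
  } else {
    buffer.push(update);
  }
}

// Simulate a burst of 5 updates arriving while the link is busy.
const raw = [];
const merged = [];
for (let price = 1; price <= 5; price++) {
  queueUnlimited(raw, { price });
  queueConflating(merged, { price });
}

console.log(raw.length);      // 5: all updates still queued
console.log(merged.length);   // 1: a single buffered update
console.log(merged[0].price); // 5: the latest value survives
```

This is only a model of the policy, of course; in the real Server the buffering also interacts with the TCP buffers mentioned above.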

Are you interested in polling in order to cope with cases in which the infrastructure doesn't allow streaming, or in order to reduce the internal queueing described above?
In the latter case, polling over WebSockets would provide the best performance;
in the former case, you should use polling over HTTP, as WebSockets would be unavailable in such a scenario.
You can use setForcedTransport to select the desired transport.
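For example, assuming the JavaScript Client Library:

```javascript
// Sketch assuming the Lightstreamer JavaScript Client Library.
// Force polling over WebSockets (to reduce internal queueing):
client.connectionOptions.setForcedTransport("WS-POLLING");
// Or force polling over HTTP (when WebSockets are unavailable):
// client.connectionOptions.setForcedTransport("HTTP-POLLING");
```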

The default client and Server configuration for polling is already adequate, as it does not introduce any pauses between polls (apart from <max_delay_millis>).