Interest management is the process of delivering interesting data to those interested. An entity ensures that its sensor model has all the correct entities available to it by subscribing to the set of categories known to contain those entities. That subscription fills the local cache with replicas of the interesting entities. Then the sensor can be implemented to immediately query the cache knowing that it contains accurate data.
The policy the application uses to fill the cache can be as simple as a grid, or as complex as a dynamically determined set of clusters of nearby entities. In either case, the consumer knows the categories used by any entity that might be within its area of interest, field of view, line of sight, range of hearing, or whatnot. Note that the producer generally produces into a single category.
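For instance, here is a minimal sketch of the grid policy in Python (the cell size, grid width, and function names are my own illustrative choices, not any particular engine's API): the producer maps its position to exactly one integer category, and the consumer maps its area of interest to the set of categories that could contain interesting entities.

```python
CELL_SIZE = 100.0   # world units per grid cell (assumed tuning value)
GRID_WIDTH = 1024   # cells per row, used to flatten (x, y) into one integer


def category_for_position(x: float, y: float) -> int:
    """Producer side: the single category this entity publishes into."""
    cx = int(x // CELL_SIZE)
    cy = int(y // CELL_SIZE)
    return cy * GRID_WIDTH + cx


def categories_for_interest(x: float, y: float, radius: float) -> set[int]:
    """Consumer side: every category whose cell overlaps the area of interest."""
    lo_x = int((x - radius) // CELL_SIZE)
    hi_x = int((x + radius) // CELL_SIZE)
    lo_y = int((y - radius) // CELL_SIZE)
    hi_y = int((y + radius) // CELL_SIZE)
    return {cy * GRID_WIDTH + cx
            for cy in range(lo_y, hi_y + 1)
            for cx in range(lo_x, hi_x + 1)}
```

A producer at (250, 130) publishes into one cell's category; a consumer with a 150-unit sensor at the same spot subscribes to the handful of cells around it.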
So now producers send updates, and interested consumers get them. But what if someone moves? They will change the category they produce into, or the set of categories they consume from. But what about latency? That is where it gets "fun". A consumer can use its maximum velocity and an estimate of subscription latency to compute the extra distance needed to guarantee that the cache contains those moving entities even when the producer and consumer are moving toward each other at maximum velocity. That ensures its cache is populated with any entities that will be interesting a few moments from now. The cache will necessarily hold more entities than are logically visible, so the sensor algorithm must filter out the ones that are too far away but were delivered "just in case".
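That padding might look like this (the function names and parameters are hypothetical): the worst case is both parties closing at top speed for as long as a new subscription takes to start delivering replicas, and the sensor still filters by the real range.

```python
def padded_subscription_radius(sensor_range: float,
                               consumer_max_speed: float,
                               producer_max_speed: float,
                               subscription_latency_estimate: float) -> float:
    """Worst case: producer and consumer close on each other at top speed
    for as long as it takes a new subscription to start delivering data."""
    closing_speed = consumer_max_speed + producer_max_speed
    return sensor_range + closing_speed * subscription_latency_estimate


def is_logically_visible(distance: float, sensor_range: float) -> bool:
    """The cache holds 'just in case' entities; the sensor still filters
    by the real range before reporting a contact."""
    return distance <= sensor_range
```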
But what about producers that are moving? We use latency hiding and bandwidth optimization to reduce the number of updates being sent, and then predict where the producer is at the current time. But that increases the latency of the produced data. OK, so we increase the subscription range even farther. But now you have to factor in the producer's maximum velocity, and you don't know what type of entity it is. So you have to use the maximum velocity of the fastest entity in the game.
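A sketch of that step, assuming first-order dead reckoning and illustrative numbers: the already-padded radius grows again to cover the producer's update interval, and because the consumer can't know what kind of entity sits in a cell, the pad uses the fastest speed in the whole game.

```python
FASTEST_SPEED_IN_GAME = 600.0  # e.g. the jet fighter, in world units per second


def dead_reckon(last_pos: tuple[float, float],
                last_vel: tuple[float, float],
                dt: float) -> tuple[float, float]:
    """First-order prediction of where the producer is `dt` seconds
    after its last state update."""
    return (last_pos[0] + last_vel[0] * dt,
            last_pos[1] + last_vel[1] * dt)


def radius_with_producer_latency(padded_radius: float,
                                 producer_update_interval: float) -> float:
    """Grow the already-padded radius to cover the producer's update latency,
    using the worst-case speed because the entity type is unknown."""
    return padded_radius + FASTEST_SPEED_IN_GAME * producer_update_interval
```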
That can suck if there are a few jet fighters but tons of infantry. The solution? Separate the fast movers from the slow movers and use two distinct "layers" of categories. You subscribe out quite a bit farther for fast movers, but you won't get any slow movers in that wide subscription because they are produced into a different layer, and there won't be very many fast movers to consume. If you think about it, there is just no way to consume all the data from far-away fast movers that might suddenly move toward you if there are a lot of them. The worst case is an entity that can teleport anywhere instantly: consumers would have to be subscribed to everything, or you would have to change the game design to hide the latency of subscribing on demand (a puff of smoke, a moment of invulnerability just after the teleport, or something).
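A sketch of the two-layer idea, where the cutoff speed, layer offset, and top speeds are all assumptions of mine: the producer publishes into exactly one layer, and the consumer computes a different padded radius for each layer using that layer's worst-case speed.

```python
FAST_SPEED_CUTOFF = 50.0       # above this, an entity is a "fast mover"
FAST_LAYER_OFFSET = 1_000_000  # keeps fast-mover categories disjoint from slow ones

INFANTRY_TOP_SPEED = 10.0      # worst case for the slow layer
JET_TOP_SPEED = 600.0          # worst case for the fast layer


def producer_layer_category(base_category: int, max_speed: float) -> int:
    """Producer side: publish into the slow layer or the fast layer, never both."""
    if max_speed > FAST_SPEED_CUTOFF:
        return FAST_LAYER_OFFSET + base_category
    return base_category


def layer_radii(sensor_range: float, consumer_max_speed: float,
                latency: float) -> tuple[float, float]:
    """Consumer side: a modest pad for the slow layer, a much wider one
    for the fast layer."""
    slow = sensor_range + (consumer_max_speed + INFANTRY_TOP_SPEED) * latency
    fast = sensor_range + (consumer_max_speed + JET_TOP_SPEED) * latency
    return slow, fast
```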
The beauty of category-based subscription is that these game-specific factors can be reduced to a set of integer values that can be computed by both producer and consumer. The consumer doesn't need to know whether there are entities in an interesting category; if there are, the publish/subscribe system will deliver them. All that system does is match up integer values: hey, this thing goes to these guys. The system doesn't need to assume that the categories are related to geometry, or that they are a linear function, or anything else. You can use them as sets, or as unique addresses for multipoint-to-point communication, or for radio broadcasts on a single station. Or anything.
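Which is to say, the matching engine can be as dumb as this toy (a hypothetical API, not any particular middleware): it only intersects integers and never interprets them as geometry, layers, or anything else.

```python
from collections import defaultdict


class CategoryBus:
    """Routes updates purely by integer category membership."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(set)   # category -> set of consumer ids

    def subscribe(self, consumer_id: str, categories: set[int]) -> None:
        for c in categories:
            self._subscribers[c].add(consumer_id)

    def unsubscribe(self, consumer_id: str, categories: set[int]) -> None:
        for c in categories:
            self._subscribers[c].discard(consumer_id)

    def publish(self, category: int, update: object) -> list[tuple[str, object]]:
        # "Hey, this thing goes to these guys": pure integer matching.
        return [(consumer, update) for consumer in self._subscribers[category]]
```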
But my point stands: you have to use prediction to reduce bandwidth, and you also have to use prediction when subscribing, or you might miss something "interesting".