I've had some success removing the reliance on a priority queue while still remaining glitch free.
I'm also able to update the nodes within a transaction concurrently. (Using a unique thread for each node ATM, still playing around with the implementation.)
It works the following way:
Each node has a visited flag and a dirty state.
The dirty state has three values: "Dirty", "Clean", and "Unknown".
On Sink Send --> mark the sink node dirty and add its dependent nodes to a processing list.
On End of Transaction -->
- Take a node from the processing list and process its dependencies (unless already visited), marking nodes visited as you go, then move your way up the graph setting each dirty state to either "Dirty" or "Clean".
- At each level up the graph, if none of the dependencies were dirty, the current node is still clean; otherwise the current node is marked dirty and its value state is updated.
Last step, end of transaction --> reset the states of the visited nodes.
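The steps above can be sketched single-threaded for clarity. This is my own minimal reconstruction, not the actual implementation; the `Node`/`DirtyState` names and the diamond-shaped demo graph are assumptions:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum DirtyState {
    Dirty,   // value changed this transaction; dependents must update
    Clean,   // value known unchanged this transaction
    Unknown, // not yet determined; resolved by walking dependencies
}

struct Node {
    visited: bool,
    dirty: DirtyState,
    deps: Vec<usize>, // indices of the nodes this node reads from
}

// Resolve a node's dirty state by first resolving its dependencies,
// marking nodes visited on the way so no node is processed twice.
fn resolve(nodes: &mut Vec<Node>, i: usize) {
    if nodes[i].visited {
        return;
    }
    nodes[i].visited = true;
    let deps = nodes[i].deps.clone();
    let mut any_dirty = false;
    for d in deps {
        resolve(nodes, d);
        if nodes[d].dirty == DirtyState::Dirty {
            any_dirty = true;
        }
    }
    if nodes[i].dirty == DirtyState::Unknown {
        if any_dirty {
            // a dependency changed: the node's update function would run here
            nodes[i].dirty = DirtyState::Dirty;
        } else {
            nodes[i].dirty = DirtyState::Clean;
        }
    }
}

// Last step of the transaction: reset the bookkeeping on visited nodes.
fn reset(nodes: &mut Vec<Node>) {
    for n in nodes.iter_mut() {
        if n.visited {
            n.visited = false;
            n.dirty = DirtyState::Unknown;
        }
    }
}

fn main() {
    // Diamond graph: node 0 is the sink (marked dirty on send),
    // node 3 is the listener.
    //   0 -> 1 -> 3
    //   0 -> 2 -> 3
    let mut nodes = vec![
        Node { visited: false, dirty: DirtyState::Dirty, deps: vec![] },
        Node { visited: false, dirty: DirtyState::Unknown, deps: vec![0] },
        Node { visited: false, dirty: DirtyState::Unknown, deps: vec![0] },
        Node { visited: false, dirty: DirtyState::Unknown, deps: vec![1, 2] },
    ];
    resolve(&mut nodes, 3);
    assert_eq!(nodes[3].dirty, DirtyState::Dirty);
    reset(&mut nodes);
    assert!(nodes.iter().all(|n| !n.visited && n.dirty == DirtyState::Unknown));
}
```

Because the visited flag short-circuits `resolve`, node 0 is processed only once even though both node 1 and node 2 depend on it, which is what keeps the diamond glitch free without an ordering queue.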
All nodes are protected by a Mutex, and either dependencies or dependents are able to be processed in parallel, locking only when two or more threads are trying to visit the same node.
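A minimal sketch of that per-node locking idea (again my own illustration, not the actual code): two threads race to visit a shared node, both take its Mutex in turn, and only the first to see `visited == false` actually processes it.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical node: a visited flag plus a counter standing in
// for "how many times the update work actually ran".
struct Node {
    visited: bool,
    processed_count: u32,
}

fn main() {
    let node = Arc::new(Mutex::new(Node { visited: false, processed_count: 0 }));

    // Two threads arrive at the same node, e.g. via the two arms of a diamond.
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let node = Arc::clone(&node);
            thread::spawn(move || {
                // Blocks only if the other thread currently holds the lock.
                let mut n = node.lock().unwrap();
                if !n.visited {
                    n.visited = true;
                    n.processed_count += 1; // stand-in for the node's update
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    // Exactly one thread did the work, regardless of arrival order.
    assert_eq!(node.lock().unwrap().processed_count, 1);
}
```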
The end effect is that all ancestors of listeners are visited just once if the listener is going to fire, and only nodes whose state needs to change execute their update function. It can potentially visit more nodes than the current push-based implementation, but it will visit fewer nodes than a fully pull-based implementation. It sits somewhere between the two, and avoids the O(n log n) cost of the usual priority queue.
Also, memory management is not done yet in this version. Still playing around.
I'll try to explain it better.
For the current node being processed, ensure all dependencies have been processed first. If any dependency is discovered to have changed, update the state of the current node. Then, if the current node ends up changing value-wise, process its dependent nodes concurrently using the thread pool.
I have yet to actually use a thread pool though; ATM I'm just creating new threads all the time.
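That step could look something like the following sketch, under my own assumptions: dependencies of the spawned nodes are already resolved, one thread per dependent (no pool, matching the current state), and the "update function" is a stand-in that just sums dependency values.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

#[derive(Clone, Copy, PartialEq, Debug)]
enum DirtyState { Dirty, Clean, Unknown }

struct Node {
    dirty: DirtyState,
    value: i64,
    deps: Vec<usize>, // indices of the nodes this node reads from
}

// Process one dependent: lock each dependency briefly to read its dirty
// state and value, then lock this node and update it only if a
// dependency actually changed. Assumes dependencies are already resolved.
fn process(nodes: &Arc<Vec<Mutex<Node>>>, i: usize) {
    let deps = nodes[i].lock().unwrap().deps.clone();
    let mut any_dirty = false;
    let mut sum = 0;
    for d in deps {
        let dep = nodes[d].lock().unwrap();
        if dep.dirty == DirtyState::Dirty {
            any_dirty = true;
        }
        sum += dep.value;
    }
    let mut n = nodes[i].lock().unwrap();
    if any_dirty {
        n.value = sum; // stand-in for the node's real update function
        n.dirty = DirtyState::Dirty;
    } else {
        n.dirty = DirtyState::Clean;
    }
}

fn main() {
    // Node 0 is the sink whose value just changed; nodes 1 and 2 read it.
    let nodes: Arc<Vec<Mutex<Node>>> = Arc::new(vec![
        Mutex::new(Node { dirty: DirtyState::Dirty, value: 5, deps: vec![] }),
        Mutex::new(Node { dirty: DirtyState::Unknown, value: 0, deps: vec![0] }),
        Mutex::new(Node { dirty: DirtyState::Unknown, value: 0, deps: vec![0] }),
    ]);

    // One thread per dependent, as described in the post (no pool yet).
    let handles: Vec<_> = [1usize, 2]
        .into_iter()
        .map(|i| {
            let nodes = Arc::clone(&nodes);
            thread::spawn(move || process(&nodes, i))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }

    assert_eq!(nodes[1].lock().unwrap().value, 5);
    assert_eq!(nodes[2].lock().unwrap().value, 5);
}
```

Each thread only ever holds one node's lock at a time, so two threads contend only when they touch the same node, which matches the "lock only on shared visits" behaviour described above.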