Yes - and additionally, both of them technically re-use the existing state between draw calls. For example, if you bind some data to a buffer in WebGL, it's there until you do something else with it. Similarly, in Three.JS, if you set up some primitives in a scene, they're there until you explicitly change them.
However, in real-world use, with WebGL you're really going to be destroying and re-creating that state every tick. OK, that's a slight exaggeration, because it needs a very minimal caching system - for example, not switching shader programs if the desired one is already the current one - but that's a super thin layer.
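To give a feel for what I mean by "super thin layer", here's roughly the kind of check I have in mind (a minimal sketch - the class and method names are just illustrative, not from any real library or my work code):

```typescript
// Minimal sketch of the "don't re-bind what's already bound" idea.
class ThinGlCache {
  private currentProgram: WebGLProgram | null = null;

  constructor(private gl: WebGLRenderingContext) {}

  // Only switch programs when the requested one differs from the bound one.
  useProgram(program: WebGLProgram): void {
    if (this.currentProgram !== program) {
      this.gl.useProgram(program);
      this.currentProgram = program;
    }
  }
}
```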
This difference - constant re-creation vs. genuinely keeping the graphics state in memory - is a biggie, imho.
I'll give a practical example.
Let's say you want to detect when an object is touched.
With FRP the flow is like:
event -> send -> logic -> graph / state -> listen -> render. In other words, which object counts as touched is set by a declarative statement over the event plus the current state.
This is possible not only because of FRP, but because there's no need to maintain anything after the render step.
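Concretely, a rough sketch of that flow with a Sodium-style API (I'm writing this from memory against sodium-typescript-style names like StreamSink / snapshot / hold, so treat the exact package and the hit-test helper as assumptions rather than a reference):

```typescript
import { Cell, StreamSink } from "sodiumjs";

// Pure data describing the scene - no renderer objects anywhere.
interface Circle { id: string; x: number; y: number; r: number; }
interface Scene { circles: Circle[]; }

const canvas = document.querySelector("canvas") as HTMLCanvasElement;

// event -> send: raw pointer positions enter the graph here.
const sPointerDown = new StreamSink<{ x: number; y: number }>();
canvas.addEventListener("pointerdown", e =>
  sPointerDown.send({ x: e.offsetX, y: e.offsetY })
);

// A constant scene cell for illustration; in a real app this would itself
// be derived from other streams/cells.
const cScene = new Cell<Scene>({
  circles: [{ id: "ball", x: 100, y: 100, r: 20 }],
});

// logic -> graph / state: "which object is touched" is a pure function of
// the event plus the current scene, held as a cell.
const cTouched: Cell<string | null> = sPointerDown
  .snapshot(cScene, (p, scene) =>
    scene.circles.find(c => (p.x - c.x) ** 2 + (p.y - c.y) ** 2 <= c.r ** 2)?.id ?? null
  )
  .hold(null);

// listen -> render: the render step just reads the cells; nothing needs to
// be kept around after it.
cTouched.listen(id => console.log("touched:", id));
```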
With Three.JS and PIXI, it gets inverted. Those systems start with:
state -> render -> listen -> event -> callback. In other words, if you want touch detection on an object, you have to dispatch it from the object, and that object only exists via render. It gets worse when you're integrating this inverted system with FRP, because the flow then has to continue on into the FRP pipeline above. So you end up with:
state -> render -> listen -> event -> callback/send (to FRP) -> logic -> graph / state -> listen -> render. And that's not even the whole picture, because the final state there has to be looped back around to the initial one before listening can detect events again.
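For contrast, here's a rough sketch of what bridging PIXI's object-dispatched events back into FRP tends to look like (PIXI v5/v6-style interaction API from memory, and the sink/texture names are just placeholders):

```typescript
import * as PIXI from "pixi.js";
import { StreamSink } from "sodiumjs";

const app = new PIXI.Application({ width: 800, height: 600 });
document.body.appendChild(app.view as HTMLCanvasElement);

// The FRP entry point we have to loop back into.
const sTouchedId = new StreamSink<string>();

// state -> render: the object has to exist as a retained renderer object...
const ball = PIXI.Sprite.from("ball.png");
ball.x = 100;
ball.y = 100;
app.stage.addChild(ball);

// listen -> event -> callback/send: ...because touch detection is dispatched
// from that object, and only then forwarded into the FRP graph.
ball.interactive = true;
ball.on("pointerdown", () => sTouchedId.send("ball"));

// From here the usual logic -> graph / state -> listen -> render chain runs,
// and its output has to be pushed back onto these retained sprites before
// the next event can be detected.
```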
That's not even getting into the issues of destroying objects cleanly, which Sodium handles inherently (I think?), while these other systems need "dispose()"-type calls... so when do you call that? If an object's removal is just a declarative expression, you now need to add extra imperative calls on top of it. It gets messy.
Yes - they can be integrated, and I totally believe that there are some clean solutions (as no doubt your demo will show!)
However, it really adds layers of complexity that simply disappear when you do the rendering by hand at the end of the pipeline. For example: an object of pure data disappeared? It won't get rendered. Simple as that. (OK, you might still want to clean up loaded data - but that's a different thing, and by default it's not what happens when you dispose objects in PIXI.)
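In sketch form (reusing the pure-data Scene shape from the earlier snippet; drawCircle is a hypothetical thin helper, not a real API):

```typescript
interface Circle { id: string; x: number; y: number; r: number; }
interface Scene { circles: Circle[]; }

// Pure data in, draw calls out. If a circle was removed from scene.circles
// upstream, there's nothing to dispose of here - it simply never gets drawn.
function render(gl: WebGLRenderingContext, scene: Scene): void {
  gl.clear(gl.COLOR_BUFFER_BIT);
  for (const c of scene.circles) {
    drawCircle(gl, c); // hypothetical thin helper issuing the actual GL calls
  }
}

// Stand-in signature so the sketch is self-contained.
declare function drawCircle(gl: WebGLRenderingContext, c: Circle): void;
```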
On the flip side, doing gl calls by hand adds its own layer of headache. I wouldn't say complexity, though - it's just annoying that you have to spend time on low-level stuff when there are great renderers out there.
I'm dealing with this on a work project and I can't share that code - but I am picking away at a pong demo, slowly but surely, which I'll share here when I can get to it. I can make the following guarantees about it:
- The proprietary WebGL layer is a very thin wrapper, with extremely basic caching checks
- The data flows elegantly from top to bottom, with no need to re-dispatch from within objects/cells.
Though it won't have the touch detection stuff mentioned above. (For my work project I'm not using bounding boxes or the like; I'm using the "color picker" / framebuffer approach described here: http://www.opengl-tutorial.org/miscellaneous/clicking-on-objects/picking-with-an-opengl-hack/ ... it does require sending info from the scene, but it's a clear cycle, since it just sets data on render() and reads that raw pixel data (not the graph data) on event send.)
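For reference, the read-back half of that technique looks roughly like this (a sketch of the standard approach from that tutorial, not my work code; it assumes the scene was just rendered into an offscreen framebuffer with each object drawn in a flat color that encodes its id):

```typescript
// Read the single pixel under the pointer from the picking framebuffer and
// decode it back into an object id (0 meaning "nothing was hit").
function pickObjectId(
  gl: WebGLRenderingContext,
  pickingFbo: WebGLFramebuffer,
  x: number,
  y: number
): number {
  gl.bindFramebuffer(gl.FRAMEBUFFER, pickingFbo);
  const pixel = new Uint8Array(4);
  // readPixels measures y from the bottom; this assumes the framebuffer
  // matches the canvas size.
  gl.readPixels(x, gl.drawingBufferHeight - y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return pixel[0] + pixel[1] * 256 + pixel[2] * 65536;
}
```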