Nexus Project: WebGL Client/Server Communications Test using RDF over WebSockets

A video from July 25, 2011 showing the first successful test run of my Nexus Project's WebGL client/server communications using HTML5 WebSockets rather than HTTP polling.  This visualization shows Friend of a Friend (FOAF) RDF graph data being displayed in 3D as its layout is being determined by a 3D force-directed layout algorithm.  I got tired of digging up the video on my iPhone to show people, so I decided to post it.  Many things have been done since this video (latest browser support, Jetty 8, GLGE 0.9, speed improvements, and better screen capture than my iPhone too ;-)  I have been considering a different RDF serialization, since N-TRIPLES is hopelessly uncompressed, but it made for the easiest implementation since N-TRIPLES parsers are easy to write in JavaScript.  Jena also supports N-TRIPLES serialization, so nothing had to be done on the server end of things.  I was just at ISWC 2012 in Boston and it was suggested to me to use Turtle (I was also considering JSON-LD or even a binary RDF format), but honestly, the speed of N-TRIPLES is sufficient for now and I would rather work towards a first release of the software.  It's too alluring to endlessly tinker (and I love to tinker, by the way).
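To give a sense of why N-TRIPLES made for the easiest implementation, here is a minimal sketch of a line-based parser in JavaScript.  This is illustrative only, not the actual Nexus client code; it handles the common URI, blank node, and plain literal cases and skips datatype/language tags:

```javascript
// Minimal N-Triples line parser -- a sketch, not the Nexus client's parser.
// Handles <uri>, _:blankNode, and "literal" terms.
function parseNTriplesLine(line) {
  // One term: a <URI>, a blank node label, or a quoted literal.
  const term = /<[^>]*>|_:[A-Za-z][A-Za-z0-9]*|"(?:[^"\\]|\\.)*"/g;
  const parts = line.match(term);
  if (!parts || parts.length < 3) return null; // blank line or comment
  const strip = (t) =>
    t.startsWith("<") ? t.slice(1, -1) :
    t.startsWith('"') ? t.slice(1, -1) : t;
  return { subject: strip(parts[0]),
           predicate: strip(parts[1]),
           object: strip(parts[2]) };
}

// Parse a whole N-Triples document (one triple per line).
function parseNTriples(doc) {
  return doc.split("\n")
            .map(parseNTriplesLine)
            .filter((t) => t !== null);
}
```

Because the format is one triple per line, the whole parser is a regex and a split, which is exactly what makes it attractive for a first WebGL client implementation.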


RDF Triples over HTML5 WebSockets

From the beginning, I wanted Nexus to be a collaborative visualization system allowing multiple clients in multiple locations to see the same visualizations in real-time.  The issue that arises here is knowing "where" in the 3D semantic web visualization the other clients (people/avatars) are and which direction they are looking.  In the 3D digital world, you have the concept of a "camera".  This is essentially your point-of-view in a particular 3D simulation.  As the camera moves, your view of the model changes as well.  In order to know where the other clients are in the simulation, the camera position and rotation data on all clients are converted to RDF triples and then sent to the Nexus server to be resent and synchronized to all other clients.  Nexus eats, breathes, and internalizes everything as RDF.  HTTP polling would not work well as a transport for these triples, especially with a dozen or more clients all trying to synchronize with each other.  The solution is sending the RDF N-Triples using the HTML5 WebSocket protocol.
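A sketch of what serializing a camera pose into triples might look like on the client.  The property names and client URI here are made up for illustration; they are not the actual Nexus vocabulary:

```javascript
// Sketch: serialize a client's camera pose as N-Triples for the server.
// The nex: property names and the client URI are illustrative assumptions,
// not the real Nexus vocabulary.
function cameraPoseToNTriples(clientUri, position, rotation) {
  const vec = (v) => '"' + v.join(",") + '"';  // pack a vector as a literal
  return [
    "<" + clientUri + "> <nex:cameraPosition> " + vec(position) + " .",
    "<" + clientUri + "> <nex:cameraRotation> " + vec(rotation) + " .",
  ].join("\n");
}
```

Each client would push a small batch like this to the server whenever its camera moves, and the server would fan the triples back out to every other client.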

What are WebSockets?  The WebSocket protocol is a bi-directional, full-duplex communications protocol that emerged as part of the HTML5 effort.  WebSockets allow my WebGL clients to talk back and forth with the Nexus server without resorting to HTTP polling.  I will be adding WebSockets to my OpenSimulator client as well.
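In the browser this amounts to very little code.  A hypothetical sketch of how a WebGL client might open the socket and receive batches of N-Triples (the `/nexus` path and the newline-delimited message framing are assumptions, not the actual Nexus protocol):

```javascript
// Sketch: open a WebSocket to the Nexus server and hand incoming
// N-Triples lines to a callback. URL path and framing are assumptions.
function nexusSocketUrl(host, port) {
  // Jetty serves HTTP and WebSockets on the same IP and port,
  // so the ws:// URL mirrors the page's HTTP address.
  return "ws://" + host + ":" + port + "/nexus";
}

function openNexusSocket(url, onTriples) {
  const ws = new WebSocket(url);  // standard browser WebSocket API
  ws.onmessage = (event) => {
    // Assume each message is a batch of newline-delimited N-Triples.
    onTriples(event.data.split("\n").filter((l) => l.trim().length > 0));
  };
  return ws;
}
```

The same socket is used in the other direction: camera-pose triples and SPARQL results are just more text frames.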

I've embedded Jetty in Nexus, so Apache Tomcat is no longer necessary to run Nexus, which simplifies deployment of the Nexus server software.  Jetty also has a nice, clean HTML5 WebSockets implementation and allows me to do both HTTP and WebSockets on the same IP and port.  Nexus client/server communications are all just streams of RDF triples going in both directions over the HTML5 WebSockets protocol.

Here is my poster from the 2011 Gordon Conference on Visualization in Science and Education a couple of weeks ago, where I presented the progress so far on Nexus.



Nexus WebGL 3D RDF client in Technicolor

It took less time than I thought it would, but here is an updated version of the 3D FOAF graph from my last posting, with node sizes determined by the log base 10 of the number of links into a particular node.  The Coulomb's-law repulsion for the larger nodes is adjusted so that they "push" out harder to accommodate the larger spheres, preventing sphere clashes.  This image was taken with WebGL running in Chrome.
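The sizing rule above can be sketched in a couple of lines.  The base radius and charge constants here are guesses for illustration, not Nexus's actual values:

```javascript
// Sketch of the node-sizing rule: radius grows with log10 of the number
// of inbound links, and the Coulomb's-law charge is scaled up with the
// radius so big spheres push neighbors out far enough to avoid clashes.
// baseRadius and baseCharge are illustrative constants.
function nodeRadius(inboundLinks, baseRadius = 1.0) {
  // log10(1) = 0, so a node with a single inbound link keeps the base radius.
  return baseRadius * (1 + Math.log10(Math.max(1, inboundLinks)));
}

function nodeCharge(inboundLinks, baseCharge = 100.0) {
  // Larger nodes "push" harder, in proportion to their radius.
  return baseCharge * nodeRadius(inboundLinks);
}
```

So a node with 100 inbound links gets three times the base radius of a leaf node, rather than 100 times, which keeps hub nodes visible without letting them dominate the scene.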

Next on the agenda for additional functionality is the actual display of text labels over subjects, predicates, and objects.  Also to be added is WebGL camera and avatar positioning data.  What's this?  In the OpenSimulator client, dozens of people can view and interact with the same RDF model/structure.  Where one of those people is looking or focusing their attention is indicated by their 3D cursor or avatar.  However, this leaves the WebGL client users in the dark as to what the OpenSimulator users and/or other WebGL clients are doing in the simulation.  I am planning to synchronize this information between all of the clients by streaming the avatar position data (or camera position data in the case of WebGL) back to the Nexus server, where it will be pushed out to all clients in the form of more RDF triples.

The SPARQL commands for the colors and such for this image are as follows:

1) Make everything blue
insert {?rnode <nex:color> "0,0,1"} where {?node <nex:rnode> ?rnode}
insert {?pnode <nex:color> "0,0,1"} where {?node <nex:pnode> ?pnode}

2) Color all literals white
insert {?lnode <nex:color> "1,1,1"} where {?node <nex:lnode> ?lnode}

3) Color all foaf:knows triples red
modify delete {?rnode <nex:color> "0,0,1"} insert {?rnode <nex:color> "1,0,0"}  where {?node <nex:rnode> ?rnode . ?node foaf:knows ?o }
modify delete {?pnode <nex:color> "0,0,1"} insert {?pnode <nex:color> "1,0,0"}  where {?node <nex:pnode> ?pnode . ?node rdf:predicate foaf:knows }

4) Color all rdf:type triples green
modify delete {?rnode <nex:color> "0,0,1"} insert {?rnode <nex:color> "0,1,0"}  where {?node <nex:rnode> ?rnode . ?node rdf:type ?o }
modify delete {?pnode <nex:color> "0,0,1"} insert {?pnode <nex:color> "0,1,0"}  where {?node <nex:pnode> ?pnode . ?node rdf:predicate rdf:type }

5) Make everything shiny
insert {?rnode <nex:shiny> "3"} where {?node <nex:rnode> ?rnode}
insert {?pnode <nex:shiny> "3"} where {?node <nex:pnode> ?pnode}
insert {?lnode <nex:shiny> "3"} where {?node <nex:lnode> ?lnode}

Yes, I am planning to come up with a far easier user interface than raw SPARQL. :-)


3D RDF FOAF in WebGL-HTML5 linked to OpenSimulator

The adjacent image is of Tim Berners-Lee's FOAF file imaged with a new HTML5 / WebGL client I am developing for my Nexus RDF visualization server. WebGL allows for sophisticated 3D graphics within a web browser with no plug-in required.  The visualization is in 3D with a layout determined by a force-directed algorithm driven by the Nexus server.  The below color image is also Tim Berners-Lee's FOAF file, imaged in the same fashion, but from within an OpenSimulator region.  The twist is that both images are created from the same server session.  In other words, the session is occurring concurrently in the HTML5/WebGL client and the OpenSimulator region, allowing multiple users in the OpenSimulator region to collaborate in real-time with multiple HTML5/WebGL clients.
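For readers unfamiliar with force-directed layout, a single iteration of the general technique might look like the sketch below: Coulomb repulsion between every node pair plus spring attraction along graph edges.  This is a generic textbook sketch with arbitrary constants; Nexus's server-side algorithm may differ in its details:

```javascript
// Sketch of one 3D force-directed layout step: Coulomb repulsion between
// every node pair plus Hooke (spring) attraction along graph edges.
// nodes: [{pos: [x, y, z]}]; edges: [[i, j]]. All constants are illustrative.
function layoutStep(nodes, edges, repulsion = 1.0, spring = 0.05, dt = 0.1) {
  const forces = nodes.map(() => [0, 0, 0]);
  const sub = (a, b) => a.map((v, k) => v - b[k]);
  const len = (v) => Math.hypot(v[0], v[1], v[2]) || 1e-9;

  // Coulomb repulsion: every pair pushes apart with 1/r^2 falloff.
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const d = sub(nodes[i].pos, nodes[j].pos);
      const r = len(d);
      const f = repulsion / (r * r);
      for (let k = 0; k < 3; k++) {
        forces[i][k] += (d[k] / r) * f;
        forces[j][k] -= (d[k] / r) * f;
      }
    }
  }
  // Spring attraction along edges pulls connected nodes together.
  for (const [i, j] of edges) {
    const d = sub(nodes[j].pos, nodes[i].pos);
    for (let k = 0; k < 3; k++) {
      forces[i][k] += spring * d[k];
      forces[j][k] -= spring * d[k];
    }
  }
  // Simple Euler integration step.
  for (let i = 0; i < nodes.length; i++)
    for (let k = 0; k < 3; k++) nodes[i].pos[k] += forces[i][k] * dt;
  return nodes;
}
```

Iterating a step like this until the forces settle is what produces the layouts in the images; the server runs the iterations and streams the resulting positions to the clients.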

In the initial testing/debugging of the HTML5/WebGL client, I was able to get 14-16 frames per second using Firefox 5 (beta).  Greater frame rates were achievable in testing with Chrome.

To speed the development of the HTML5/WebGL client, I made use of Paul Brunt's GLGE WebGL library, which is an amazing piece of work in itself.  Currently, N-Triples over HTTP is used to communicate between the clients and the server, but WebSockets are being explored.

The OpenSimulator client avoids the use of the standard OpenSim object inventory for object handling by using an RDF store with dereferenceable URIs.

Hopefully, in the next couple of weeks I will have color and variable node sizes debugged.


Haylyn - Collaborative 3D Semantic Web Visualization and Analytics (Formerly Nexus)

Haylyn is an experimental collaborative 3D Semantic Web visualization tool being built with WebGL and OpenSimulator clients to test various ideas and design concepts in visualization, software design, and algorithms.

Some key paradigms and principles are being followed in Haylyn's design:
1) Must be collaborative - all visualizations must be sharable in real-time across multiple clients regardless of location.
2) All-RDF - rather than use any custom formats or internal data representations, RDF is used throughout Haylyn's architecture.  Haylyn consumes, internalizes, and exports everything as RDF - client/server communications are in RDF, user and client sessions are in RDF, cursor position and directional vectors are in RDF, and even the visualizations themselves are in RDF, which allows them to be tightly coupled with the original data itself.
3) Explore 3D - many graph layout programs use 2D layouts, but 3D is being explored in Haylyn, including 4D (time-based).
4) If a best-practice dogma is encountered, I follow this quote:

“Do not go where the path may lead, go instead where there is no path and leave a trail.”
— Ralph Waldo Emerson

Molecular visualization is achievable in Haylyn because of its ontology-driven visualization model.  The added benefit in doing it this way is that other semantic data sources can be linked and referenced while searching for or working with particular structures.  In addition, since Haylyn is driven by a SPARQL query engine (Jena ARQ), molecular selection criteria become more flexible by allowing a SPARQL query to be used to pick which parts of a structure are acted upon for display or modification.  Haylyn is not limited to molecular visualization, but will be able to visualize various semantic datatypes (FOAF, DOAC, etc.) from multiple data sources.

