Nexus Project: WebGL Client/Server Communications Test using RDF over WebSockets

A video from July 25, 2011 showing the first successful test run of my Nexus Project's WebGL client/server communications using HTML5 WebSockets rather than HTTP polling.  This visualization shows Friend of a Friend (FOAF) RDF graph data being displayed in 3D as its layout is being determined by a 3D force-directed layout algorithm.  I got tired of digging up the video on my iPhone to show people, so I decided to post it.  There have been many things done since this video (latest browser support, Jetty 8, GLGE 0.9, speed improvements, and better screen capture than my iPhone too ;-)  I have been considering a different RDF serialization rather than N-TRIPLES, since it is hopelessly uncompressed, but it made for the easiest implementation since N-Triples parsers are easy to write in JavaScript.  Jena also supports N-TRIPLES serialization, so nothing had to be done on the server end of things.  I was just at ISWC 2012 in Boston and it was suggested to me to use Turtle RDF (I was also considering JSON-LD or even a binary RDF format), but honestly, the speed of N-TRIPLES is sufficient for now and I would rather work towards a first release of the software.  It's too alluring to endlessly tinker (and I love to tinker, by the way).
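To give a feel for why N-Triples parsing is so easy client-side, here is a minimal sketch of a line parser built around one regular expression.  This is just an illustration, not the actual Nexus parser, and it skips escape handling a production parser would need:

```javascript
// One N-Triples term: an IRI, a blank node, or a (possibly typed or
// language-tagged) literal. Escape sequences inside literals are only
// partially handled -- this is a sketch, not a full parser.
const TERM = /<[^>]*>|_:[A-Za-z][A-Za-z0-9]*|"(?:[^"\\]|\\.)*"(?:\^\^<[^>]*>|@[A-Za-z-]+)?/;
const TRIPLE = new RegExp(
  '^\\s*(' + TERM.source + ')\\s+(' + TERM.source + ')\\s+(' + TERM.source + ')\\s*\\.\\s*$'
);

// Parse one line of N-Triples text into its three terms,
// or return null if the line is not a well-formed triple.
function parseTripleLine(line) {
  const m = TRIPLE.exec(line);
  return m ? { s: m[1], p: m[2], o: m[3] } : null;
}
```

Each line of the serialization is independent, which is exactly what makes streaming it over a socket and parsing it incrementally so simple.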


Nexus Upgraded from Jetty 7 to Jetty 8 now supporting WebSocket RFC 6455

Nexus has been upgraded from Jetty 7 to the latest Jetty 8 which supports the RFC 6455 version of the WebSocket protocol.  This allows the use of the latest Firefox 13 and Google Chrome browsers and, of course, all of the improvements to both browsers, especially the HTML5 and WebGL support.  I've been holding back on updating because of the number of revisions the WebSocket protocol has been going through.  I wanted to spend my time on core Nexus development rather than just bumping the WebSocket revision support up a notch.  Fortunately, the revisions needed to the Jetty parts of the Nexus code were easy.  Thank you Jetty team!  An added benefit to the code update was the removal of a non-thread-safe queue I was using which was causing intermittent crashes when using multiple clients on the Nexus server.


Building HTML5 pages one RDF Triple at a Time

Alright, it doesn't look like much, but there is something special about the web dialog that is being displayed over the 3D graph - it was built triple by triple.  I had reached the point in developing Nexus where I needed to have interactive dialogs for display control and user-input.  Up until this point, I had been working with 3D objects and taking SPARQL commands via a text box at the bottom of the screen, but easier control over visualizations was needed.  Nexus client/server communications work by passing the RDF data that represents the 3D visual displays from the server to the client as RDF over the HTML5 WebSockets protocol, with the client sending any responses back to the server over the same bi-directional WebSockets connection.
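As a rough sketch of what that loop can look like on the browser side (the "/nexus" endpoint path and the callback wiring are my assumptions here, not the actual Nexus API):

```javascript
// A WebSocket message may carry several N-Triples lines at once;
// split the payload into individual non-empty triple lines.
function splitTriples(payload) {
  return payload.split('\n').map(l => l.trim()).filter(l => l.length > 0);
}

// Open the bi-directional connection. Incoming triples are handed to
// the visualization layer via onTriple; responses go back out with
// ws.send() over the very same socket.
function connect(host, onTriple) {
  const ws = new WebSocket('ws://' + host + '/nexus');  // hypothetical endpoint
  ws.onmessage = (event) => splitTriples(event.data).forEach(onTriple);
  return ws;
}
```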

But how to handle HTML5 web dialogs?  Also, in keeping with one of the Nexus design principles "it must be collaborative", how would I keep HTML5 dialogs synchronized between multiple clients?  Another Nexus design principle is "it must be RDF".  If you think about it, HTML, CSS, SVG, and RDF are all just data in the end, but four different styles of data formats.  If we have RDF, why not just make them all RDF?  I had looked around and people had asked the question of how to represent actual HTML pages as RDF years ago, but the only comprehensive work I could find on the subject is TopQuadrant's SPARQL Web Pages.  TopQuadrant represents SPARQL (as SPIN - SPARQL as RDF) and HTML/CSS as RDF and then processes the combination of RDF data server-side to render HTML pages (in the same fashion as JSP, ASP, PHP) and then passes the rendered HTML/CSS pages to the client.  TopQuadrant even has RDF ontologies to represent HTML 4.01, CSS, and SVG Tiny 1.2.

But, I wanted to put a slightly different "spin" on this.  ;-)  I decided I wanted to take the web pages represented as RDF on the server and then pass that data to the client as RDF over the WebSockets and then render the web pages (or fragments) triple-by-triple using the HTML Document Object Model (DOM) client-side.  I created my own HTML/CSS ontologies since TopQuadrant's are weighted towards their SPIN technology.  The above image is a 3D depiction of Tim Berners-Lee's FOAF file as a 3D graph.  The black dialog was built by executing the following SPARQL query against the graph to count the number of occurrences of each predicate used in the RDF graph.  This is a precursor dialog to allow a user to remove uninformative predicates from the view (for example removing triples that represent gender when you know you are looking at a same-sex population of people).
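A minimal sketch of the triple-by-triple rendering idea.  The vocabulary here (ex:tag, ex:text, ex:child) is a stand-in I made up for illustration, not the actual Nexus HTML ontology, and the sketch builds an HTML string so it stays self-contained; in the browser the same walk calls document.createElement and appendChild instead:

```javascript
// Recursively render the element identified by `id` from a flat list
// of {s, p, o} triples. Children are linked with ex:child triples,
// so nested markup falls out of a simple depth-first walk.
function renderNode(id, triples) {
  const about = triples.filter(t => t.s === id);
  const get = p => (about.find(t => t.p === p) || {}).o;
  const tag = get('ex:tag') || 'div';
  const text = get('ex:text') || '';
  const children = about.filter(t => t.p === 'ex:child')
                        .map(t => renderNode(t.o, triples));
  return '<' + tag + ' id="' + id + '">' + text + children.join('') + '</' + tag + '>';
}
```

Because every structural fact is a triple, the same fragment can be rebuilt, patched, or queried with SPARQL like any other graph data.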

SPARQL Query to determine numbers of different predicates:
select distinct ?p (count(?p) as ?count)
where {graph ?g {?s ?p ?o}}
group by ?p
order by desc(?count)

After the execution of this SPARQL query, I follow the same model as I do in the 3D visualizations: you never see your actual data, you see a visual representation of it.  By having this visualization layer over the actual data, it allows for multiple visualizations of the same thing at the same time.  It also allows for an amazing level of flexibility since visual data can be manipulated by a variety of SPARQL update queries against a combination of the visual data and the actual data (both being represented as RDF).  The results are wrapped in HTML/CSS (an RDF version of it), which then creates the final RDF graph, which can be viewed in this link (HTML/CSS as RDF).  This is the actual data that is sent over the WebSockets connection to build the dialog.  The client then reads the data triple by triple, creates the HTML fragment (wrapped in a DIV), and adds it to the existing web page.  Each row of the table in RDF has <nex:clickable> set to true on it.  What is this for?  It tells the client to attach an onclick handler to the row.  All rows have a unique ID set.  The onclick handler simply sends a single triple back to the server to indicate which object (row) has been clicked.  For example:

<> <html:event> <html:hasBeenClicked>

It is then up to the server to determine how to respond to the end-user's click.  What I am working on at the moment is having the background color set to a "highlight" color, which takes a single triple sent to the client.  Keep in mind that another client may be attached to this visualization at the same time, and that same triple can also be sent to keep that client's version of the dialog in sync with the client who did the clicking.  There is also no reason why multiple clients (on different browsers) cannot click on different rows of the same dialog at the same time so that everyone can pick what they want to select.  Singular triple changes allow for a lot of flexibility, as well as an "AJAX-like" experience, since no screen refreshes are required on any of the concurrent clients.
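The whole click round trip can be sketched in a few lines.  The triple follows the <html:event> <html:hasBeenClicked> example above; the function names are illustrative, not the actual Nexus client API:

```javascript
// Build the single N-Triples line that reports a click on a row.
// <rowId> <html:event> <html:hasBeenClicked> .
function clickTriple(rowId) {
  return '<' + rowId + '> <html:event> <html:hasBeenClicked> .';
}

// Attach an onclick handler to a row flagged <nex:clickable>. The click
// becomes one triple sent over the shared WebSocket; the server decides
// the response (e.g. a highlight triple rebroadcast to every client
// viewing this dialog, keeping them all in sync).
function makeClickable(row, ws) {
  row.onclick = () => ws.send(clickTriple(row.id));
}
```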

Other updates since last blog post:

1) The force-directed layout engine has been rewritten to take advantage of multiple cores.  This has greatly accelerated layout computations since I work on a 6-core computer most of the time.
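The actual engine is server-side and splits the pairwise work across cores; as a single-threaded sketch of the math inside one iteration (the constants are illustrative, not the real tuning):

```javascript
// One iteration of a basic force-directed layout: every pair of nodes
// repels (inverse-square), every edge attracts like a spring, and each
// node then moves a small step along its net force.
function layoutStep(nodes, edges, repulsion = 1000, spring = 0.01, step = 0.1) {
  const force = nodes.map(() => ({ x: 0, y: 0, z: 0 }));
  // Repulsion between all node pairs.
  for (let i = 0; i < nodes.length; i++) {
    for (let j = i + 1; j < nodes.length; j++) {
      const dx = nodes[i].x - nodes[j].x,
            dy = nodes[i].y - nodes[j].y,
            dz = nodes[i].z - nodes[j].z;
      const d2 = dx * dx + dy * dy + dz * dz + 1e-6;  // avoid divide-by-zero
      const f = repulsion / d2, d = Math.sqrt(d2);
      force[i].x += f * dx / d; force[i].y += f * dy / d; force[i].z += f * dz / d;
      force[j].x -= f * dx / d; force[j].y -= f * dy / d; force[j].z -= f * dz / d;
    }
  }
  // Spring attraction along each edge (pairs of node indices).
  for (const [a, b] of edges) {
    const dx = nodes[b].x - nodes[a].x,
          dy = nodes[b].y - nodes[a].y,
          dz = nodes[b].z - nodes[a].z;
    force[a].x += spring * dx; force[a].y += spring * dy; force[a].z += spring * dz;
    force[b].x -= spring * dx; force[b].y -= spring * dy; force[b].z -= spring * dz;
  }
  // Integrate: nudge each node along its accumulated force.
  nodes.forEach((n, i) => {
    n.x += step * force[i].x; n.y += step * force[i].y; n.z += step * force[i].z;
  });
  return nodes;
}
```

The inner pair loop is the expensive part, which is why it parallelizes so well: the pairs can be partitioned across cores and the forces summed afterwards.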

2) The "classic" RDF reification model, which allows Nexus to make statements about a specific triple's literals and predicates, has been replaced with a Named Graph version of RDF reification.  The notation is much easier to work with than rdf:Statement, rdf:subject, rdf:predicate, rdf:object styled reification.  This was done after the newer Apache Jena library was added to Nexus.
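To illustrate the difference (the prefixes, graph name, and ex:certainty property below are just examples):

```
# Classic reification: four extra triples just to name the statement
_:st  rdf:type       rdf:Statement .
_:st  rdf:subject    <#tim> .
_:st  rdf:predicate  foaf:name .
_:st  rdf:object     "Tim Berners-Lee" .
_:st  ex:certainty   "0.9" .

# Named Graph version (TriG): the triple lives in its own graph,
# and statements are made about the graph name instead
ex:g1 { <#tim> foaf:name "Tim Berners-Lee" . }
ex:g1 ex:certainty "0.9" .
```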

3) Creation of a threaded 3D visualization "controller" - the earlier version of Nexus worked SPARQL command by SPARQL command.  The new version can process triples in batches, as well as run layouts in separate threads, which also enables incremental updates to the current visualization.  Essentially, a user can watch a graph layout occur as it happens.  This has been very helpful in debugging since I can see live what is actually happening.

4) XY/XZ client rotations and zooming, with the ability to alt-click a new center of rotation to allow "camming" through the 3D simulations.  Fun with quaternions!

5) COLLADA duck avatar representation of different clients.  This is needed to let a client know where other clients are collaboratively "looking" in the 3D visualization.  The duck was less boring than a ball or a cube or an arrow. And it makes my kids laugh.  :-)



RDF Triples over HTML5 WebSockets

From the beginning, I wanted Nexus to be a collaborative visualization system allowing multiple clients in multiple locations to see the same visualizations in real-time.  The issue that arises here is knowing "where" in the 3D semantic web visualization the other clients (people/avatars) are and what direction they are looking in.  In the 3D digital world, you have the concept of a "camera".  This is essentially your point-of-view in a particular 3D simulation.  As the camera moves, your view of the model changes as well.  In order to know where the other clients are in the simulation, the camera position and rotation data on all clients are converted to RDF triples and then sent to the Nexus server to be resent and synchronized to all other clients.  Nexus eats, breathes, and internalizes everything as RDF.  HTTP polling would not work well as a transport for these triples, especially with a dozen or more clients all trying to synchronize with each other.  The solution is sending the RDF N-Triples using the HTML5 WebSocket protocol.
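A sketch of what that conversion can look like.  The vocabulary (nex:position, nex:rotation) and the packing of coordinates into one literal are assumptions for illustration, not the actual Nexus ontology:

```javascript
// Serialize a client's camera pose (position vector + rotation
// quaternion) as two N-Triples lines, ready to send to the server
// for rebroadcast to every other client.
function cameraTriples(clientUri, pos, quat) {
  const lit = s => '"' + s + '"';
  return [
    '<' + clientUri + '> <nex:position> ' + lit(pos.x + ' ' + pos.y + ' ' + pos.z) + ' .',
    '<' + clientUri + '> <nex:rotation> ' + lit(quat.x + ' ' + quat.y + ' ' + quat.z + ' ' + quat.w) + ' .'
  ].join('\n');
}

// On each camera move, something like:
//   ws.send(cameraTriples(myClientUri, camera.position, camera.quaternion));
```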

What are WebSockets?  The WebSocket protocol is a bi-directional, full-duplex communications protocol that is part of the HTML5 specification.  WebSockets allow my WebGL clients to talk back and forth with the Nexus server without resorting to http polling.  I will be adding WebSockets to my OpenSimulator client as well.

I've embedded Jetty in Nexus so Apache Tomcat is no longer necessary to run Nexus, which simplifies the deployment of the Nexus server software.  Jetty also has a nice clean HTML5 WebSockets implementation and allows me to do both HTTP and WebSockets on the same IP address and port.  Nexus client/server communications are all just streams of RDF triples going in both directions using the HTML5 WebSockets protocol.

Here is my poster from the 2011 Gordon Conference on Visualization in Science and Education a couple of weeks ago, where I presented the progress so far on Nexus.



3D RDF FOAF in WebGL-HTML5 linked to OpenSimulator

The adjacent image is of Tim Berners-Lee's FOAF file imaged with a new HTML5 / WebGL client I am developing for my Nexus RDF visualization server.  WebGL allows for sophisticated 3D graphics within a web browser with no plug-in required.  The visualization is in 3D with a layout determined by a force-directed algorithm driven by the Nexus server.  The below color image is also Tim Berners-Lee's FOAF file, imaged in the same fashion, but from within an OpenSimulator region.  The twist is that both images are created from the same server session.  In other words, the session is occurring concurrently in the HTML5/WebGL client and the OpenSimulator region, allowing multiple users in the OpenSimulator region to collaborate in real-time with multiple HTML5 / WebGL clients.

In the initial testing/debugging of the HTML5 / WebGL client, I was able to get 14-16 frames per second using Firefox 5 (beta).  Greater frame rates were achievable in testing with Chrome.

To speed the development of the HTML5/WebGL client, I made use of Paul Brunt's GLGE WebGL library, which is an amazing piece of work in itself.  Currently, N-Triples over HTTP is used to communicate between the clients and the server, but WebSockets is being explored.

The OpenSimulator client avoids the use of the standard OpenSim object inventory for object handling by using an RDF store with dereferenceable URIs.

Hopefully, in the next couple of weeks I will have color and variable node sizes debugged.

