We have a web-based research prototype that allows for multi-browser interaction across multiple devices. For example, interactions in a browser on a tablet are reflected in a browser on my laptop. Since this is still in development, I also run the server on my laptop.
This was all fine until we introduced the security service into our development branch (provided by another partner in our consortium). The system now uses CAS authentication, which relies on a specific domain name rather than just an IP address. For example, if the system was called sausages, we would need to connect via the URL http://sausages:9090. To solve this locally, we just modify the hosts file on development machines. The issue is that I can no longer connect from the tablet, since the tablet has no idea how to resolve the domain name ‘sausages’. I could root the tablet and modify its hosts file, but I’d prefer not to go down that route since I’d have to modify it every time my laptop’s IP address changes.
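For reference, the hosts file entry on each development machine is a single line (the IP shown is just an example; it points at whichever machine is running the server):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
192.168.0.10    sausages
```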
The solution I came up with was to use dnsmasq as a lightweight DNS server on my laptop. Since we’re already using Docker in the project, it was easy enough to add it to our existing compose file. However, there is still the issue that the DNS lookup for sausages would need to change every time my laptop’s IP changed.
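The relevant dnsmasq directive is a one-liner: answer any query for ‘sausages’ with the laptop’s address (again, the IP here is just an example, and it is exactly this value that would otherwise need editing whenever the laptop’s IP changed):

```
# dnsmasq.conf: resolve 'sausages' to the laptop running the server
address=/sausages/192.168.0.10
```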
To address this, I created a small network (literally) using a TP-Link TL-WR702N wireless router (see below). This thing is 57mm × 57mm × 18mm in size and can be powered from USB. I configured its DHCP settings so that my laptop always gets the same IP, and set that IP as the DNS server address handed out to clients. Then I plug the router into my laptop’s USB port and connect both the laptop and tablet to it. Hey presto! Without changing the config of either the laptop or tablet, I can go to http://sausages:9090 in the browser on the tablet and connect to the server on the laptop (resolved via dnsmasq on the laptop).
The only disadvantage is that it takes my laptop off my main network. However, since I can connect to the router either wirelessly or wired, I can always use whichever interface is free to stay connected to two networks at once.
Data analysts must often attend to several perspectives on a dataset concurrently. A common example is when the data have many attributes that carry spatial, temporal and other descriptive characteristics. Analysts need approaches that enable these many perspectives to be considered concurrently, so that they can build a comprehensive, multi-faceted understanding of phenomena. In this work, we propose a design framework for producing composite faceted views that incorporate different levels of visual abstraction for multiple perspectives. Fluid transitions and selective varying of these abstractions encourage concurrent analysis across perspectives. The software, which can be seen in the video below, was developed in Java using the Processing.org graphics library.
Saif Hossenbaccus, a final-year student whom I’m supervising for his dissertation, is investigating high-resolution gaming and developed the Generic Space Shooter game. Saif used these two weeks to get feedback on an early prototype.
To test whether I could get Unity3D applications working smoothly on the MDX Powerwall, I used a car racing demo (https://www.assetstore.unity3d.com/#/content/10, created by Morten Sommer). I modified the demo such that the user input was synchronised across three instances of the application (one running on each graphics card). The end result is a racing game running at a resolution of 15360×4320 pixels (~66 million) at well over 60 frames per second. You can see some jitter on the right hand side of the car where the synchronisation isn’t quite working properly.
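The actual sync code lives in C# inside Unity, but the idea is simple enough to sketch (here in Java, with illustrative field names that are my assumption, not the real code): each frame, the master instance packs its input state into a small fixed-size packet and broadcasts it, and the other two instances decode it and feed it to their local copy of the game.

```java
import java.nio.ByteBuffer;

// Sketch of a fixed-size input-state packet broadcast between instances.
// The frame number lets a receiver detect when it has drifted behind.
public class InputState {
    public final int frame;       // frame number on the master instance
    public final float steering;  // -1..1
    public final float throttle;  // 0..1

    public InputState(int frame, float steering, float throttle) {
        this.frame = frame;
        this.steering = steering;
        this.throttle = throttle;
    }

    // Pack into a 12-byte datagram payload (big-endian).
    public byte[] encode() {
        ByteBuffer buf = ByteBuffer.allocate(12);
        buf.putInt(frame).putFloat(steering).putFloat(throttle);
        return buf.array();
    }

    public static InputState decode(byte[] payload) {
        ByteBuffer buf = ByteBuffer.wrap(payload);
        return new InputState(buf.getInt(), buf.getFloat(), buf.getFloat());
    }
}
```

Because only input (not full game state) is shared, each instance still runs its own physics, which is exactly where the small divergences visible as jitter can creep in.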
Now that we know Unity3D applications work, we can use it as a platform for building bespoke, high-resolution, interactive 3D environments, for example to visualise geographic or astronomical data.
P.S. The silhouette of my very static head is there for you to get a sense of scale.
The MDX Powerwall is a 66 million pixel display that I have designed and built at Middlesex University. It is constructed of 18 Dell 27″ monitors, each with a resolution of 2560×1440, giving a total of 15360×4320. It is powered by a single PC thanks to three AMD Radeon 6870 Eyefinity graphics cards.
The CRISIS project aimed to deliver both field exercise and command post exercise training. The XVR simulation tool allows us to do the former, so I was required to create the latter. The command post tool was developed during a two-week placement at E-Semble, where we were able to collaborate on the interactions between XVR and the command post tool, including the sending of streaming video data and enter/exit command post events.
The tool is developed in Java, and makes use of a Swing UI, the Processing graphics library, and an ActiveMQ connection for communicating with XVR and synchronising multiple instances of the command post (in a multi-user training environment). You can see the tool in the video below.
During the 3rd Visual Analytics Summer School, hosted at Middlesex University, Rick Walker and I gave a two-hour tutorial on R, an open-source environment for statistics and graphics. The aim was to demonstrate the capabilities of R to those who haven’t used it before. Feel free to try the tutorial. All the material you need can be found here.
After spending two days working on a conference paper submission to IEEE VAST, I decided to give myself a Friday afternoon project. I took some of the earlier Microsoft Kinect work I had done and connected it to the XVR simulation software that we’re using as part of the FP7 CRISIS project.
The XVR software is built in Unity 3D, which means we can’t directly use the Microsoft SDK. As a workaround, I produce and consume Kinect messages through an ActiveMQ broker. This means we could also make use of the voice recognition software that Microsoft offers.
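The benefit of going via a broker is that the wire format can be anything both sides can parse. As an illustration only (this field layout is my assumption, not the actual CRISIS format), a Kinect joint update can be flattened into a simple text message that a Unity-side ActiveMQ consumer splits back apart:

```java
import java.util.Locale;

// Hypothetical text encoding for one Kinect joint position update,
// e.g. "HandRight;0.310;1.120;2.450" (joint name, then x, y, z in metres).
public class JointMessage {
    public static String encode(String joint, float x, float y, float z) {
        // Locale.US keeps the decimal separator a '.' regardless of system locale
        return String.format(Locale.US, "%s;%.3f;%.3f;%.3f", joint, x, y, z);
    }

    public static float[] decodePosition(String msg) {
        String[] parts = msg.split(";");
        return new float[] {
            Float.parseFloat(parts[1]),
            Float.parseFloat(parts[2]),
            Float.parseFloat(parts[3])
        };
    }
}
```

The producer side publishes these to a topic from the Kinect SDK process; the Unity side only needs an ActiveMQ client and a string parser, no Microsoft SDK.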
The end users of the CRISIS project made it clear that voice communication was a key requirement for the final training system, primarily because trainee communications needed to be made available for analysis during after-action review. For that reason, I was required to develop a bespoke voice communication tool. It was developed in C# using the .NET framework, and made use of socket connections for the voice data, a RESTful service client for storing the data, an ActiveMQ producer for sending real-time events, and a WCF service for receiving push-to-talk events. The images below show the client and server interfaces, and the tool being used by Icelandic airport emergency response teams during a training event held at Reykjavik airport in June. The tool has also been deployed to the Portuguese airport authority, based at Lisbon airport, and to British Transport Police, based in London.
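Sending audio over a plain socket needs some framing, since TCP gives you a byte stream with no chunk boundaries. A minimal sketch of one common approach (my reconstruction for illustration, not the shipped C# code): prefix each push-to-talk audio chunk with its length, so the receiver knows where one chunk ends and the next begins.

```java
import java.nio.ByteBuffer;

// Length-prefixed framing for raw PCM chunks sent over a socket:
// 4-byte big-endian length, followed by the audio bytes.
public class VoiceFraming {
    public static byte[] frame(byte[] pcm) {
        ByteBuffer buf = ByteBuffer.allocate(4 + pcm.length);
        buf.putInt(pcm.length).put(pcm);
        return buf.array();
    }

    public static byte[] unframe(byte[] framed) {
        ByteBuffer buf = ByteBuffer.wrap(framed);
        byte[] pcm = new byte[buf.getInt()];
        buf.get(pcm);
        return pcm;
    }
}
```

The same framing also makes it straightforward to write each chunk to storage via the REST client as it passes through the server, which is what enables the after-action review playback.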