LOLA, Ultragrid and MVTP updates
- Claudio Allocchio (GARR)
- Milos Liska (CESNET)
- Sven Ubik (CESNET)
Project leads for LOLA, UltraGrid and CESNET’s 4K streaming appliance (MVTP) share recent developments for each platform. The presentation will cover the recent release of LOLA v1.5.0 and its new features, and look ahead to LOLA v2.0 and multisite functionality. Milos Liska will lead a walkthrough of the new UltraGrid graphical user interface, which opens up the application to less experienced users. Sven Ubik will share the most recent work on the MVTP appliance with a live demonstration of the technology in action.
Claudio is one of the world leaders in this field, and one of the inventors/developers of LOLA (background presentation about LOLA). Today’s presentation is to update the community on the technological advances in the new version LOLA v1.5.0. [JB comment – the tech developments are remarkable, including fast JPEG compression and 120FPS video support – which I find astounding in an internet-based video service, let alone one where super low latencies are crucial].
Claudio asks us to ‘dare the impossible’, and cites a Strauss violin performance as an example challenging the assumption that cross-Atlantic collaboration is impossible, stating that 40ms between London and NYC is achievable [JB comment – this is a game changer, because 40ms is right on the cusp of what’s possible between musicians – in contrast to the 91ms Boston<—>Edinburgh session I experienced last year].
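For context on why 40ms London–NYC is plausible, here is a rough back-of-the-envelope sketch of the physical latency floor of such a link. The distance, fibre refractive index and route factor are illustrative assumptions, not measured values from the presentation:

```python
# Rough, illustrative estimate of the propagation-delay floor for a
# London-NYC fibre link. All figures below are assumptions for the
# sake of the sketch, not values quoted by the presenters.

C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47           # typical refractive index of optical fibre
GREAT_CIRCLE_KM = 5_570      # approx. London-New York great-circle distance
ROUTE_FACTOR = 1.2           # real fibre paths run longer than great circles

fibre_speed = C_VACUUM_KM_S / FIBRE_INDEX            # ~204,000 km/s in glass
one_way_ms = GREAT_CIRCLE_KM * ROUTE_FACTOR / fibre_speed * 1000

print(f"one-way propagation: {one_way_ms:.1f} ms")   # roughly low-30s of ms
```

Under these assumptions the propagation delay alone is in the low 30s of milliseconds, which shows how little budget remains for capture, compression and display in a 40ms end-to-end target.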
Some LOLA users ask if they can walk around the musician at the other end, and Claudio talks about support for ambisonics in the system. Some music teachers want to ‘teach their way’ and he alludes briefly to the possibilities enabled by avatars and MoCap gloves.
One big user ask is ‘can we make it smaller?’ [LOLA is a PC-based Windows desktop machine with very demanding hardware interface requirements, and these can shut out entry-level users]. Claudio’s team has developed a ‘home studio’ version of LOLA using fast desktop interfaces, and for the first time (via Boot Camp) users can run LOLA on a Mac. A PC laptop version of a full LOLA system, based around a fast gaming PC, now comes in at around $1300. Caveats – this is for teaching and for emergency situations, and a tower PC is preferable – Claudio advises ‘don’t use a laptop system to run a demo for ministers and politicians’! He ends by demoing a live Mac-based LOLA connection between Miami and Trieste (the camera connection works really well, but it’s a public holiday there today, so we see flashing lights in a darkened network room!).
Our next presenter is Milos Liska of CESNET, updating us on the latest developments in UltraGrid (an open-source, software-based low-latency HD solution that handles video up to 8K). Like LOLA, UltraGrid is moving towards 120FPS video support and laptop versions, and is also working to incorporate SMPTE standards and 4K 60FPS video transmissions. UltraGrid cannot yet match LOLA in terms of latency, but it can now achieve 30ms end-to-end video display latency, and under normal conditions around 40ms one-way video latency (80ms round trip). Audio latency of 15ms can be achieved reliably.
Milos provides a lot of detail about the new spec and the next steps, and describes some projects, including a collaboration with the Manhattan School of Music, and some pan-European multi-country dance/visual/music performances – Near In The Distance 3.
Like LOLA, UltraGrid is now looking at accessibility and the cost of entry, so Milos talks about the UltraGrid kit, an affordable solution aimed at high schools.
Finally, Milos summarises a Telematic Performance Format – a series of concerts between City University Hong Kong, UC San Diego and Stanford University. The Zurich-Hong Kong latency was quite high at 270ms (because they used the regular Internet rather than direct Internet2 site to site connections). He remarks on how successful the concerts were despite this level of latency.
Our final presenter this morning is Sven Ubik, also from CESNET, with a session about Low Latency Hardware. He begins with an overview of programmable hardware: hardware designers write a specification, which allows the hardware to be configured in an FPGA – ‘Field Programmable Gate Array’ – effectively making the hardware infinitely reprogrammable. CESNET created its own hardware using this method, called MVTP. Impressively, it provides 8 channels of 24-bit audio at up to 96kHz and adds less than 1ms of latency. He describes the TICO codec (more info at CESNET). Sven notes that modern SDI cameras generally have a frame buffer that adds a single frame of latency, i.e. ~30ms at 30FPS. He cites a buffer-less Dream Chip Technologies camera with a latency of 100 microseconds (less than the physical shutter time). This hardware solution, Sven states, can achieve a latency of ~3ms between camera and display, albeit with substantial sensitivity to network jitter.
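Sven’s frame-buffer point is simple arithmetic: buffering one whole frame delays transmission by one frame period. A minimal sketch (the function name is mine, for illustration only):

```python
# Illustration of the latency cost of frame buffering: a camera that
# buffers N whole frames delays the signal by N frame periods.
def frame_buffer_latency_ms(fps: float, frames_buffered: int = 1) -> float:
    """Latency added by buffering whole frames, in milliseconds."""
    return frames_buffered * 1000.0 / fps

for fps in (30, 60, 120):
    print(f"{fps:>3} FPS: {frame_buffer_latency_ms(fps):.1f} ms per buffered frame")
    # 30 FPS -> 33.3 ms, 60 FPS -> 16.7 ms, 120 FPS -> 8.3 ms
```

This also explains the community’s push towards 120FPS: even where a buffer is unavoidable, a higher frame rate shrinks each buffered frame’s latency cost by the same factor.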
We hear a video of a Trio for Flute, Clarinet and Bassoon, Op. 32 (Kaspar Kummer), in which each player was able to successfully navigate the latency with a good A/V experience in the room. Sven observes that mixing was psychologically important for the players – that is, the positioning of the sound sources in the stereo image relative to the mic/camera/player positions. During the questions, he notes that the players wanted open headphones so they could hear themselves (a common request from musicians working in latency-affected ensemble environments).
The priorities for the project, in order of importance, are: ease of use, cost, and bandwidth. Ease of use comes first because, although bandwidth may increase over time, people will remain the same!