Blog posts

Handcrafted versus deep learning features

Hello there!

Here is my new paper on using traditional algorithms versus deep learning to annotate the quality of experience (QoE) of video calls. The gist of the paper follows:

Handcrafted: The paper first focuses on constructing new objective QoE metrics to predict the QoE of users. Typical objective metrics are video bitrate, frames per second, and the amount of stall during the video. It uses standard machine learning models such as logistic regression or SVM to map these objective metrics to ground-truth QoE ratings collected from users. It turns out that these metrics do not scale across a large set of videos, devices, or applications, so we move to a generalized model that predicts the QoE using deep learning.
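
To make the handcrafted approach concrete, here is a minimal sketch (not the paper's actual pipeline) of mapping such objective metrics to user QoE labels with a simple classifier; the feature values and labels below are made up for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical handcrafted features per session: bitrate (kbps), frames per second, stall ratio
X = np.array([[3500, 30, 0.00],
              [1200, 24, 0.10],
              [ 800, 15, 0.25],
              [4500, 60, 0.01],
              [ 600, 12, 0.30],
              [2500, 30, 0.05]])
# Ground-truth QoE labels collected from users (1 = acceptable, 0 = poor)
y = np.array([1, 0, 0, 1, 0, 1])

clf = LogisticRegression().fit(X, y)
print(clf.predict([[2000, 30, 0.08]]))   # predicted QoE label for a new session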

Deep learning: We use the same set of videos and the ground-truth QoE labels collected from users, and let the deep learning model learn the features itself. It turns out the model learns better features than the handcrafted ones. More interestingly, a combined model that uses both handcrafted and deep-learned features produces much better accuracy (> 95%) in labeling the QoE of videos.
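
As a toy sketch of the combined-feature idea (my own simplification, not the paper's model): concatenate the handcrafted metrics with an embedding learned by some deep network, then train one classifier on top. The data below is random stand-in; the accuracy reported above refers to the paper's real model, not this toy.

import numpy as np
from sklearn.linear_model import LogisticRegression

n_videos = 200
rng = np.random.default_rng(0)
deep_features = rng.normal(size=(n_videos, 128))   # stand-in for a learned 128-d embedding per video
handcrafted = rng.uniform(size=(n_videos, 3))      # stand-in for bitrate, fps, stall ratio
labels = rng.integers(0, 2, size=n_videos)         # stand-in for user QoE labels

# Combined model: concatenate both feature sets and classify
X = np.concatenate([handcrafted, deep_features], axis=1)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))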

Please see the full paper at the following link for more details.

Handcrafted versus Deep Learning Classification for Scalable Video QoE Annotation


Frame Rendering in Virtual Reality is Expensive on Mobile Devices

The following three research papers show that the key bottleneck in an untethered VR experience on mobile devices is rendering a frame, and here is how they try to tackle this problem. Together they form a nice series of work on practical VR experiences on mobile devices.

FlashBack: MobiSys’2016

FlashBack is one of the first papers to study the impact of running VR applications on mobile devices. The authors show that frame rendering is very compute-intensive and that cheap mobile hardware does not meet this compute requirement. Instead, they argue that mobile devices often have a lot of underutilized storage, which can be used to memoize pre-rendered frames. They show the effectiveness of their method by pre-rendering, offline, all possible orientations of frames for the Viking Village VR application and storing them in a three-tier hierarchical cache (GPU memory, RAM, SSD).
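
A toy sketch of the memoization idea (my own simplification, not FlashBack's code): quantize the head orientation into a key and look it up across the cache tiers, fastest first. Tier names and the quantization step are illustrative only.

class TieredFrameCache:
    def __init__(self):
        # fast -> slow tiers, mirroring GPU memory, RAM, SSD
        self.tiers = [("gpu_mem", {}), ("ram", {}), ("ssd", {})]

    def _key(self, yaw, pitch, step=5):
        # Quantize orientation so nearby poses map to the same pre-rendered frame
        return (round(yaw / step) * step, round(pitch / step) * step)

    def lookup(self, yaw, pitch):
        key = self._key(yaw, pitch)
        for name, store in self.tiers:
            if key in store:
                return name, store[key]
        return None, None            # miss: would need on-demand rendering or a fetch

    def preload(self, tier_index, yaw, pitch, frame):
        self.tiers[tier_index][1][self._key(yaw, pitch)] = frame

cache = TieredFrameCache()
cache.preload(1, 90, 0, "frame_bytes_for_90_0")   # rendered offline, stored in the RAM tier
print(cache.lookup(92, 1))                        # ('ram', 'frame_bytes_for_90_0')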

Comments: Very specific to one VR application. Does not scale well.

Furion: MobiCom’2017

Furion is a fantastic paper that investigates how current mobile devices can satisfy the rendering requirements of VR applications in cooperation with a remote rendering server. The authors also argue that future mobile hardware alone will not satisfy the rendering compute requirements, because improvements in mobile hardware are nearly saturated. Further, they show that wireless networks cannot cope if all rendering is placed on a remote server and the raw frame data is streamed over the wireless link: raw frames require a data rate of tens of Gbps, far beyond existing and upcoming wireless technologies such as 802.11ad, ay, or ax.
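
To see why raw-frame streaming is infeasible, a back-of-the-envelope calculation (my own illustrative numbers: a 4K-per-eye stereo stream at 60 fps, uncompressed 24-bit RGB) already lands in the tens of Gbps.

# Back-of-the-envelope: bandwidth needed to stream raw stereo frames (illustrative numbers)
width, height = 3840, 2160      # assumed per-eye resolution
bits_per_pixel = 24             # uncompressed RGB
fps = 60
eyes = 2

gbps = width * height * bits_per_pixel * fps * eyes / 1e9
print(f"{gbps:.1f} Gbps")       # ~23.9 Gbps, well beyond what 802.11ad/ay/ax can sustain in practice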

Instead, Furion characterizes different VR applications and figures out a way to split the rendering into foreground and background activities. They show that the foreground activities are lightweight in rendering load but not predictable, so they can be placed on the mobile device, whereas the background activities are heavyweight in rendering load but predictable, so they can be placed on a remote server. The remote server also applies compression to reduce the data sent over the wireless link, and the mobile device decodes the data and displays it.
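
A rough sketch of that placement decision (illustrative only, not Furion's implementation): interactive, unpredictable tasks are rendered locally, while predictable environment rendering is offloaded to the server.

# Illustrative split-rendering placement rule
def place_render_task(task):
    # Foreground: player-triggered, lightweight, unpredictable -> render on the device
    # Background: environment, heavyweight, predictable -> render on the remote server
    if task["interactive"] and not task["predictable"]:
        return "local_gpu"
    return "remote_server"

tasks = [
    {"name": "player_interaction_effect", "interactive": True,  "predictable": False},
    {"name": "environment_panorama",      "interactive": False, "predictable": True},
]
for t in tasks:
    print(t["name"], "->", place_render_task(t))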

Cutting the Cord: MobiSys’2018

This paper follows Furion by offloading the complete rendering computation to a remote server. The authors argue that the mobile device's display rate should be kept in sync with the remote server so as to avoid missing frames and unnecessary frame display delays. They propose a parallel rendering and streaming pipeline in which the remote server first renders the left-eye image of the VR application and starts streaming it (encode, transfer, and decode) while simultaneously rendering the right-eye image. Further, they propose a remote-VSync-driven rendering approach to synchronize the remote server's frame rendering with the mobile device's display rate.
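
A toy sketch of the parallel render-and-stream idea (my own simplification, with fake timings): while the left-eye frame is being encoded and streamed on one thread, the right-eye frame is rendered concurrently.

import time
from concurrent.futures import ThreadPoolExecutor

def render(eye):
    time.sleep(0.008)            # pretend GPU rendering takes ~8 ms
    return f"{eye}_frame"

def stream(frame):
    time.sleep(0.006)            # pretend encode + transfer + decode takes ~6 ms
    return f"sent {frame}"

with ThreadPoolExecutor(max_workers=2) as pool:
    left = render("left")                      # render the left eye first
    sending = pool.submit(stream, left)        # stream it in the background ...
    right = render("right")                    # ... while rendering the right eye
    print(sending.result(), "and", stream(right))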

EnvironmentError: IOError: usb rx6 transfer status: LIBUSB_TRANSFER_OVERFLOW

If you are facing the following problem, it means the FPGA tried to load too much data into the DMA FIFO in the Cypress FX3 (USB chip) and crashed it. This might be resolved in later versions of UHD, but you can use the workaround below, as suggested by Michael here.

terminate called after throwing an instance of 'uhd::io_error'
  what():  EnvironmentError: IOError: usb rx6 transfer status: LIBUSB_TRANSFER_OVERFLOW
Aborted (core dumped)

Use the command utils/b2xx_fx3_utils --reset-device if you ever have to Ctrl-C your application.
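
If you Ctrl-C your scripts often, a small wrapper like the sketch below (my own suggestion; the application function is a placeholder and the utility path is assumed to match your UHD build) can run the reset automatically.

import subprocess, time

def run_my_uhd_application():
    # placeholder for your actual UHD / GNU Radio application
    time.sleep(60)

try:
    run_my_uhd_application()
except KeyboardInterrupt:
    # reset the FX3 so the next run does not hit LIBUSB_TRANSFER_OVERFLOW
    subprocess.run(["utils/b2xx_fx3_utils", "--reset-device"])
    raise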

One of the best books to understand the history of humankind

A favorite from my recent summer reads.
Sapiens: A Brief History of Humankind, by Yuval Noah Harari.
The book walks through human history from the Stone Age and the Ice Age to the Agricultural Revolution and the Scientific Revolution. I was fascinated to learn how Homo sapiens spread around the world during the Ice Age and became separated into virtually distinct worlds until a few hundred years ago. It was one of the most thought-provoking reads I have ever had.
I highly recommend it to each and every book lover. Happy #BookLoversDay 🙂 🙂