Pleco Phase 03: Finally drivable

I wrote about Phase 02 in September 2013 and listed goals for Phase 03.

Most of those goals are now implemented, and it is finally a joy to drive the car remotely.

I attached a wide-angle lens to the existing camera and that made a huge difference. The camera crops a bit of the lens’s 180° field of view, but clearly the wider the better. I also made my first 3D print to attach the camera to the servos more securely.

The Tegra 3 based Ouya was replaced with a Tegra K1 based Jetson TK1, which made it easy to stream low-latency H.264 video over the network. Due to USB 2.0 and network limitations, the best quality video stream that can be enabled from the controller application is 800×600 at 2 Mbps.
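
For reference, streaming with GStreamer on the Jetson could look roughly like the sketch below. This is a minimal sketch and not the project’s actual pipeline: the encoder element (omxh264enc from the Linux4Tegra gst-omx plugin), its properties and the target address are assumptions.

    /* Minimal low-latency H.264 streaming sketch for the Jetson TK1.
     * Element names and properties are assumptions, not the project's
     * actual pipeline. */
    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        GError *error = NULL;
        GstElement *pipeline = gst_parse_launch(
            "v4l2src device=/dev/video0 "
            "! video/x-raw,width=800,height=600,framerate=30/1 "
            "! omxh264enc bitrate=2000000 "        /* 2 Mbps */
            "! rtph264pay config-interval=1 "
            "! udpsink host=192.168.1.10 port=5000 sync=false",
            &error);
        if (!pipeline) {
            g_printerr("Failed to create pipeline: %s\n", error->message);
            return 1;
        }

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        g_main_loop_run(g_main_loop_new(NULL, FALSE));
        return 0;
    }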

The Microsoft LifeCam drops its frame rate in low-light conditions, so I added some V4L2 controls. Brightness is set to the minimum to keep the frame rate at its maximum. In addition, the controller application now has manual focus and manual zoom.
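
Setting those controls boils down to a few V4L2 ioctls. The sketch below shows the idea; the control IDs are standard V4L2, but the concrete values, and whether the camera accepts all of them, are device-specific assumptions.

    /* Sketch: set brightness/focus/zoom on a V4L2 camera. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int set_ctrl(int fd, unsigned int id, int value)
    {
        struct v4l2_control ctrl = { .id = id, .value = value };
        if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0) {
            perror("VIDIOC_S_CTRL");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* Minimum brightness keeps the exposure time short, which in
         * turn keeps the frame rate at its maximum in low light. */
        set_ctrl(fd, V4L2_CID_BRIGHTNESS, 30);      /* assumed device minimum */

        /* Manual focus and zoom, exposed in the controller application. */
        set_ctrl(fd, V4L2_CID_FOCUS_AUTO, 0);
        set_ctrl(fd, V4L2_CID_FOCUS_ABSOLUTE, 10);  /* example value */
        set_ctrl(fd, V4L2_CID_ZOOM_ABSOLUTE, 1);    /* example value */
        return 0;
    }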

The latency is low enough to drive over long distances. In a local wired network, the application round-trip latency (from the controller application to the slave application and back) is 1-2 ms. Over my local WiFi it increases to 4 ms. In the video above the car was connected to an LTE network and the communication was routed over a distance of 800 km (the relay is in Stockholm and I live near Helsinki). The latency was around 60-80 ms and it did not introduce any noticeable delay. A friend of mine even drove the car from Gold Coast (that’s in Australia, almost 15000 km away!). The latency was around 450 ms, and while the delay was obvious, he was still able to drive.

I also measured the actual visual delay, i.e. how long it takes for the controller application to show what the camera sees (“photon to display”). I did that by taping an LED to the camera and using an external microcontroller with two light sensors. One sensor was taped over the LED on the camera and the other over the monitor showing the controller application. I then measured the time difference between the two sensors triggering.
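
The measurement loop itself is conceptually simple. A sketch of the idea is below; adc_read() and time_us() are hypothetical stand-ins for whatever ADC and timer API the actual microcontroller provides.

    /* Sketch of the photon-to-display measurement: wait for the LED
     * taped to the camera to light up, then wait for it to appear on
     * the monitor, and report the difference. */
    #include <stdint.h>
    #include <stdio.h>

    extern uint16_t adc_read(int channel);  /* hypothetical: light sensor ADC */
    extern uint32_t time_us(void);          /* hypothetical: microsecond clock */

    #define THRESHOLD 512                   /* light/dark decision level */

    void measure_once(void)
    {
        while (adc_read(0) < THRESHOLD)     /* sensor taped to the LED */
            ;
        uint32_t t_led = time_us();

        while (adc_read(1) < THRESHOLD)     /* sensor taped to the monitor */
            ;
        uint32_t t_display = time_us();

        printf("photon to display: %lu us\n",
               (unsigned long)(t_display - t_led));
    }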

The visual latency was about 105 ms with 1 ms network latency. The camera is supposed to run at 30 FPS, so it can introduce at most 33 ms of latency, and my monitor is 60 Hz, so it can introduce up to 16 ms. On average the camera thus adds about 17 ms and the display about 8 ms, which means something else still introduced roughly 80 ms of latency. That 80 ms may be the sum of getting the video stream from the USB camera to the Jetson, encoding it, sending it, buffering a frame, decoding it and finally showing it. While it might be possible to get the latency down, it is already low enough for remote driving within a 1000 km range :)

The project will never end and there are already some goals for Phase 04:

  • GPS
  • AHRS based car orientation visualisation
  • 60 FPS (stereo?) camera
  • Control the webcam based on driver’s head orientation
  • Improved gamepad etc. controls

XBMC for Tegra with full HW acceleration

XBMC running at Ultra HD resolution on Jetson TK1.

Thanks to Markus Tavenrath there is finally a fully accelerated Kodi (previously known as XBMC) for Tegra K1 based devices like the Jetson TK1. Some of the Kodi patches are already upstreamed and the rest will hopefully follow soon. The Jetson supports X.Org very well, and things like XRandR based TV refresh rate changes work perfectly.

I have a simple lab power supply (Mastech DF17132) and made some quick power measurements with it. I only estimated the typical reading instead of running multiple tests and calculating averages, so the numbers should be close to the truth but not scientifically accurate. I replaced the power supply that came with the Jetson with the lab supply, so the consumption of the Jetson’s external power supply is not included in the measurements.

The ondemand CPU frequency governor does not ramp the CPU clocks high enough for a smooth Kodi UI and video playback, so all the Kodi tests were run with all four CPU cores forced online and with the performance governor.
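
Forcing the cores online and selecting the governor happens through the standard Linux sysfs interface; a minimal sketch (run as root) is below.

    /* Sketch: bring CPUs 1-3 online and set the performance governor
     * for all four cores via the standard sysfs interface. */
    #include <stdio.h>

    static void write_sysfs(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return; }
        fputs(value, f);
        fclose(f);
    }

    int main(void)
    {
        char path[128];
        for (int cpu = 0; cpu < 4; cpu++) {
            if (cpu > 0) {  /* cpu0 is always online */
                snprintf(path, sizeof(path),
                         "/sys/devices/system/cpu/cpu%d/online", cpu);
                write_sysfs(path, "1");
            }
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor",
                     cpu);
            write_sysfs(path, "performance");
        }
        return 0;
    }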

My USB hub, USB keyboard and USB mouse consume about 0.84 W. The very noisy fan takes about 0.72 W. That is about 1.5 W included in the numbers below, though arguably it should not be, as everybody uses different USB devices and nobody would use that fan in an HTPC setup.

In the full screen video test cases the TV’s refresh rate was changed to 24 Hz; when the Kodi UI is visible, the refresh rate is 60 Hz on the 1080p TV and 30 Hz on the 2160p TV (an HDMI 1.4 limitation).

1920×1080 Sony TV

Test case                           Power consumption (W)
Slim login                          3.36
XFCE desktop, idle, ondemand        3.36
XFCE desktop, idle, performance     3.72
Kodi main menu                      5.40
Kodi full screen video 1080p24      5.16
Kodi full screen video 1080p60      6.00
Kodi full screen video 2160p24      5.40

In the table below the resolution used in all cases is Ultra HD, i.e. 2160p or 3840×2160. The term “4K” is misleading, as people usually mean Ultra HD and not, for example, 4096×2160.

3840×2160 LG TV

Test case                                         Power consumption (W)
XFCE desktop, idle, ondemand                      3.72
XFCE desktop, idle, performance                   4.08
Kodi main menu                                    5.52
Kodi full screen video 1080p24 (max GPU clocks)   6.96
Kodi full screen video 2160p24 (max GPU clocks)   6.96

I think decoding 1080p24 at 5.16 W (or about 3.7 W without the USB peripherals and the fan) is pretty good! Add two watts and you get 2160p.

After watching different trailers over and over, I would definitely like to see movies produced at higher frame rates and distributed at higher bitrates instead of bumping the resolution from Full HD to Ultra HD. That 1080p60 looked awesome.

The biggest complaint I have about the Jetson is the fan. The noise is way too loud even for development use, not to mention for using the Jetson as an HTPC in the living room. There is some discussion about using passive coolers on the NVIDIA forums.

For installation instructions and other tips see the Installing Kodi wiki page.

Pleco Phase02 completed

It’s been two years already since I posted about Phase01 being completed. It was a simple tracked vehicle using cheap DC motors, with a hull built from Meccano parts:

Pleco Phase01

For Phase02 I decided to go with a ready-made car and bought an RC rock crawler. It’s about four times the size of Phase01:

Pleco Phase02 open

Pleco Phase02

The software is roughly the same as it was for Phase01. I’ve fixed a lot of bugs, simplified some things and added a bunch of new features. The slave has a sonar next to the webcam that measures the distance to whatever the camera is pointing at. The slave also measures the current consumption and the battery voltage. All the details are shown in the GUI.

The hardware, on the other hand, has changed a lot. Instead of handling the servos directly from Linux, there is now a separate self-designed Cortex-M4 based microcontroller board driving the PWM signals. In the future the microcontroller can update the PWM signals in real time based on other sensors, without having to worry about latency issues in Linux.
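
As an example of what the board does, mapping a steering command to a standard RC servo pulse is just a linear scaling from a normalized value to a 1000-2000 µs pulse in a 20 ms (50 Hz) period. The sketch below shows the idea; timer_set_pulse_us() is a hypothetical stand-in for the board’s PWM timer API, not my actual code.

    /* Sketch: map a normalized steering command to an RC servo pulse. */
    #include <stdint.h>

    extern void timer_set_pulse_us(int channel, uint16_t us);  /* hypothetical */

    #define SERVO_MIN_US 1000   /* full left  */
    #define SERVO_MID_US 1500   /* center     */
    #define SERVO_MAX_US 2000   /* full right */

    /* steering in [-1000, 1000], e.g. a scaled gamepad axis */
    void set_steering(int16_t steering)
    {
        if (steering < -1000) steering = -1000;
        if (steering >  1000) steering =  1000;

        /* Linear map: -1000..1000 -> 1000..2000 us */
        uint16_t us = (uint16_t)(SERVO_MID_US + (int32_t)steering * 500 / 1000);
        timer_set_pulse_us(0, us);
    }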

Linux now runs on a Tegra 3 based Ouya gaming device. The Ouya is not as convenient for robotics as the Gumstix was, but personally I think Tegra has better Linux support than the OMAPs. And since there is now a separate control board for the motors and sensors, the Ouya only needs a USB connection to the control board.

The Ouya needs 12 volts while the motors need 5 volts, and I’m expecting to need 3.3 volts as well, so I now have three switching regulators connected to the battery for the needed voltage levels. There is also a USB hub, because both the control board and the webcam are USB devices, and there are lots of cables. So I’m not quite able to fit everything nicely inside the car, but almost.

Controlling an RC car with a keyboard is inconvenient, so I’ve added support for gamepads. I’m currently using a wireless gamepad from Logitech:

Logitech F710 gamepad
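
On Linux, reading the gamepad is straightforward through the classic joystick interface. A minimal sketch is below; the device node and the axis/button numbering depend on the gamepad, and this is not the project’s actual input code.

    /* Sketch: read axis and button events from a Linux joystick device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <linux/joystick.h>

    int main(void)
    {
        int fd = open("/dev/input/js0", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct js_event e;
        while (read(fd, &e, sizeof(e)) == sizeof(e)) {
            if ((e.type & ~JS_EVENT_INIT) == JS_EVENT_AXIS)
                printf("axis %u: %d\n", e.number, e.value);  /* -32767..32767 */
            else if ((e.type & ~JS_EVENT_INIT) == JS_EVENT_BUTTON)
                printf("button %u: %s\n", e.number, e.value ? "down" : "up");
        }
        return 0;
    }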

When driving in a local WiFi network there’s no noticeable latency at all. It’s still not really possible to drive based on the camera alone: the viewing angle is too narrow, turning the camera is cumbersome and the video quality needs some tweaking.

Now that I’ve made some nice progress with the project, I have clear goals for Phase03:

  1. Disassemble the webcam to get the size and weight down and to be able to attach fisheye lenses to it.
  2. 3D print a frame for the electronics and the webcam.
  3. Tweak the video quality and camera controls so that it would finally be possible to drive based on the video stream.
  4. Control the webcam based on driver’s head orientation?

So there are plenty of interesting things to learn :)

Oh, and I’ve moved the code to GitHub.