Pleco Phase 03: Finally drivable

I wrote about Phase 02 in September 2013 and listed goals for Phase 03.

Most of those goals are now implemented, and it is finally a joy to drive the car remotely.

I attached a wide angle lens to the existing camera and that made a huge difference. The camera crops a bit off the 180° field of view, but clearly the wider the better. I also made my first 3D print to attach the camera to the servos properly.

The Tegra 3 based Ouya was replaced with a Tegra K1 based Jetson TK1, which made it easy to stream low latency H.264 video over the network. Due to USB 2.0 and network limitations, the best quality video stream that can be enabled from the controller application is 800×600 at 2 Mbps.
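
On the Jetson, streaming like this can be set up with a short GStreamer pipeline. Below is a minimal sketch of the sender side; the element names (e.g. the omxh264enc hardware encoder), the caps and the addresses are illustrative assumptions, not the project's actual code.

/* Minimal sketch of an H.264 sender pipeline on the Jetson.
 * Element names, caps and addresses are assumptions for illustration. */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! "
        "video/x-raw,width=800,height=600,framerate=30/1 ! "
        "omxh264enc bitrate=2000000 ! "         /* HW encoder on Tegra K1 */
        "rtph264pay config-interval=1 ! "
        "udpsink host=192.168.1.10 port=5000",  /* controller address */
        &error);
    if (!pipeline) {
        g_printerr("Failed to create pipeline: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}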

The Microsoft Lifecam drops its frame rate in low light conditions, so I added some V4L2 controls. The brightness is set to the minimum to get the maximum FPS, and the controller application now also has manual focus and manual zoom.
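
From C, those controls boil down to VIDIOC_S_CTRL ioctls. Here is a sketch of the idea; the control values are illustrative and a real implementation would first query the valid ranges with VIDIOC_QUERYCTRL.

/* Sketch of setting the V4L2 controls mentioned above.
 * The values are illustrative; query the driver for real ranges. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_ctrl(int fd, unsigned int id, int value)
{
    struct v4l2_control ctrl = { .id = id, .value = value };
    if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) == -1) {
        perror("VIDIOC_S_CTRL");
        return -1;
    }
    return 0;
}

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* Minimum brightness keeps the exposure short and the FPS at maximum. */
    set_ctrl(fd, V4L2_CID_BRIGHTNESS, 0);

    /* Disable autofocus so focus and zoom can be set from the controller. */
    set_ctrl(fd, V4L2_CID_FOCUS_AUTO, 0);
    set_ctrl(fd, V4L2_CID_FOCUS_ABSOLUTE, 10);
    set_ctrl(fd, V4L2_CID_ZOOM_ABSOLUTE, 1);

    return 0;
}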

The latency is low enough for driving over long distances. In a local wired network, the application round trip (from the controller application to the slave application and back) is 1-2 ms. Over my local WiFi it increases to 4 ms. In the video above the car was connected to an LTE network and the communication was routed over a distance of 800 km (the relay is in Stockholm and I live near Helsinki). The latency was around 60-80 ms and it did not introduce any noticeable delay. A friend of mine even drove the car from the Gold Coast (that is in Australia, almost 15,000 km away!). The latency was around 450 ms and, while the delay was obvious, he was still able to drive.

I also measured the actual visual delay, i.e. how long it takes for the controller application to show what the camera sees (“photon to display”). I did that by taping an LED to the camera and using an external microcontroller with two light sensors: one sensor was taped to the LED at the camera and the other to the monitor showing the controller application. I then measured the time difference between the two sensors.
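
On the microcontroller the measurement boils down to timestamping the two sensors. Here is a sketch of the idea; micros(), read_adc(), led_on(), the channels and the threshold are hypothetical placeholders, not the actual code.

/* Sketch of the photon-to-display measurement.
 * micros(), read_adc() and led_on() are hypothetical helpers. */
#include <stdint.h>

#define SENSOR_LED     0    /* sensor taped to the LED at the camera */
#define SENSOR_MONITOR 1    /* sensor taped to the monitor           */
#define THRESHOLD      512  /* ADC level counting as "light on"      */

extern uint32_t micros(void);           /* monotonic microsecond clock */
extern uint16_t read_adc(int channel);  /* read one light sensor       */
extern void     led_on(void);           /* light the LED at the camera */

uint32_t measure_visual_latency_us(void)
{
    led_on();

    /* Timestamp when the LED actually lights up... */
    while (read_adc(SENSOR_LED) < THRESHOLD)
        ;
    uint32_t t_led = micros();

    /* ...and when the controller application shows it on the monitor. */
    while (read_adc(SENSOR_MONITOR) < THRESHOLD)
        ;
    return micros() - t_led;  /* "photon to display" latency */
}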

The visual latency was about 105 ms with a 1 ms network latency. The camera is supposed to run at 30 FPS, so it can introduce at most 33 ms of latency, and my monitor is 60 Hz, so it can introduce up to 16 ms. On average those two account for only about 25 ms, which leaves roughly 80 ms introduced by something else. That 80 ms is probably the sum of getting the video stream from the USB camera to the Jetson, encoding it, sending it, buffering a frame, decoding it and finally showing it. While it might be possible to get the latency down further, it is already low enough for remote driving within a 1000 km range :)

The project will never end, and there are already some goals for Phase 04:

  • GPS
  • AHRS based car orientation visualisation
  • 60 FPS (stereo?) camera
  • Control the webcam based on driver’s head orientation
  • Improved gamepad etc. controls

Wireless MCUs and power consumption, part II

In Part I I described the real project for the radio: a wireless power consumption meter. But it is supposed to be a low power MCU, so why not run it indefinitely on a renewable power source?

Solar power is the easiest form of renewable energy to harness, so I decided to try it out. That is of course not an option in the dark closet where the smart electricity meter is, so I ended up designing a modular solar powered wireless soil moisture sensor.

Measuring the moisture of the soil is only a secondary objective. The real goal is to see if I can keep the radio running continuously throughout the Finnish winter. In December 2013 we had a total of 24 minutes of sunshine during a period of 18 days, so having solar power available should not be taken for granted.

Another issue is the temperature, as common batteries must not be charged below zero degrees Celsius. I decided to go with a large solar panel from Sparkfun and 10 F super capacitors from Digikey. The panel provides a maximum of 9.15 V, which nicely matches four 2.7 V super capacitors in series (4 × 2.7 V = 10.8 V, safely above the panel's maximum).

The voltage of a solar panel drops quickly if too much power is drawn from it. All larger solar panel systems use maximum power point tracking (MPPT), which tracks the voltage and limits the power draw if the voltage drops. I tried to find MPPT solutions for small systems but did not find anything suitable for this project. Adafruit's Solar Lithium Ion/Polymer charger is based on the MCP73871, but that seems to be designed for batteries alone. There is also the bq25570 from TI, which otherwise looks very well suited for this kind of project, but it seems to be designed for even smaller solar panels.

Solar powered wireless soil moisture sensor.

The final software will take a measurement once an hour and sleep the rest of the time in the deepest sleep mode. In sleep the whole device draws only a few microamps.
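
As a rough sanity check of whether the capacitor bank can bridge the dark days, here is a back-of-the-envelope sketch. The 2 V cut-off and the 5 µA average draw are my assumptions, and radio transmissions and regulator losses are ignored.

/* Back-of-the-envelope runtime estimate for the capacitor bank.
 * The cut-off voltage and the average current are assumptions. */
#include <stdio.h>

int main(void)
{
    double c_single = 10.0;            /* F, one super capacitor           */
    double c_bank   = c_single / 4.0;  /* F, four identical caps in series */
    double v_full   = 9.15;            /* V, panel maximum                 */
    double v_empty  = 2.0;             /* V, assumed brown-out level       */
    double i_avg    = 5e-6;            /* A, assumed average sleep current */

    double charge  = c_bank * (v_full - v_empty);  /* Q = C * dV */
    double seconds = charge / i_avg;               /* t = Q / I  */

    printf("Bank %.1f F, usable charge %.1f C\n", c_bank, charge);
    printf("Runtime without sun: ~%.0f days\n", seconds / 86400.0);
    return 0;
}

That works out to roughly 40 days without any sun, so on paper the nearly sunless 18-day stretch of December 2013 would be survivable.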

I ran some initial tests with a five minute interval, behind glass windows and without direct sunshine. During the last day I tried to keep the panel pointed directly at the sun whenever possible, and that clearly shows in the graph below.

Solar panel and super capacitor voltages over time.

For now I have just printed out the values and drawn a graph of the measurements. For the electricity measurements I was using Sparkfun's Phant on my own server, but it seemed to lack features and stability. Currently I am testing ThingSpeak and letting them handle the hosting. It is open source, so I could move it to my own server as well. So far ThingSpeak looks good.

Based on the tests with the shorter measurement interval, I am hopeful that the radio will be able to run through the long and dark Finnish winter.

Wireless MCUs and power consumption, part I

Electricity is expensive and being low power is environmentally friendly. And what is more motivating than seeing your total power consumption in real time? There are plenty of commercial products out there already, but will they give you the raw data so that you can plot nice graphs? Not many will. And of course self-made is always… self-made.

Lately in Finland the old electricity meters have been replaced with smart meters, and these seem to have LEDs showing the power consumption. The model I have has an LED that blinks once for every consumed watt-hour, so it is easy and safe to calculate the power consumption by counting the blinks: the number of blinks counted in a minute times 60 gives the average power in watts.

I am using a self-designed CC430 based 433 MHz radio board and a simple photoresistor to count the blinks of the LED in the apartment's smart meter. Every minute it sends the count to another radio hooked to a Raspberry Pi, which in turn sends the value to a server on the network. A simple JavaScript based web page then shows the data with a minute or two of latency.

JavaScript based power consumption info page.

There is no power outlet that I could use for the radio board, so it runs on two AA batteries. I put everything in a small plastic box. Below is an image of the box before I placed it next to the smart meter.

The power measurement unit (2014-07-03).

The radio part of the CC430 is turned completely off when not sending and the MCU part is sleeping. The only part running is the comparator, which compares the photoresistor's output to a predefined threshold. When the threshold is exceeded, an interrupt fires and I count the interrupts. Once a minute I reset the counter, turn on the radio and send the count over the air. Running the comparator takes some 200 µA, and I think I might be able to simply read the photoresistor's output as a GPIO, which should drop the consumption below 10 µA. Even with the higher consumption it has been running well for a few months now, as can be seen in the graph below.

2xAA voltage level over time.
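
In simplified form, the counting and sending logic looks something like the sketch below. The vector names and intrinsics follow TI's MSP430 compiler conventions and radio_send() is a hypothetical helper, so this is not the actual firmware.

/* Sketch of the blink counting logic on the CC430 (not the actual firmware). */
#include <msp430.h>
#include <stdint.h>

static volatile uint16_t blink_count = 0;

extern void radio_send(uint16_t count);  /* hypothetical: power the radio
                                            core up, transmit, power down */

/* The comparator fires once per LED blink, i.e. once per consumed Wh. */
#pragma vector = COMP_B_VECTOR
__interrupt void comparator_isr(void)
{
    blink_count++;  /* count and go straight back to sleep */
}

/* A timer wakes the main loop once a minute. */
#pragma vector = TIMER1_A0_VECTOR
__interrupt void minute_timer_isr(void)
{
    __bic_SR_register_on_exit(LPM3_bits);
}

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;  /* stop the watchdog */
    /* ... comparator, timer and radio setup omitted ... */

    for (;;) {
        __bis_SR_register(LPM3_bits | GIE);  /* sleep with the radio off  */
        uint16_t count = blink_count;        /* woken by the minute timer */
        blink_count = 0;
        /* 1 blink = 1 Wh, so blinks per minute * 60 = average power in W */
        radio_send(count);
    }
}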

Having to replace batteries is inconvenient, and another project of mine will be running outside where I can use a solar panel. More on that in Part II.

My first 3D print

The mechanics have always been a problem in my robotics projects, but now that 3D printing is such a hot topic I decided to try it out.

It seems that 3D printing services use the STL format and many of the popular 3D modeling applications can export it, so there are plenty of applications to choose from. I wanted to use Blender for the modeling, as I have used it briefly in the past and would like to know it better. It was easy to shape my model with boolean operators, but it turned out that even though the model looks nice on the screen, that is not enough for real world printing: I had some broken surfaces, holes, etc. in there. As I am a Blender rookie, I ended up creating the simplest cylinder and shaping my model by modifying its subdivided surfaces. Easy and quite fun :)

Now that I had the model, the next step was to print it. The nearby public libraries provide 3D printing services for free or nearly free, with printers from MakerBot and MiniFactory. After a couple of visits I realized that it is not that easy to jump into the 3D printing world with them. You need to know how to tune the parameters of the software, how to clean and prepare the printer, how to preheat it, and so on. You also need to model your creation in such a way that the printer can actually print it: the plastic is hot and not very strong during printing, so you may need to add some supporting structures while modeling. At least some of the 3D printing applications can add the supporting structures automatically, but I was told they might not be very good at it.

Below is a photo showing the first three layers of an automatically created supporting structure. That looked OK, but the actual model started to go wrong already on the first layer, so the print was canceled.

First layers of a 3D printed supporting structure.

After a couple of miserable failures, somebody hinted that Shapeways is a much easier way to get 3D prints, and the price is low enough. They seem to be using selective laser sintering, and I didn't need to worry about supporting structures or fine tuning parameters. They also have a bunch of different materials and colors to choose from. I chose “Coral Red Strong & Flexible Polished”:

Original part and the 3D printed version with my additions.
The 3D printed part fits perfectly.

I have used cable ties and Meccano parts earlier, but I needed something more lightweight (and better looking), and Shapeways might very well be the solution for me.

Pleco Phase02 completed

It’s been two years already since I posted about Phase01 being completed. It was a simple tracked vehicle using cheap DC motors, and the hull was built from Meccano parts:

Pleco Phase01

For Phase02 I decided to go with a ready-made car and bought an RC rock crawler. It’s about four times the size of Phase01:

Pleco Phase02 open

Pleco Phase02

The software is roughly the same as it was for Phase01. I’ve fixed a lot of bugs, simplified some things and added a bunch of new features. The slave now has a sonar next to the webcam, and it measures the distance to whatever the camera is pointing at. The slave also measures the current consumption and the battery voltage level. All details are shown in the GUI.

The hardware, on the other hand, has changed a lot. Instead of handling the servos directly from Linux, there is now a separate self-designed Cortex-M4 based microcontroller board driving the PWM signals. In the future the microcontroller will be able to update the PWM signals in real time based on other sensors, without having to worry about latency issues in Linux.
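
For reference, a standard hobby servo expects a 1-2 ms pulse repeated at roughly 50 Hz, so the PWM boils down to simple timer math. Here is a sketch assuming a 1 MHz timer clock; the actual register setup on the Cortex-M4 is omitted.

/* Servo PWM timing sketch; assumes a 1 MHz timer clock. */
#include <stdint.h>

#define TIMER_HZ     1000000u  /* timer tick rate (assumption)    */
#define SERVO_HZ     50u       /* standard hobby servo frame rate */
#define PULSE_MIN_US 1000u     /* one end of travel:   1.0 ms     */
#define PULSE_MAX_US 2000u     /* other end of travel: 2.0 ms     */

/* Value for the timer's period register: 20000 ticks = one 20 ms frame. */
static const uint32_t period_ticks = TIMER_HZ / SERVO_HZ;

/* Map a position in [0..1000] to the compare value for the PWM duty. */
uint32_t servo_compare_ticks(uint32_t pos)
{
    uint32_t pulse_us =
        PULSE_MIN_US + (PULSE_MAX_US - PULSE_MIN_US) * pos / 1000u;
    return pulse_us * (TIMER_HZ / 1000000u);  /* µs to ticks at 1 MHz */
}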

Linux is now running on a Tegra 3 based Ouya gaming device. The Ouya is not as convenient for robotics as the Gumstix was, but personally I think Tegra has better Linux support than the OMAPs. And since there is now a separate control board for the motors and sensors, it’s enough for the Ouya to have only a USB connection to the control board.

The Ouya needs 12 volts, the motors need 5 volts, and I’m expecting to need 3.3 volts as well, so I now have three switching regulators connected to the battery for the needed voltage levels. There is also a USB hub because of the USB based control board and webcam, plus lots of cables, so I’m not quite able to get everything nicely inside the car, but almost.

Controlling an RC car with a keyboard is inconvenient, so I’ve added support for gamepads. I’m currently using a wireless gamepad from Logitech:

Logitech F710 gamepad

When driving in a local WiFi network, there’s no noticeable latency at all. It’s still not really possible to drive based on the camera alone: the viewing angle is too narrow, turning the camera is cumbersome, and the video quality needs some tweaking.

Now that I’ve made some nice progress with the project, I have clear goals for Phase03:

  1. Disassemble the webcam to get the size and weight down and to be able to attach a fisheye lens to it.
  2. 3D print a frame for the electronics and the webcam.
  3. Tweak the video quality and camera controls so that it would finally be possible to drive based on the video stream.
  4. Control the webcam based on driver’s head orientation?

So there are plenty of interesting things to learn :)

Oh, and I’ve moved the code to GitHub.

Command-line sharing for Harmattan

I use IRC and I want to be able to share photos there easily. For the n900 I had implemented a sharing plugin and that worked nicely. When I got the n950 I of course wanted to do the same with it, but that turned out to be a difficult task.

I started to implement webupload and SSO plugins but never got them working. The biggest showstopper was the lack of documentation for the SSO part. Finally Mika Suonpää pointed me to Share UI plugins and now, only a few days later, I have the first version of one working on the n950 :)

For some reason I can’t get my icons to show; they always appear as a red square. All hints about that are most welcome, as are testing and feedback on the plugin. The plugin settings are in Settings -> Applications -> Command-line Share, where you need to enable the plugin and set the command to be run. After that the sharing plugin is visible in Gallery -> Share.

The source code can be found here and the corresponding forum thread here.

Ogg-support 1.1.1: Performance

After almost two years there’s a new version of Ogg Support in the Fremantle Extras repository.

The decoder code has changed completely. Where the old version used libvorbis and vorbisdec from the GStreamer base plugins, the new one uses libav (a fork of FFmpeg) and gst-av from Felipe Contreras. The impact on performance should be significant, because the Vorbis decoder in libav is more efficient on the n900 than libvorbis, and Felipe’s gst-av also outperforms vorbisdec.

Thanks to Felipe for doing all the hard work. I’ve just been updating the version numbers of the dependencies and tracking Bugzilla for known issues and fixes :)

Pleco Phase01 completed

I started playing with microcontrollers in 2005 and, if not at the very start then at least very quickly, I decided to aim for some sort of remote controlled Linux device with a controllable camera and digital wireless communication. Now, six years later, I have completed the first phase :)

A couple of photos of the earlier devices are shown on the project page.

After several planning iterations and code rewrites I ended up using Qt both on the remote controlled Gumstix and in the GUI controller. I decided that trying to optimize everything, from memory and CPU consumption to network bandwidth, just isn’t worth the implementation time. The most CPU intensive task is encoding the video to H.263, and that is done in the DSP. I’m running MeeGo on the Gumstix and it provides e.g. the GStreamer plugins for the DSP.

Using the Qt framework with a simple self-made protocol over UDP, I got the Phase01 code implemented quite quickly compared to my previous efforts. The protocol allows low priority packets (like periodic statistics and the video stream) to be lost, but guarantees the delivery of high priority packets (control commands etc.). Also, only the latest control command of each type is retransmitted, i.e. an old packet is not retransmitted if a newer overriding command has already been given.
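
A rough sketch of the idea follows; the packet layout, field sizes and names are my illustration, not the actual wire format.

/* Sketch of the priority protocol; layout and names are illustrative. */
#include <stdint.h>
#include <string.h>

enum priority { PRIO_LOW = 0, PRIO_HIGH = 1 };  /* lossy vs. guaranteed */

struct packet {
    uint8_t  priority;    /* PRIO_LOW: stats/video, PRIO_HIGH: commands */
    uint8_t  cmd_type;    /* e.g. motor speed, camera pan, camera tilt  */
    uint16_t seq;         /* sequence number for acks/retransmissions   */
    uint8_t  payload[32];
};

#define MAX_CMD_TYPES 8

/* Only the newest unacked command of each type is kept for retransmit;
 * an older pending command is simply overwritten by a newer one. */
static struct packet pending[MAX_CMD_TYPES];

void queue_command(const struct packet *pkt)
{
    if (pkt->priority == PRIO_HIGH && pkt->cmd_type < MAX_CMD_TYPES)
        pending[pkt->cmd_type] = *pkt;  /* replace any stale command */
}

void on_ack(uint8_t cmd_type, uint16_t seq)
{
    /* Forget the pending command only if the ack matches the latest one. */
    if (cmd_type < MAX_CMD_TYPES && pending[cmd_type].seq == seq)
        memset(&pending[cmd_type], 0, sizeof pending[cmd_type]);
}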

The controller GUI shows the state the slave sends: motor speeds, WLAN signal strength, CPU load average, and protocol statistics like the round trip time and the number of retransmissions.

Currently the motors are controlled with the a, s, d and w keys in 10% steps, and the camera is controlled by dragging on the video window with the left mouse button pressed.

Here’s a video (direct link) of Phase01. You need an HTML5 video capable browser with the Ogg Theora/Vorbis codecs.

Command line sharing plugin in extras-devel

I reflashed my device and the biggest annoyance after restoring the backup was recompiling the sharing-cli plugin as it was not in any repository. Now it is.

It has been working for me for several months without issues, although I’ve only been sharing medium size images over a decent connection. It might not succeed in sharing videos over GPRS.

For hints about the usage, see the previous post.

Command line sharing plugin for n900

Thomas Perl made a proposal for a command line sharing plugin for the n900. I had already planned to implement something like that, so I joined the project.

I pushed the first proof-of-concept quality implementation to Git a month ago. I’ve been using it with the Irssi script (in the scripts directory) to post HTTP URLs with meta information to IRC. The Irssi script needs to be modified to match the directories and IRC servers in use, and both the script and the sharing plugin are still missing most of the error checking and extra functionality. Nevertheless, they’ve worked for me for the past weeks.

For the sharing plugin I’ve configured something like the following command line:


scp %s kulve@foo.bar.fi:~/photos_incoming/%s

The %s appears twice because the local temporary file name doesn’t match the actual file name to be copied. This also assumes that SSH keys have been exchanged so that no passwords will be asked.

The Irssi script polls the incoming directory for new images. For each new file, it moves the file to the public WWW tree, reads the meta information with exiftool and prints something like this to the specified IRC channel:


Title: Description [tags] (GPS coordinates) http://foo.bar.fi/~kulve/imagename.jpg

I modified the Irssi script a bit before pushing it to Git, so no guarantees that it works. And it’s my first Irssi script ever, so it may do something odd ;)

There are no Debian packages yet, as neither the script nor the plugin has been tested properly. Comments, testing and patches are welcome.