Pleco Phase 03: Finally drivable

I wrote about Phase 02 in September 2013 and listed goals for Phase 03.

Most of the goals are now implemented, and it is finally a joy to drive the car remotely.

I attached a wide angle lens to the existing camera and that made a huge difference. The camera crops a bit off the 180° view, but clearly the wider the better. I also made my first 3D print to better attach the camera to the servos.

The Tegra 3 based Ouya was replaced with a Tegra K1 based Jetson TK1. That made it easy to stream low latency H264 video over the network. Due to USB 2.0 and network limitations, the best quality video stream that can be enabled from the controller application is 800×600 at 2 Mbps.
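On the Jetson side, something along the lines of the following GStreamer snippet does the job. This is only a sketch under my assumptions: the omxh264enc element and its bitrate property come from the Jetson's gst-omx plugins (GStreamer 1.x), and the device path and receiver address are made up.

/* Minimal low latency H264 sender sketch for the Jetson (illustrative). */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 "
        "! video/x-raw,width=800,height=600,framerate=30/1 "
        "! omxh264enc bitrate=2000000 "         /* 2 Mbps, as in the text */
        "! h264parse ! rtph264pay config-interval=1 "
        "! udpsink host=192.168.1.10 port=5000 sync=false",
        &err);
    if (!pipeline) {
        g_printerr("Failed to create pipeline: %s\n", err->message);
        g_error_free(err);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}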

The Microsoft Lifecam decreases its FPS in low light conditions, so I added some V4L2 controls. The brightness is set to the minimum to get the maximum FPS. In addition, the controller application now has manual focus and manual zoom.
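Setting these boils down to a few V4L2 ioctls. A minimal sketch; the device path and the manual focus/zoom values here are illustrative:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_ctrl(int fd, unsigned int id, int value)
{
    struct v4l2_control ctrl = { .id = id, .value = value };
    return ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0)
        return 1;

    /* Query the brightness range and force it to the minimum for max FPS. */
    struct v4l2_queryctrl q = { .id = V4L2_CID_BRIGHTNESS };
    if (ioctl(fd, VIDIOC_QUERYCTRL, &q) == 0)
        set_ctrl(fd, V4L2_CID_BRIGHTNESS, q.minimum);

    set_ctrl(fd, V4L2_CID_FOCUS_AUTO, 0);      /* disable autofocus      */
    set_ctrl(fd, V4L2_CID_FOCUS_ABSOLUTE, 0);  /* manual focus position  */
    set_ctrl(fd, V4L2_CID_ZOOM_ABSOLUTE, 100); /* manual zoom position   */
    return 0;
}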

The latency is low enough to drive over a long distance. In a local wired network, the application latency (from the controller application to the slave application and back to the controller application) is 1-2 ms. Over my local WiFi it increases to 4 ms. In the video above the car was connected to an LTE network and the communication was routed over a distance of 800 km (the relay is in Stockholm and I live near Helsinki). The latency was around 60-80 ms and it did not introduce any noticeable delay. A friend of mine even drove the car from Gold Coast (that is in Australia, almost 15000 km away!). The latency was around 450 ms, and while the delay was obvious, he was still able to drive it.

I also measured the actual visual delay, i.e. how long it takes for the controller application to show what the camera sees (“photon to display”). I did that by taping an LED to the camera and using an external microcontroller with two light sensors. One sensor was taped to the LED on the camera and the other one was taped to the monitor showing the controller application. I then measured the time difference between the two sensors.
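On the microcontroller the measurement is conceptually this simple. The helpers (led_on(), read_led_sensor(), read_monitor_sensor(), micros()) and the threshold are hypothetical, not the actual firmware:

#include <stdint.h>

#define THRESHOLD 512u  /* assumed ADC value for "light detected" */

extern void     led_on(void);
extern uint16_t read_led_sensor(void);      /* sensor taped to the camera LED */
extern uint16_t read_monitor_sensor(void);  /* sensor taped to the monitor    */
extern uint32_t micros(void);

uint32_t measure_visual_latency_ms(void)
{
    led_on();
    while (read_led_sensor() < THRESHOLD)
        ;                                   /* wait until the LED is lit      */
    uint32_t t0 = micros();
    while (read_monitor_sensor() < THRESHOLD)
        ;                                   /* wait for the LED on the screen */
    return (micros() - t0) / 1000;          /* photon-to-display delay in ms  */
}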

The visual latency was about 105 ms with a 1 ms network latency. The camera is supposed to run at 30 FPS, which can introduce at most 33 ms of latency (16.5 ms on average), and my monitor is 60 Hz, which can introduce at most 16 ms (8 ms on average). That leaves roughly 80 ms on average for everything else. That 80 ms may be the sum of getting the video stream from the USB camera to the Jetson, encoding it, sending it, buffering a frame, decoding it and finally showing it. While it might be possible to get the latency down further, it is already low enough for remote driving within a 1000 km range :)

The project will never end, and there are already some goals for Phase 04:

  • GPS
  • AHRS based car orientation visualisation
  • 60 FPS (stereo?) camera
  • Control the webcam based on the driver’s head orientation
  • Improved gamepad and other controls

Wireless MCUs and power consumption, part II

In Part I I described my real project with the radio: a wireless power consumption meter. But it is supposed to be a low power MCU, so why not run it indefinitely on a renewable power source?

Solar power is the easiest form of renewable energy, and I decided to try it out. That is of course not an option in the dark closet where the smart electricity meter is, so I ended up designing a modular solar powered wireless soil moisture sensor.

Measuring the moisture of the soil is only a secondary objective. The real goal is to see if I can keep the radio running continuously throughout the Finnish winter. In December 2013 we had a total of 24 minutes of sunshine during a period of 18 days, so having solar power available should not be taken for granted.

Another issue is the temperature, as common rechargeable batteries must not be charged below zero degrees Celsius. I decided to go with a large solar panel from Sparkfun and 10 F super capacitors from Digikey. The panel provides a maximum of 9.15 V, which nicely matches four 2.7 V super capacitors in series: they tolerate 4 × 2.7 V = 10.8 V in total, so the panel cannot overcharge them.

The voltage of a solar panel drops quickly if too much power is drawn from it. All larger solar panel systems use maximum power point tracking (MPPT), which tracks the voltage and limits the power draw if the voltage drops. I tried to find MPPT solutions for small systems but did not find anything suitable for this project. Adafruit’s Solar Lithium Ion/Polymer charger is based on the MCP73871, but that seems to be designed for batteries alone. There is also the bq25570 from TI, which otherwise looks very well suited for this kind of project, but it seems to be designed for even smaller solar panels.
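As a toy illustration of that idea (this is voltage-limited charging rather than true maximum power point tracking, and the helper functions and the 7 V knee are made up):

#include <stdint.h>

#define PANEL_MIN_MV 7000u  /* assumed knee voltage for this panel */

extern uint16_t read_panel_mv(void);       /* hypothetical ADC helper   */
extern void     set_charge_pwm(uint8_t d); /* hypothetical load control */

void charge_step(void)
{
    static uint8_t duty;
    uint16_t mv = read_panel_mv();

    if (mv < PANEL_MIN_MV && duty > 0)
        duty--;             /* panel voltage sagging: draw less power */
    else if (mv >= PANEL_MIN_MV && duty < 255)
        duty++;             /* headroom left: draw more power         */

    set_charge_pwm(duty);
}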

Solar powered wireless soil moisture sensor.

The final software will make the measurements once an hour and spend the rest of the time in the deepest sleep mode. In sleep the whole thing draws only a few microamps.

I made some initial tests with a five minute interval, behind glass windows and without direct sunshine. During the last day I tried to keep the panel pointed directly at the sun when possible, and that clearly shows in the graph below.

Solar panel and super capacitor voltages over time.

For now I have just printed out the values and created a graph of the measurements. For the electricity measurements I was using Sparkfun’s Phant on my own server, but it seemed to lack features and stability. Currently I am testing ThingSpeak and letting them handle the hosting. It is open source, so I could move it to my own server as well. So far ThingSpeak looks good.

Based on the test with the shorter measurement interval, I am hopeful that the radio will be able to run through the long and dark Finnish winter.

Wireless MCUs and power consumption, part I

Electricity is expensive and being low power is environmentally friendly. And what is more motivating than seeing your total power consumption in real time? There are plenty of commercial products out there already, but will they give you the raw data so that you can plot nice graphs? Not many of them will. And of course self-made is always… self-made.

Lately in Finland the old electricity meters have been replaced with smart meters, and they seem to have LEDs showing the power consumption. The model I have has an LED that blinks once for every consumed watt-hour, so it is easy and safe to calculate the power consumption by counting the blinks.

I am using a self-designed CC430 based 433 MHz radio board and a simple photo resistor to count the blinks of the LED in the apartment’s smart meter. Every minute it sends the count to another radio hooked up to a Raspberry Pi, which in turn sends the value to a server on the network. A simple JavaScript based web page then shows the data with a minute or two of latency.

JavaScript based power consumption info page.
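Since one blink equals one watt-hour, converting the per-minute count to average power is trivial (an illustrative helper, not the actual server code):

#include <stdio.h>

/* blinks [Wh/min] * 60 [min/h] = average power [W] */
static unsigned int blinks_to_watts(unsigned int blinks_per_minute)
{
    return blinks_per_minute * 60;
}

int main(void)
{
    /* e.g. 20 blinks during one minute means 1200 W on average */
    printf("%u W\n", blinks_to_watts(20));
    return 0;
}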

There is no power plug that I could use for the radio board so it is running on two AA batteries. I put everything in a small plastic box. Below is an image of the box before I placed it next to the smart meter.

The power measurement unit in its box.

The radio part of the CC430 is turned completely off when not sending and the MCU part is sleeping. The only part running is the comparator that compares the photo resistor’s output to a predefined threshold. When the threshold is exceeded, an interrupt fires and I count the interrupts. Once a minute I reset the counter, turn on the radio and send the count over the air. Running the comparator takes some 200 µA, and I think I might be able to just bluntly read the photo resistor’s output as a GPIO. That should drop the consumption below 10 µA. Even with the higher consumption it has been running well for a few months now, as can be seen in the graph below.

2xAA voltage level over time.
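For the curious, the GPIO variant mentioned above would look roughly like this on a CC430. This is a sketch, not the actual firmware: radio_send() stands in for the real radio code and error handling is omitted.

#include <msp430.h>
#include <stdint.h>

extern void radio_send(uint16_t count);  /* hypothetical radio helper */

static volatile uint16_t blink_count;

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;            /* stop the watchdog              */

    P1DIR &= ~BIT0;                      /* P1.0 input: photo resistor     */
    P1IES &= ~BIT0;                      /* interrupt on rising edge       */
    P1IE  |= BIT0;

    TA0CCR0  = 32768 - 1;                /* 1 s tick from the 32 kHz ACLK  */
    TA0CCTL0 = CCIE;
    TA0CTL   = TASSEL__ACLK | MC__UP;

    __bis_SR_register(LPM3_bits | GIE);  /* sleep; interrupts do the work  */
    for (;;)
        ;
}

#pragma vector = PORT1_VECTOR
__interrupt void port1_isr(void)
{
    P1IFG &= ~BIT0;                      /* one blink seen                 */
    blink_count++;
}

#pragma vector = TIMER0_A0_VECTOR
__interrupt void timer_isr(void)
{
    static uint8_t seconds;
    if (++seconds >= 60) {               /* once a minute                  */
        seconds = 0;
        radio_send(blink_count);         /* power up radio, send, sleep    */
        blink_count = 0;
    }
}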

Having to replace batteries is inconvenient and another project of mine will be running outside so I can use a solar panel. More on that in Part II.

My first 3D print

Mechanics has always been a problem in my robotic projects, but now that 3D printing is such a hot topic I decided to test it out.

It seems that 3D printing services use the STL format and many of the popular 3D modeling applications can export it, so there are plenty of applications to choose from. I wanted to use Blender for the modeling, as I have used it briefly in the past and would like to know it better. It was easy to use boolean operators to shape my model, but it turned out that even though the model looks nice on the screen, that is not enough for real world printing: I had broken surfaces, holes, etc. in there. As a Blender rookie I ended up creating the simplest cylinder and modifying its subdivided surfaces to shape my model. Easy and quite fun :)

Now that I had the model, the next step was to print it. The nearby public libraries provide 3D printing services for free or nearly free, with printers from MakerBot and MiniFactory. After a couple of visits I realized that it is not that easy to jump into the 3D printing world with them. You need to know how to tune the parameters of the software, how to clean up and prepare the printer, how to preheat it, etc. You also need to model your creation in such a way that the printer can print it: the plastic is hot during the printing and not very strong, so you may need to add some supporting structures while modeling. At least some of the 3D printer applications can add the supporting structures automatically, but I was told they might not be very good at it.

Below is a photo showing the first three layers of an automatically created supporting structure. That looked OK, but the actual model started to go wrong already on the first layer, so the printing was canceled.

First layers of a 3D printed supporting structure.

After a couple of miserable failures somebody hinted that Shapeways is a much easier way to get 3D prints and that the price is low enough. They seem to be using selective laser sintering, and I didn’t need to worry about supporting structures or fine tuning parameters. They also have a bunch of different materials and colors to choose from. I chose “Coral Red Strong & Flexible Polished”:

Original part and the 3D printed version with my additions.
The 3D printed part fits perfectly.

I have used cable ties and Meccano parts earlier, but I needed something more lightweight (and better looking), and Shapeways might very well be the solution for me.

Pleco Phase02 completed

It’s been two years already since I posted about Phase01 being completed. It was a simple track based vehicle using cheap DC motors, with a hull built from Meccano parts:

Pleco Phase01

For Phase02 I decided to go with a ready-made car and bought an RC Rock Crawler. It’s about four times the size of Phase01:

Pleco Phase02 open

Pleco Phase02

The software is roughly the same as it was for Phase01. I’ve fixed a lot of bugs, simplified some things and added a bunch of new features. The slave has a sonar next to the webcam that measures the distance to whatever the camera is pointing at. The slave also measures the current consumption and the battery voltage level. All details are shown in the GUI.

The hardware, on the other hand, has changed a lot. Instead of handling the servos directly from Linux, there’s now a separate self-designed Cortex-M4 based microcontroller board for driving the PWM signals. In the future the microcontroller can update the PWM signals in real time based on other sensors, without having to worry about latency issues in Linux.
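For reference, a standard RC servo expects a 50 Hz pulse train whose 1.0-2.0 ms high time encodes the position, so the board's job is mostly a calculation like this (my illustration, not the actual firmware):

#include <stdint.h>

/* Map a -100..100 % position command to timer compare ticks. */
static uint32_t servo_pulse_ticks(int8_t percent, uint32_t timer_hz)
{
    /* 1500 us is center; +-500 us covers the full travel */
    int32_t pulse_us = 1500 + ((int32_t)percent * 500) / 100;
    return (uint32_t)((uint64_t)pulse_us * timer_hz / 1000000);
}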

Linux is now running on a Tegra 3 based Ouya gaming device. The Ouya is not as convenient for robotics as the Gumstix was, but personally I think Tegra has better Linux support than the OMAPs. And since there is now a separate control board for motors and sensors, it’s enough for the Ouya to have only a USB connection to the control board.

The Ouya needs 12 volts, the motors need 5 volts and I’m expecting to need 3.3 volts as well, so I now have three switching regulators connected to the battery for the needed voltage levels. There is also a USB hub because of the USB based control board and webcam, and there are lots of cables. So I’m not actually able to get everything nicely inside the car, but almost.

Controlling an RC car with a keyboard is inconvenient, so I’ve added support for gamepads. I’m currently using a Bluetooth based gamepad from Logitech:

Logitech F710 gamepad
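On Linux a gamepad shows up through the kernel joystick API, so reading it is straightforward. A minimal sketch; the device path is the usual default, and mapping axes and buttons to driving commands is up to the application (this is not necessarily how my controller application does it):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/joystick.h>

int main(void)
{
    int fd = open("/dev/input/js0", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct js_event e;
    while (read(fd, &e, sizeof(e)) == sizeof(e)) {
        switch (e.type & ~JS_EVENT_INIT) {
        case JS_EVENT_AXIS:             /* e.value is -32767..32767 */
            printf("axis %u: %d\n", e.number, e.value);
            break;
        case JS_EVENT_BUTTON:
            printf("button %u: %s\n", e.number, e.value ? "down" : "up");
            break;
        }
    }
    close(fd);
    return 0;
}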

When driving in a local WiFi network there’s no noticeable latency at all. Still, it’s not really possible to drive based on the camera alone: the viewing angle is too narrow, turning the camera is cumbersome and the video quality needs some tweaking.

Now that I’ve made some nice progress with the project, I have clear goals for Phase03:

  1. Disassemble the webcam to get the size and weight down and to be able to attach a fisheye lens to it.
  2. 3D print a frame for the electronics and the webcam.
  3. Tweak the video quality and camera controls so that it would finally be possible to drive based on the video stream.
  4. Control the webcam based on the driver’s head orientation?

So there are plenty of interesting things to learn :)

Oh, and I’ve moved the code to GitHub.

Pleco Phase01 completed

I started playing with microcontrollers in 2005 and, if not at the very start, then at least very quickly I decided to aim for some sort of remotely controlled Linux device with a controllable camera and digital wireless communication. Now, 6 years later, I have completed the first phase :)

A couple of photos of the earlier devices are shown on the project page.

After several planning iterations and code rewrites I ended up using Qt both on the remote controlled Gumstix and in the GUI controller. I decided that trying to optimize everything, from memory and CPU consumption to network bandwidth, just isn’t worth the time spent implementing it. The most CPU intensive task is the video encoding to H263, and that’s done on the DSP. I’m running MeeGo on the Gumstix and it provides e.g. the GStreamer plugins for the DSP.

Using the Qt framework with a simple self-made protocol over UDP, I got the Phase01 code implemented quite quickly compared to my previous efforts. The protocol allows low priority packets (like periodic statistics and the video stream) to be lost and guarantees the delivery of high priority packets (control commands etc.). Also, only the latest control command of each type is retransmitted, i.e. an old packet is not retransmitted if a new overriding command has already been given.
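The retransmission idea in a nutshell: one slot per command type, and only the newest unacknowledged packet in each slot is ever resent. A sketch of the logic; the command types and udp_send() are illustrative, not the real protocol:

#include <stdint.h>
#include <string.h>

enum cmd_type { CMD_SPEED, CMD_TURN, CMD_CAMERA, CMD_TYPE_COUNT };

struct pending_cmd {
    int      used;        /* slot has been sent at least once  */
    int      acked;       /* latest seq has been acknowledged  */
    uint16_t seq;
    uint8_t  payload[8];
    int      len;
};

static struct pending_cmd pending[CMD_TYPE_COUNT];
static uint16_t next_seq;

extern void udp_send(int type, uint16_t seq, const uint8_t *buf, int len);

/* Sending overwrites the per-type slot, so an older unacked command of
 * the same type silently drops out of retransmission. */
void send_command(enum cmd_type type, const uint8_t *buf, int len)
{
    struct pending_cmd *p = &pending[type];
    p->used  = 1;
    p->acked = 0;
    p->seq   = next_seq++;
    p->len   = len;
    memcpy(p->payload, buf, len);
    udp_send(type, p->seq, p->payload, len);
}

void handle_ack(enum cmd_type type, uint16_t seq)
{
    if (pending[type].seq == seq)   /* acks for overridden packets are ignored */
        pending[type].acked = 1;
}

/* Called periodically: resend only the newest unacked command per type. */
void retransmit_tick(void)
{
    for (int t = 0; t < CMD_TYPE_COUNT; t++)
        if (pending[t].used && !pending[t].acked)
            udp_send(t, pending[t].seq, pending[t].payload, pending[t].len);
}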

The controller GUI shows the status the slave sends, like motor speeds, WLAN signal strength and CPU load average, plus some protocol statistics like round trip time and the number of retransmissions.

Currently the motors are controlled using the a, s, d and w keys in 10% steps, and the camera is controlled by dragging the mouse with the left button pressed on top of the video window.

Here’s a video (direct link) of the Phase01. You need an HTML5 video capable browser with Ogg Theora/Vorbis codecs.

Suspending the Workstation

I’ve always kept my computers up’n’running day and night, although they are mostly idle while I sleep or labour at work.

I’m too lazy to shut down the whole thing a few times a day, but the electricity bill is getting quite big, so I started to think about suspending the computer.

Of course the suspend feature has been used in laptops for ages, but I guess not that much in Linux workstations. Occasionally I need to access my workstation remotely, but that’s what wake-on-LAN is for, and I still have my firewall computer always running.
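A wake-on-LAN packet is nothing special: a UDP broadcast carrying six 0xFF bytes followed by the target's MAC address repeated sixteen times. A minimal sender (the MAC below is an example; tools like etherwake do exactly this):

#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    const unsigned char mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
    unsigned char pkt[102];                 /* 6 + 16 * 6 bytes       */

    memset(pkt, 0xFF, 6);                   /* synchronization stream */
    for (int i = 0; i < 16; i++)
        memcpy(pkt + 6 + i * 6, mac, 6);    /* MAC sixteen times      */

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(s, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(9);        /* discard port, by convention */
    addr.sin_addr.s_addr = htonl(INADDR_BROADCAST);

    sendto(s, pkt, sizeof(pkt), 0, (struct sockaddr *)&addr, sizeof(addr));
    close(s);
    return 0;
}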

I installed a package called hibernate and configured it to suspend to RAM instead of suspending to disk. It takes only a few seconds to come up from the suspended state and I have all my windows right where I left them. Only the network connections get cut off, which is a bit annoying as I use SSH for certain things. In the over a hundred suspends I’ve done so far, there have been one or two occasions where the workstation hasn’t come up without some extra kicking.

I measured the power consumption of two workstations with two different energy meters. One is a proper one from the electric company and one is a cheap one from a local store.

I conducted one to three measurements for each state; the table below shows the typical values.

Computer  State    W (Good meter)  W (Cheap meter)
1         Idle     55              65
1         Suspend  2               23
1         Off      1               23
2         Idle     98              87
2         Suspend  2               21
2         Off      1               19

Based on these results you do save some power by suspending the computer (and the cheap meter is close to useless). The actual amount of saved electricity depends on the consumption in the idle state: suspending computer 1 for eight hours, for example, saves roughly 53 W × 8 h ≈ 0.4 kWh. Newer computers have decent power management, and my workstation doubles its consumption from 100 W to over 200 W when I e.g. start a 3D game.

Maemo Service Handler

It’s convenient for a Maemo developer to have sshd and syslogd running on the device, but in normal use they just consume resources and wear out the flash.

I made a little Control Panel plugin, with zuh’s help, for starting and stopping System V style init script actions. It uses invoke-rc.d for starting and stopping services, and update-rc.d for adding and removing services from the system boot time actions.

It has only basic features implemented, i.e. it can start and stop services and add and remove them from startup routines.
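Under the hood it all boils down to running invoke-rc.d and update-rc.d with the right arguments, roughly like this (a simplified illustration with ssh as the example service, not the plugin's actual code):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static int run(char *const argv[])
{
    pid_t pid = fork();
    if (pid == 0) {
        execvp(argv[0], argv);
        _exit(127);                     /* exec failed */
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}

int main(void)
{
    /* Start sshd now, and enable it in the boot time runlevels. */
    char *start[]  = { "invoke-rc.d", "ssh", "start", NULL };
    char *enable[] = { "update-rc.d", "ssh", "defaults", NULL };
    run(start);
    run(enable);
    return 0;
}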

TODO:

  • Speed up starting time by getting rid of fork+exec
  • Add infoprints about the success of the executed commands
  • Sort services alphabetically
  • Add option to hide “built-in” services

Maemo Service Handler CPA

Be aware that this includes a suid root binary for executing the System V scripts!

The plugin UI is written in C++, so you need to have the extras repository in your catalogue list for the gtkmm libraries:

Web address: http://repository.maemo.org/extras/
Distribution: bora
Components: free non-free

The plugin can be installed from my Maemo repository:

Web address: http://tuomas.kulve.fi/debian
Distribution: bora
Components: maemo

Maemo screen grabber

Note! This is now in garage.

I made a simple wrapper for osso-screenshot-tool. It is started from the Others menu and, after a 5 second delay, it takes a screenshot and saves it to the Images folder.

You can install it on your N800 from my Maemo repository:

Web address: http://tuomas.kulve.fi/debian
Distribution: bora
Components: maemo

Or with single click install.

Chinook

Updated for Chinook.

Web address: http://tuomas.kulve.fi/debian
Distribution: chinook
Components: maemo

Or with single click install.