In which we enable WiFi on the Wandboard Dual and Quad.

I have come to the time in a project where my glorious BeagleBone Black is struggling to keep up.  What to do?  Fire up the Wandboard Quad!  As with the BeagleBone Black, I prefer to run Arch Linux Arm.  And as before, WiFi does not immediately work (despite WiFi being built into the Wandboard Quad).  Luckily, getting WiFi running is much easier than on the BeagleBone Black.

The WiFi chip on the Wandboard Dual and Quad is a Broadcom BCM4329 connected via SDIO, and it requires both a firmware blob and an nvram file in order to work.

First, we will load the nvram.  The commands below will fetch the nvram file from Freescale's GitHub (Freescale is the maker of the processor used on the Wandboard) and place it in the proper directory.

wget -c
sudo mv -v nvram.txt /lib/firmware/brcm/brcmfmac-sdio.txt

Next we need to copy the firmware already present in Arch Linux Arm to its generic name.  This is required because the kernel Arch uses for the Wandboard is v3.0.35-3, and for kernels older than v3.13, the SDIO driver used generic firmware names[2].

cp -v /lib/firmware/brcm/brcmfmac4329-sdio.bin /lib/firmware/brcm/brcmfmac-sdio.bin

Now reboot the Wandboard.  When it comes back up, the output of ip link should include wlan0.

With the WiFi adapter recognized, we can connect to the router.

Install wpa_actiond

pacman -S wpa_actiond

Create a base config file

cp -v /etc/netctl/examples/wireless-wpa-configsection /etc/netctl/wireless_wpa_configsection

Generate the required wpa_supplicant config data

wpa_passphrase SSID PASSWORD

Insert the config section into your config file

nano /etc/netctl/wireless_wpa_configsection

Now test out the connection

netctl start wireless_wpa_configsection

Assuming no errors, set WiFi to load at boot

systemctl enable netctl-auto@wlan0.service

Note 1: Loading the firmware and nvram is not necessarily specific to Arch Linux Arm. If your flavor of Linux does not recognize the WiFi adapter, give it a go.

Note 2: Since the adapter is connected using SDIO, it will not show up in the output of lspci -k or lsusb -v.

[1] Wandboard
[2] Broadcom Linux drivers

I suggest you gentlemen invent a way to put a square peg in a round hole.

In which we optimize OpenCV on the BeagleBone Black.

If you have been paying attention to the BBB group on Google Groups, you may have discovered a lively thread on webcams [1]. As part of this thread, I have been working with Michael Darling to realize the best performance possible when using OpenCV to process an MJPG stream from a webcam on the BBB.  OpenCV relies on libjpeg when loading the MJPG frames to OpenCV Mats, and libjpeg is not the fastest of jpeg libraries.  However, there is another option - libjpeg-turbo [2].

While it is technically possible to compile OpenCV with libjpeg-turbo support on the BBB, you will have fewer issues and spend less time compiling if you use a cross compiler on a more powerful computer.  Michael has written a guide to cross compiling OpenCV.  Below you will find the guide as a webpage for online viewing or a pdf for download as well as the latest code to capture frames from a webcam and convert them to OpenCV Mats.  The guide is currently a draft, and we would appreciate any feedback you can provide.  Many thanks to Michael for taking the time to write this up.

Note: The code in the guide and the latest code are slightly different.

  1. -o is used to indicate which frames to convert to OpenCV Mats and requires an integer argument.
        -o 1 will convert every frame
        -o 2 will convert every 2nd (every other) frame, and so on.
        The default is 1.
  2. -p is similar to -o in the original framegrabber.  However, it doesn't actually output anything; it just controls whether any frames are to be converted.
  3. Captured count and processed count variables have been renamed and moved to the top.
  4. Formatting has been corrected.
  5. Setting of the Mat size now uses the width and height variables.


The guide will be updated to reflect these changes in a future release, but in the meantime, you will need to adjust the command line arguments specified in the guide, namely replace -o with -p.

Guide [DRAFT]
    BBB_30fps.pdf (204.32 kb)
    framegrabberCV.c (19.77 kb)

[1] problems with webcams
[2] libjpeg-turbo

I get the news I need on the weather report

In which we publish a MJPEG stream from the BeagleBone Black.

Continuing with the series of posts on OpenCV, webcams, and MJPEG, today we will look at streaming an MJPEG capture from the BBB.  Before I get into it though, you should know that I did try FFMPEG/avconv and VLC to stream video from the BBB using RTP, but the several seconds of latency made it unsuitable for my needs.  You should also know that I do not claim that this is the one true way to stream video from the BBB.

The libraries used:

  • ZeroMQ[1] and CZMQ[2] - used to create pub/sub connections between the BBB and software running on a desktop
  • OpenCV[3] - used to display the MJPEG stream


The subscriber was compiled and tested under Windows 7 using Visual Studio 2012; however, the code should compile under Linux with very few, if any, modifications.


I am working on a project that requires a video stream from the BBB be consumed in N places where N is a minimum of 2.  The stream will be processed using OpenCV, and because of the nature of this project, I need as little latency in the video stream as possible.

Theory of Operation:

The BBB will capture frames in MJPEG format from a webcam via a modified version of framegrabber.  The modified version of framegrabber can run indefinitely and outputs the frames as a series of ZeroMQ messages over a publish socket.  The clients will subscribe to the publish socket on the BBB using ZeroMQ and load each frame received into OpenCV.

The ZeroMQ pub/sub configuration allows many clients to connect to the published stream.  No synchronization is used between the publisher and subscriber; the stream is treated as continuous, and the subscribers are free to connect and disconnect at will.


Single subscriber 640x480 - cpu use on the BBB ~4.3% and memory use ~0.8%

Multiple subscribers 640x480 - cpu use on BBB ~6.6% and memory use ~0.8%

Single subscriber 1920x1080 - cpu use on BBB ~23.2% and memory use ~3.5%

Using this setup, I have been able to stream frames with a resolution as high as 1920x1080 with little to no latency, but there is a limitation: the network.  When using this over WiFi with high resolutions or several clients running on one machine, I noticed the frame rate would drop the further I went from the router.  If you watch the output of the top command on the BBB as you get further from the router, you will see framegrabber's memory use begin to climb.  This is due to the publish socket buffering the data.  As you walk back towards the router, you will see the memory use drop until it, and the frame rate, stabilizes.  During this stabilization period you will probably experience delayed video that is displayed at a higher frame rate than normal as the buffer is flushed.

There are several things you can do to reduce or eliminate this latency.

  1. If possible, use a wired connection
  2. Use an 802.11 N router and clients
  3. Make sure your WiFi router is optimally located
  4. Adjust the QoS settings of your router to give higher or highest priority to the traffic on the port you publish over


To reduce the amount of time it takes for subscribers to catch up once their connection has improved, the high water mark on the socket can be reduced.  This has the effect of dropping frames once too many are buffered and essentially reduces the amount of buffered data a subscriber has to process to get in sync.

The reader may find it interesting that it does not matter if the BBB publisher or the subscribers are started up first.  The publisher will simply dump data until at least one subscriber connects, and the subscribers will wait on the publisher.  In addition, you can kill the publisher while subscribers are connected, restart it with new (or the same) settings, and the subscribers will continue on.  The reader should verify this by changing the resolution after the subscriber or subscribers have connected.
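The pub/sub mechanics described above are easy to demonstrate in miniature.  The sketch below uses pyzmq rather than the CZMQ used on the BBB; the port, high water mark value, and payload are placeholders, not values from the actual publisher.

```python
import time
import zmq

ctx = zmq.Context()

# Publisher side: bind, and cap the send buffer so stale frames are
# dropped rather than buffered when a subscriber falls behind.
pub = ctx.socket(zmq.PUB)
pub.setsockopt(zmq.SNDHWM, 10)          # keep at most ~10 queued frames
pub.bind("tcp://127.0.0.1:5556")

# Subscriber side: connect and subscribe to everything.
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, b"")
sub.setsockopt(zmq.RCVTIMEO, 2000)      # fail rather than block forever
sub.connect("tcp://127.0.0.1:5556")

time.sleep(0.2)                         # cover ZeroMQ's slow-joiner window

pub.send(b"fake jpeg frame")            # the real publisher sends MJPEG frames
frame = sub.recv()
print(len(frame))
```

In a real deployment the publisher and subscriber are separate processes, so no sleep is needed; as noted above, either side can start first, and a late subscriber simply begins receiving from the next published frame.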


framegrabberPub.c (17.96 kb) Publisher - you will need zhelpers.h

compile with

gcc framegrabberPub.c -lzmq -o framegrabberPub 

framegrabberSub.c (3.79 kb) C client

[BONUS] (2.52 kb) Python client

The Python client will display the stream with little latency until garbage collection occurs.  When this happens, the display will freeze and the buffered data on the BBB will increase.  Once garbage collection completes, the display will eventually synchronize much like the WiFi issue detailed above.

If your wireless router is capable of broadcasting at both 2.4 and 5 GHz at the same time, you can improve performance when using a WiFi connection for both the publisher and the subscriber by having one connect at 2.4 GHz and the other connect at 5 GHz.

Added a link to zhelpers.h needed to compile the publisher.


I spy with my PS3Eye

In which we discover the limits of webcams connected to the BeagleBone Black.

As the previous couple of posts may have hinted, I am currently working on a computer vision application.  On the hardware side, I am using a webcam connected to a BeagleBone Black to capture and process images.  Finding the right camera and software configuration seems to be a challenge many people are trying to overcome.  The following is what I have learned through experimentation.

During my first foray into the world of webcams on the BBB, I chose the PS3Eye.  The PS3Eye has been used for many computer vision applications thanks to its ability to produce uncompressed 640x480 images at up to 60 FPS or uncompressed 320x240 at up to 120 FPS.  The ability to capture uncompressed images at high frame rates plus being available for $16.98 would normally make the PS3Eye a fantastic choice; however, we are dealing with the BBB.

If you plug a PS3Eye into the BBB and fire up an OpenCV application to capture at 640x480, you will receive "Select Timeout" errors instead of a video stream.  If you do the same but with the resolution set to 320x240, it will work.  It turns out the PS3Eye transfers data in bulk mode over USB.  In bulk mode, you are guaranteed to receive all of the transmission; however, you are not guaranteed timing.  What is essentially happening is the PS3Eye is saturating the bulk allotment on the USB.  The reason you encounter this problem at 640x480 and not 320x240 is because OpenCV with Python sets the frame rate to 30 FPS and provides no way to change it.  We can calculate the amount of data put on the bus as follows:

Height * Width * (Channels * 8) * FPS

So for our uncompressed images at 640x480 we have:

640 * 480 * (3 * 8) * 30 = 221184000 bits/s or ~26.36 MB/s

and 320x240 is ~6.59 MB/s

As OpenCV with Python does not allow you to set the frame rate, I modified v4l2grab[1] to accept frame rate as a command line argument.  With this, I discovered you can capture images from the PS3Eye at 640x480 as long as you set the frame rate to 15 FPS or less.  You can also capture images at 320x240 at up to 60 FPS.  The astute reader will notice that 640 * 480 * (8 * 3) * 15 = 320 * 240 * (8 * 3) * 60, which is ~13.2 MB/s.  In other words, the USB on the BBB taps out at ~13.2 MB/s for bulk transfers.
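The arithmetic above is easy to check in a few lines of Python (the MB/s figures use 1 MB = 1024 * 1024 bytes):

```python
# Raw data rate for uncompressed video:
# height * width * (channels * 8) bits per frame, times frames per second.
def bandwidth_mb_s(width, height, fps, channels=3):
    """Return the raw data rate in MB/s (1 MB = 1024 * 1024 bytes)."""
    bits_per_second = width * height * channels * 8 * fps
    return bits_per_second / 8 / (1024 * 1024)

print(bandwidth_mb_s(640, 480, 30))   # ~26.4 MB/s: saturates the BBB's bulk budget
print(bandwidth_mb_s(320, 240, 30))   # ~6.6 MB/s: works fine
# 640x480 @ 15 FPS and 320x240 @ 60 FPS put the same load on the bus (~13.2 MB/s)
print(bandwidth_mb_s(640, 480, 15) == bandwidth_mb_s(320, 240, 60))
```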

At this point you might be thinking you do not have to worry about frames per second because you will only take still shots.  It turns out UVC under Linux does not support still image capture[2].  In order to capture an image, you open the webcam the same way you would to capture a stream; however, you just grab one frame (or more if needed).

If you would like to capture 640x480 or larger images at 30 FPS or faster, all is not lost, but you will need a webcam that supports some sort of native compression.  In my case, I am using a Logitech C920.  It can compress using both H264 and MJPEG.  If you want to capture a video stream, H264 is probably your best choice as it should have fewer compression artifacts.  If you are after still shots, MJPEG will be your friend.

MJPEG typically compresses each frame as a separate jpeg*.  Since MJPEG uses intra-frame compression, you only need to capture one image for a still shot.  H264 uses inter-frame compression - meaning it relies on information from several frames to determine how to compress the current frame.  In order to reconstruct the frame, you need all the ones involved in the compression.  I know the last two sentences are a great simplification, but they suffice for our discussion.

In order to test the different combinations of frame rates and encodings, I extended the v4l2 capture sample available from the project's website[3].  To the base sample I added the ability to specify image dimensions, frame rate, and pixel format (i.e. compression).  I also added handling for CTRL-C so the webcam is not left in an inconsistent state if you kill the program, as well as the ability to set the timeout value and maximum number of timeouts.

The program is available here framegrabber.c.

Please note this software is not finished.  I am publishing it now so others may use it to determine the capabilities of their webcams, but I will be improving and extending it in the future.  You may consider the capture timing functionality described in 1 below to be complete while the saving of frames described in 2 will change.

To compile framegrabber you must have the development files for v4l2 installed.

Compile with:

gcc framegrabber.c -o framegrabber

At this time, framegrabber is intended to be used in one of two ways.

1.  Timing frame capture rates.

To time frame capture rates, simply run framegrabber under time, omit the -o switch, and set -c to the number of frames you would like to capture.  Omitting -o instructs the program to discard all captured frames.  In this mode, framegrabber will capture -c frames from the webcam as fast as possible.

Here is the simplest case:

time ./framegrabber -c 1000

And here we set the pixel format, image dimensions, and frame rate:

time ./framegrabber -f mjpeg -H 480 -W 640 -c 1000 -I 30

Have a look at all the other command line switches to get a sense of the possibilities.

2.  Capturing frames from a webcam

As mentioned above, I extended the application v4l2grab to support setting the frame rate.  v4l2grab allows you to capture jpeg images from webcams that support the YUYV format.  It grabs frames in YUYV format and then converts the frames to jpeg.

When capturing frames with framegrabber, the raw frame is written out.  No conversion to jpeg is done.  This is mostly a proof of concept to show that frames captured in MJPEG format are individual jpegs and can be written out without further processing.  This has been tested with a Logitech C920, and the output is indeed a jpeg image.  Capturing in H264 and YUYV format will also work, but you will not be able to simply open the resulting file in your favorite image editor.
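A quick sanity check that a captured MJPEG frame really is a standalone jpeg is to look for the JPEG start-of-image and end-of-image markers.  This is just a sketch; some drivers pad frames with trailing NUL bytes, hence the rstrip.

```python
def looks_like_jpeg(frame: bytes) -> bool:
    """True if frame begins with the JPEG SOI marker (FF D8)
    and ends with the EOI marker (FF D9)."""
    frame = frame.rstrip(b"\x00")       # some drivers pad frames with NUL bytes
    return frame.startswith(b"\xff\xd8") and frame.endswith(b"\xff\xd9")

print(looks_like_jpeg(b"\xff\xd8...scan data...\xff\xd9"))  # True
print(looks_like_jpeg(b"raw YUYV data"))                    # False
```

Running this over the capture.jpg produced with -o confirms the frame needs no further processing before viewing.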

Currently there is no way to specify the filename for the frame or frames captured, and if -c is greater than one, the first c - 1 frames will be overwritten by frame c.  To capture a frame, include the -o switch and set -c to one.  The resulting frame will be written to capture.jpg.

./framegrabber -f mjpeg -H 480 -W 640 -c 1 -o

And now for the results of testing both the PS3Eye and Logitech C920

Here we see capturing 1000 frames from the Logitech C920 in MJPEG format takes ~33.6 seconds which is ~29.76 frames per second.

Here we see capturing 1000 frames from the Logitech C920 in YUYV format takes ~67 seconds which is ~14.92 frames per second.

Moving to the PS3Eye, we see that if we try to capture at 30 FPS, we receive a select timeout error, but if we set the frame rate to 15, we are successful.  If you compare the results of the PS3Eye capture with the results of the Logitech C920 YUYV test, you will see the real times are essentially the same, almost 15 frames per second.
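The frame rates quoted here are simply the frame count divided by the real time reported by the time command:

```python
def measured_fps(frames, seconds):
    """Effective capture rate from `time ./framegrabber -c N` output."""
    return frames / seconds

print(round(measured_fps(1000, 33.6), 2))   # C920 MJPEG: ~29.76 FPS
print(round(measured_fps(1000, 67.0), 2))   # C920 YUYV: ~14.93 FPS
```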

At this point you may be wondering why the Logitech C920 does not receive select timeouts at 30 FPS YUYV but the PS3Eye does.  Notice that even though we set the frame rate to 30 FPS, we receive frames from the C920 at about 15 FPS.  The C920 uses isochronous transfers as opposed to bulk like the PS3Eye, and isochronous guarantees transfer speed but not delivery.  It is likely that frames are getting dropped but enough make it through fast enough that we do not receive select timeouts.  I have not tested this further as of now.  For more information on USB transfers see [4].

In our final screenshot we can see that framegrabber uses very little cpu (~0.3%) while just grabbing frames.

I hope you find framegrabber useful.  The interested reader can extend the process_image function to do as they will with frames captured from the webcam.

It seems some MJPEG streams omit the Huffman table causing the resulting jpeg captures to fail to open in some programs[5].  The error message, if any, is something along the lines of "Huffman table 0x00 was not defined".  If you cannot open the MJPEG captures, please try the Python script below.  MJPEGpatcher should patch the captured jpeg with the required information.  It takes a single command line argument, -f FILENAME, and outputs FILENAME[patched].jpg.  The Logitech C920 does not exhibit this behavior.  MJPEGpatcher has been tested and works on images captured by a Toshiba webcam built into a laptop as well as an image submitted by a reader.  I would appreciate any feedback.
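If you want to check whether a capture is affected before patching it, you can scan the JPEG segment headers for a DHT (Define Huffman Table, marker FF C4) segment.  This is a minimal sketch of the check only, not the patcher itself:

```python
def has_huffman_table(jpeg: bytes) -> bool:
    """Walk the JPEG segment headers looking for a DHT segment (FF C4).
    Stops at SOS (FF DA), where entropy-coded scan data begins."""
    i = 2                                       # skip the SOI marker (FF D8)
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xC4:                      # DHT found
            return True
        if marker == 0xDA:                      # start of scan, no DHT seen
            return False
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        i += 2 + length                         # marker bytes + segment length
    return False
```

A frame for which this returns False is missing its Huffman tables and is the kind of capture MJPEGpatcher is meant to fix.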

William C Bonner pointed out in the comments that I neglected to provide any timing information for the C920 for resolutions greater than 640x480.  When researching this post, I was interested in explaining why the C920 could provide 640x480 at 30 FPS and the PS3Eye could not.  In doing so, I focused on the greatest resolution and frame rate the two cameras had in common.  To redress my omission, here are timings for the C920 at 1920x1080 and 30 FPS in MJPEG, H264, and YUYV formats.  It can be seen below that the C920 is able to provide 1920x1080 at 30 FPS in both MJPEG and H264 formats, but YUYV tops out around 2.49 FPS.


framegrabber.c (18.79 kb) (6.46 kb)


time-y wimey... stuff

it's more like a big ball of wibbly wobbly... time-y wimey... stuff

In which we get a Dallas Semiconductor real time clock (RTC) working on the Beagle Bone Black over the I2C bus under Arch Linux Arm.

The Beagle Bone Black has a real time clock (RTC from now on); however, the RTC does not have a battery to allow it to maintain the time when the board is powered off.  This will generally not pose a problem if your board is always connected to the Internet over the wired interface, but it may be a stumbling block otherwise.  Luckily for you, it is easy enough to add an RTC.

Arch Linux Arm has the I2C bus enabled by default and has the driver for the DS1307 compiled into the kernel, as can be seen below in the screenshot of the config file.  This may not be absolutely critical but should prevent any timing issues others have experienced when loading kernel modules at boot.

One great thing about the DS1307 driver is that the driver actually works with several different RTCs.  The below screenshot shows the RTCs listed in the driver source.  This tutorial uses the ChronoDot based on the DS3231; however, you should be able to use any listed.

More information on the ChronoDot can be found at [1], and it was purchased from [2].  It is essentially a break out board for the DS3231, an extremely accurate real time clock.

Connecting the ChronoDot to the Beagle Bone Black requires just four wires.  All you need to do is connect GND, Vcc, SCL, and SDA.  By default you can find all the pins needed on expansion header P9.

GND    pin 1 or 2
Vcc    pin 3 or 4
SCL    pin 19 (I2C2)
SDA    pin 20 (I2C2)

You will want to use the I2C2 bus as I2C1 is used by the board itself.

Power off your board and connect the corresponding pins on the ChronoDot to the pins on the Beagle Bone Black as listed above.  You can safely ignore the other pins on the ChronoDot as they are not needed for adding an RTC to the Beagle Bone Black.

Power on the Beagle Bone Black and you are almost done.

If you would like to verify that your Beagle Bone Black can see the ChronoDot, you can install the i2c-tools package with
pacman -S i2c-tools
Once installed, you can issue the following command
i2cdetect -y -r 1
This will scan I2C bus 1 (bus 1 in software corresponds to I2C2 on the hardware) for connected devices.  If you have connected the ChronoDot correctly, you should see an entry for 68 as in the image below.  68 is the bus address of the DS3231.  If you do not see a 68, verify your connections and/or the location of the I2C2 bus pins as they can be changed in software.
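The output looks something like the following (illustrative; here only the ChronoDot at address 0x68 is present, and the middle rows are elided):

```
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
...
60: -- -- -- -- -- -- -- -- 68 -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
```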


In order to register the new I2C RTC, you need to issue the command

echo ds3231 0x68 >/sys/bus/i2c/devices/i2c-1/new_device

After this you should see an entry for /dev/rtc1 as well as /sys/class/rtc/rtc1; both point to the same device.

To verify you have the right rtc, check the name with

cat /sys/class/rtc/rtc1/name

and to check the time

cat /sys/class/rtc/rtc1/time
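As an aside, the DS3231 stores its time registers in binary-coded decimal; the kernel driver performs this conversion before you read the sysfs time file.  The decoding amounts to:

```python
def bcd_to_int(reg: int) -> int:
    """DS3231 time registers are BCD: the high nibble holds the tens
    digit and the low nibble the ones digit."""
    return (reg >> 4) * 10 + (reg & 0x0F)

print(bcd_to_int(0x47))   # a seconds register of 0x47 means 47 seconds
```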

Most likely the time is wrong, and you should correct it before proceeding.  The easiest way to do this is to update the system time using ntpd and then synchronize the RTC.

If ntpd is running, you can either wait for it to eventually update the system time or stop it and force an update with

systemctl stop ntpd
ntpd -dqg

Once the system time is correct, synchronize the RTC with

hwclock -f /dev/rtc1 -w

Notice we are specifying which hardware clock to use in the command above.

Now that the RTC has been set to the correct time, we can use it at boot to set our system time.

The first thing you need to do is create a script to recreate the I2C RTC device at boot and synchronize the system clock with it.  Once that is done, you can configure the system to run the script at boot.

Create the script with

nano /usr/lib/systemd/scripts/rtc

and type or copy and paste the following into the script

#!/bin/sh
echo ds3231 0x68 >/sys/bus/i2c/devices/i2c-1/new_device
hwclock -f /dev/rtc1 -s

As promised, this will recreate the I2C RTC and synchronize the system clock to it.

Next you will need to make the script executable with

chmod 755 /usr/lib/systemd/scripts/rtc

Now to make it run at boot.

Create the systemd file with

nano /etc/systemd/system/rtc.service

and type or copy and paste the following into the script

[Unit]
Description=RTC clock
Before=netctl-auto@wlan0.service

[Service]
Type=oneshot
ExecStart=/usr/lib/systemd/scripts/rtc

[Install]
WantedBy=multi-user.target

Test it out by issuing

systemctl start rtc

If you receive no errors, you can enable this to run at boot with

systemctl enable rtc

Otherwise review the error messages to determine the issue.

You might have noticed the line Before=netctl-auto@wlan0.service in the systemd file.  This instructs systemd to bring up the RTC before starting the wireless LAN adapter.  This should prevent any wpa_supplicant issues concerning timestamp validation.

You should now have a working battery backed RTC on your Beagle Bone Black.


The fruits of my labor

In which we try to get the 802.11n WiFi module (RTL8192CU chipset) from Adafruit running on a Beagle Bone Black under Arch Linux Arm.

Please note:  This guide assumes you already have Arch Linux Arm running on your Beagle Bone Black.

First things first, let’s get your installation up to date and install a few needed packages.

You will need the following:

  • iw – command line interface to manage wireless devices (replaced iwconfig)
  • wpa_actiond – needed to automatically connect to wifi networks on boot
  • netctl – used to control the state of the system services for the network profile manager (replaced netcfg)
  • ifplugd – needed to automatically connect to wired networks on boot


Getting started, install iw and wpa_actiond

pacman -S iw wpa_actiond 

Next, you need to remove netcfg as it conflicts with netctl.  If you fail to remove netcfg first, you will likely receive errors when you attempt to install netctl.

pacman -Rs netcfg

You need to update the system before installing netctl and ifplugd.  If you try to install them first, you will receive a message that the packages cannot be found.

Update your system with

pacman -Syu

With the system up to date, install netctl and ifplugd with

pacman -S netctl ifplugd

All required packages are now installed and up to date, and you are ready to configure the networks.

The first thing you want to do is ensure the system will bring up the wired network.  Otherwise, if there is no wifi connection, you will need a keyboard, mouse, and monitor plugged into the Beagle Bone Black in order to issue commands1.

Copy the default dhcp ethernet connection from the examples2 directory to /etc/netctl/

cp /etc/netctl/examples/ethernet-dhcp  /etc/netctl/

Now set it to start when the system starts

systemctl enable netctl-ifplugd@eth0.service

If you were to reboot now, you should at least be able to establish an ethernet connection.

On to wifi!

My network is configured to use WPA to secure the wireless connection, and for security's sake, I hope yours is too.  WPA is handled by wpa_supplicant.  The Arch Linux Wiki indicates nl80211 is the preferred driver for use with wpa_supplicant instead of the older wext.  Unfortunately, I could not get the WiFi module to connect when wpa_supplicant was using the nl80211 driver.

To ensure wpa_supplicant uses wext, edit fi.epitest.hostap.WPASupplicant.service and add -D wext to the Exec line

nano /usr/share/dbus-1/system-services/fi.epitest.hostap.WPASupplicant.service

Another issue I encountered was the need to have the system clock set so wpa_supplicant can validate timestamps.  If the system clock is too far off, validation will fail, and it will not connect (more on this later).  By default, Arch Linux Arm runs OpenNTP to correct the system time.  Depending on how long your board has been running, your system time may or may not be correct.

Issue the command date to check the system time.

If your system time is off, you have a couple options for setting the system clock

Setting it manually with something like

timedatectl set-time "2012-10-30 18:17:16"

Setting it automatically with NTP

I prefer the second choice.


Arch Linux Arm comes with OpenNTP by default.  This package is not maintained for Linux, so let’s switch it out for regular old NTP.

pacman -S ntp

When asked, allow pacman to remove OpenNTP.

Now you can set the time from the internet with

ntpd -dqg

Leave out the d if you do not want to see diagnostic output.

Finally, set the hardware clock to the updated time with

hwclock –w

You can set ntp to run at boot with

systemctl enable ntpd


Now make a config file for WiFi in /etc/netctl/ by copying the wireless-wpa-configsection configuration from the examples directory

cp /etc/netctl/examples/wireless-wpa-configsection /etc/netctl/wireless_wpa_configsection

There are other wpa config files in the examples directory, but the configsection format is the only one that can be loaded automatically at boot.

Also, please notice the hyphens in the destination name have been replaced with underscores.  If you do not change the hyphens, you may get cryptic errors and your WiFi configuration will not work.  Apparently this has to do with how hyphens are (mis)handled.

You can use wpa_passphrase to generate the required wpa_supplicant config data like so

wpa_passphrase SSID PASSWORD

This will output the required configuration.   I hope your password is better than the one in the screenshot.
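For reference, the output has this general shape (the hashed psk shown here is illustrative, not a real hash):

```
network={
	ssid="SSID"
	#psk="PASSWORD"
	psk=4a5bafee2ba6b7f7bf87e6276c0e1b962c1b40d3e9d0a26db0dcb2b0c0f6a4de
}
```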

Copy this data and place it in wireless_wpa_configsection

nano /etc/netctl/wireless_wpa_configsection

Note the single quotes around the configuration keys in the screenshot.

To test your configuration, issue the following commands

netctl start wireless_wpa_configsection

If you receive an error, check the status as instructed in the message.  The first time I tried to connect, it timed out; however, attempting to start netctl again worked.

If all goes well, you should see an entry for wlan0 with an ip address.  If not, double check the configuration file and ensure your SSID and password are correct.  If you have a router that broadcasts on both 2.4 GHz and 5 GHz, be sure to use the SSID in use on the 2.4 GHz band as the Adafruit adapter is 2.4 GHz only.

With everything working, you can set wifi to start at boot with

systemctl enable netctl-auto@wlan0.service

As long as you do not fully power off the board (i.e. by holding the power button for more than 8 seconds, by issuing the poweroff command, or by losing power), WiFi will automatically reconnect.  In other words, if you reboot the board everything will be fine.  If the power is interrupted, WiFi most likely will not automatically reconnect.

This is because the real time clock (RTC) on the Beagle Bone Black does not have a battery backup.  When you power off the board, the RTC is reset.  This causes wpa_supplicant to fail timestamp checks.  I am currently looking at two solutions to this problem.

  1. Using fake hardware clock to periodically save the time to a file and reload it at boot
  2. Using a battery backed RTC over SPI or I2C to maintain the time when the board is powered off


Depending on your needs, solution 1 may not be an option.  I will update these instructions after I have tested both.

1Unless you have installed and configured USB Gadget, but that is a topic for another post.
2In this post, I assume you are using DHCP.  If not, use the static IP address config files.