I spy with my PS3Eye

In which we discover the limits of webcams connected to the BeagleBone Black.

As the previous couple of posts may have hinted, I am currently working on a computer vision application.  On the hardware side, I am using a webcam connected to a BeagleBone Black to capture and process images.  Finding the right camera and software configuration seems to be a challenge many people are trying to overcome.  The following is what I have learned through experimentation.

During my first foray into the world of webcams on the BBB, I chose the PS3Eye.  The PS3Eye has been used for many computer vision applications thanks to its ability to produce uncompressed 640x480 images at up to 60 FPS or uncompressed 320x240 at up to 120 FPS.  The ability to capture uncompressed images at high frame rates plus being available for $16.98 would normally make the PS3Eye a fantastic choice; however, we are dealing with the BBB.

If you plug a PS3Eye into the BBB and fire up an OpenCV application to capture at 640x480, you will receive "Select Timeout" errors instead of a video stream.  If you do the same but with the resolution set to 320x240, it will work.  It turns out the PS3Eye transfers data in bulk mode over USB.  In bulk mode, you are guaranteed to receive all of the transmission; however, you are not guaranteed timing.  What is essentially happening is that the PS3Eye is saturating the bulk-transfer allotment on the USB bus.  The reason you encounter this problem at 640x480 and not 320x240 is that OpenCV with Python sets the frame rate to 30 FPS and provides no way to change it.  We can calculate the amount of data put on the bus as follows:

Height * Width * (Channels * 8) * FPS

So for our uncompressed images at 640x480 we have:

640 * 480 * (3 * 8) * 30 = 221184000 bits/s or ~26.36 MB/s

and 320x240 is ~6.59 MB/s

As OpenCV with Python does not allow you to set the frame rate, I modified v4l2grab[1] to accept frame rate as a command line argument.  With this, I discovered you can capture images from the PS3Eye at 640x480 as long as you set the frame rate to 15 FPS or less.  You can also capture images at 320x240 at up to 60 FPS.  The astute reader will notice that 640 * 480 * (8 * 3) * 15 = 320 * 240 * (8 * 3) * 60, which is ~13.2 MB/s.  In other words, the USB on the BBB taps out at ~13.2 MB/s for bulk transfers.

At this point you might be thinking you do not have to worry about frames per second because you will only take still shots.  It turns out the UVC driver under Linux does not support still image capture[2].  To capture an image, you open the webcam the same way you would to capture a stream; you just grab one frame (or more if needed).

If you would like to capture 640x480 or larger images at 30 FPS or faster, all is not lost, but you will need a webcam that supports some sort of native compression.  In my case, I am using a Logitech C920.  It can compress using both H264 and MJPEG.  If you want to capture a video stream, H264 is probably your best choice as it should have fewer compression artifacts.  If you are after still shots, MJPEG will be your friend.

MJPEG typically compresses each frame as a separate jpeg*.  Since MJPEG uses intra-frame compression, you only need to capture one image for a still shot.  H264 uses inter-frame compression - meaning it relies on information from several frames to determine how to compress the current frame.  In order to reconstruct the frame, you need all the ones involved in the compression.  I know the last two sentences are a great simplification, but they suffice for our discussion.

In order to test the different combinations of frame rates and encodings, I extended the v4l2 capture sample available from the project's website[3].  To the base sample I added the ability to specify image dimensions, frame rate, and pixel format (ie compression).  I also added handling for CTRL-C so the webcam is not left in an inconsistent state if you kill the program with CTRL-C, and the ability to set the timeout value and maximum number of timeouts.

The program is available here: framegrabber.c.

Please note this software is not finished.  I am publishing it now so others may use it to determine the capabilities of their webcams, but I will be improving and extending it in the future.  You may consider the capture timing functionality described in 1 below to be complete, while the saving of frames described in 2 will change.

To compile framegrabber you must have the development files for v4l2 installed.

Compile with:

gcc framegrabber.c -o framegrabber

At this time, framegrabber is intended to be used in one of two ways.

1.  Timing frame capture rates.

To time frame capture rates, simply run framegrabber under time, omit the -o switch, and set -c to the number of frames you would like to capture.  Omitting -o instructs the program to simply discard all captured frames.  In this mode, framegrabber will capture c frames from the webcam as fast as possible.

Here is the simplest case:

time ./framegrabber -c 1000

And here we set the pixel format, image dimensions, and frame rate:

time ./framegrabber -f mjpeg -H 480 -W 640 -c 1000 -I 30

Have a look at all the other command line switches to get a sense of the possibilities.

2.  Capturing frames from a webcam

As mentioned above, I extended the application v4l2grab to support setting the frame rate.  v4l2grab allows you to capture jpeg images from webcams that support the YUYV format.  It grabs frames in YUYV format and then converts the frames to jpeg.

When capturing frames with framegrabber, the raw frame is written out.  No conversion to jpeg is done.  This is mostly a proof of concept to show that frames captured in MJPEG format are individual jpegs and can be written out without further processing.  This has been tested with a Logitech C920, and the output is indeed a jpeg image.  Capturing in H264 and YUYV format will also work, but you will not be able to simply open the resulting file in your favorite image editor.

Currently there is no way to specify the filename for the frame or frames captured, and if -c is greater than one, the first c - 1 frames will be overwritten by frame c.  To capture a frame, include the -o switch and set -c to one.  The resulting frame will be written to capture.jpg.

./framegrabber -f mjpeg -H 480 -W 640 -c 1 -o

And now for the results of testing both the PS3Eye and the Logitech C920.

Here we see capturing 1000 frames from the Logitech C920 in MJPEG format takes ~33.6 seconds which is ~29.76 frames per second.

Here we see capturing 1000 frames from the Logitech C920 in YUYV format takes ~67 seconds which is ~14.92 frames per second.

Moving to the PS3Eye we see that if we try to capture at 30 FPS, we receive a select timeout error, but if we set the frame rate to 15, we are successful.  If you compare the results of the PS3Eye capture with the results of the Logitech C920 YUYV test, you will see the real times are essentially the same, almost 15 frames per second.

At this point you may be wondering why the Logitech C920 does not receive select timeouts at 30 FPS YUYV but the PS3Eye does.  Notice that even though we set the frame rate to 30 FPS, we receive frames from the C920 at about 15 FPS.  The C920 uses isochronous transfers as opposed to the bulk transfers of the PS3Eye, and isochronous guarantees transfer speed but not delivery.  It is likely that frames are getting dropped, but enough make it through fast enough that we do not receive select timeouts.  I have not tested this further as of now.  For more information on USB transfers see [4].

In our final screenshot we can see that framegrabber uses very little CPU (0.3%) while just grabbing frames.

I hope you find framegrabber useful.  The interested reader can extend the process_image function to do as they will with frames captured from the webcam.

It seems some MJPEG streams omit the Huffman table, causing the resulting jpeg captures to fail to open in some programs[5].  The error message, if any, is something along the lines of "Huffman table 0x00 was not defined".  If you cannot open the MJPEG captures, please try the Python script MJPEGpatcher.py below.  MJPEGpatcher should patch the captured jpeg with the required information.  It takes a single command line argument, -f FILENAME, and outputs FILENAME[patched].jpg.  The Logitech C920 does not exhibit this behavior.  MJPEGpatcher has been tested and works on images captured by a Toshiba webcam built into a laptop as well as an image submitted by a reader.  I would appreciate any feedback.

William C Bonner pointed out in the comments that I neglected to provide any timing information for the C920 for resolutions greater than 640x480.  When researching this post, I was interested in explaining why the C920 could provide 640x480 at 30 FPS and the PS3Eye could not.  In doing so, I focused on the greatest resolution and frame rate the two cameras had in common.  To redress my omission, here are timings for the C920 for 1920x1080 at 30 FPS in MJPEG, H264, and YUYV formats.  It can be seen below that the C920 is able to provide 1920x1080 at 30 FPS in both MJPEG and H264 formats, but YUYV tops out around 2.49 FPS.


framegrabber.c (18.79 kb)
MJPEGpatcher.py (6.46 kb)

[1] https://github.com/twam/v4l2grab
[2] http://www.ideasonboard.org/uvc/
[3] http://linuxtv.org/downloads/v4l-dvb-apis/capture-example.html
[4] http://www.beyondlogic.org/usbnutshell/usb4.shtml
[5] http://www.the-labs.com/Video/odmlff2-avidef.pdf

Comments (22) -

  • William C Bonner

    9/13/2013 12:42:40 PM |

    I've not dug into your code to see what I'm doing differently, but I know that I'm pulling H264 video from the C920 at 30fps on my BBB using the v4l2 interface at both 1920x1080 and 1280x720.

    I've noticed that the C920 will reduce the frame rate in low light situations. I've also noticed that the H264 output seems to be about 3MB/s at either 1080 or 720. Both of those indicate that the camera itself is making its own decisions about what is important.

    The autofocus can cause interesting effects as well. I've only just begun working with v4l2 and OpenCV so there may be plenty of things I'm overlooking, or oversimplifying.

    • Lemoneer

      9/13/2013 1:14:04 PM |


      Thanks for the comment.  I did not measure the FPS for 1920x1080 while testing for the article because I was trying to compare the PS3Eye to the C920 (based on a project someone else is working on) and the PS3Eye tops out at 640x480.  I do not remember the exact size of single frames captured at 1920x1080 in H264, but I believe it was somewhere around 300 KB.  That would work out to ~8.78 MB/s at 30 FPS which would be within the ~13.2 MB/s the USB seems to handle; however, I would think the actual stream would use less as it is likely the first frame captured would be a reference frame and not one compressed based on data from previous reference frame(s).  All that to say I would expect the C920 to be able to deliver 1920x1080 in either MJPEG or H264 at 30 FPS.  I will do the testing tonight and update the article.

  • Ray

    9/25/2013 2:09:23 AM |

    Thanks for making your work available.
    How did you determine what type of usb transfers (bulk, etc...) were taking place?
    In framegrabber.c, read_frame has 3 different ways of reading the data.  Do these differences impact throughput or latency?

    • Lemoneer

      9/25/2013 5:04:25 AM |


      You can see what type of transfer method is used with:

      sudo lsusb -v

      Locate your camera (or device) and you should find an entry (or entries depending on how many endpoints your device creates) like the following:

      Endpoint Descriptor:
              bLength                         7
              bDescriptorType          5
              bEndpointAddress      0x02  EP 2 OUT
              bmAttributes                2
                Transfer Type             Bulk
                Synch Type                 None
                Usage Type                Data
              wMaxPacketSize         0x0040  1x 64 bytes
              bInterval                      0

      I do not have my camera with me at the moment, so these values are from a different device.  The actual ones returned from your camera might be different.

      The three different ways to transfer the data could have an impact.  They are as follows:

      IO_METHOD_MMAP - Memory Mapped buffers allocated in kernel space mapped into the application's address space.  These buffers can be either large, contiguous DMA buffers or (with hardware support) the video device's I/O memory.  This should be the fastest method of grabbing the frames and is the default.

      IO_METHOD_USERPTR - Buffers allocated in the application user space.  In this case it could be harder for the driver to provide efficient I/O.

      The previous two methods provide streaming I/O and are typically the most efficient way to move data.  The third option does not support streaming.

      IO_METHOD_READ - Uses normal system calls.  This is typically the slowest depending on the hardware and the driver used.

      Thanks for the question.  I will update the article to better distinguish the read_frame methods.

      • Ray

        9/25/2013 9:35:16 AM |

        Thanks for the info.
        Using a cheap webcam and the v4l2 code I get a select timeout while streaming at 320x240 at 30 fps at varying intervals (several minutes on average).
        If I use a good webcam like a Logitech 720p I don't get the select timeout errors.

        If I stop and restart capturing when I get a select timeout, then I can stream video indefinitely (the glitch in the video stream while restarting is acceptable).

        Any better ideas on how to handle select timeouts?

        • Lemoneer

          9/25/2013 10:11:20 AM |

          If your webcam uses the UVC driver, you can try the bandwidth quirk found here http://www.ideasonboard.org/uvc/faq/ with the following commands

              rmmod uvcvideo
              modprobe uvcvideo quirks=0x80

          Without having the camera to test with, I cannot really offer more.

  • Matteo

    12/10/2013 3:36:16 AM |

    Useful code!
    May I ask a question?
    What is the lightest way (computationally speaking) to pass a grabbed frame to OpenCV?

  • jay

    12/13/2013 2:57:23 PM |

    Thanks very much for your investigation!

    I've been using your modified v4l2grab to perform some tests and noticed similar behavior, able to achieve data rates up to about 10.4 MB/s with a logitech C270. However, I'm very interested in getting a cheap ($5) kodak S101 working at 640x480. Unfortunately it doesn't support compression and only goes down to 15 fps. Based on your observations, this should be sufficient – below the magical ~13.2 MB/s limit, but when I tell v4l2grab to use these settings, it simply hangs. Do you have any thoughts on what might be happening or how I might be able to debug this?


    • Lemoneer

      12/18/2013 6:12:34 AM |

      Do you have a desktop (regular) computer running Linux you can test with to confirm the camera works under Linux?

      • jay

        12/18/2013 1:33:54 PM |


        The Kodak S101 does work under my desktop ubuntu machine. It also works on the BBB using a lower resolution, so I suspect it's a bandwidth issue – I was just sad that the 640x480 @ 15fps didn't work as I thought this would be within the max bandwidth. The S101 only supports 30 fps and 15 fps.

        I'm now interested in playing with the ps3Eye because of its 1/4" sensor size, which, for most lenses, should allow a better field of view than the 1/6" of most cheap webcams.

  • masen

    3/8/2014 10:44:42 PM |

    Stupid question, but I ran
    ./framegrabber -f H264 -H 480 -W 640 -c 1000 -I 15; where are my outputs located?

    • Lemoneer

      3/10/2014 5:21:42 AM |

      The images should be written to the working directory, but you have to be careful with the format.  You have specified H264 as the format.  As any given frame in H264 is dependent on other frames, you will not be able to view them without properly decoding the entire collection (typically from the beginning).  Both the YUYV and MJPG formats encode each frame as an individual image.  In the case of MJPG, each frame results in a jpeg.  For YUYV, some software can open this format directly (i.e. OpenCV can load and display YUYV images), or you can convert it to a jpeg.

      I hope this helps.  If you have any further issues, let me know.

      • Tim

        3/10/2014 7:25:09 PM |

        For some reason when I ran ./framegrabber -f yuyv -H 480 -W 640 -c 10 -I 15
        there is no capture.jpg in my working directory. Do I need to modify the code to get pictures from the PS3 webcam?

        • Lemoneer

          3/10/2014 8:01:48 PM |

          framegrabber supports two modes:
              Mode One - timing mode.  Does not write out the frames.  It just captures and
                  discards them.
              Mode Two - output mode.  Captures the number of frames specified by -c.  The
                  last frame captured will be the one available on disk.  You activate this mode
                  with the -o switch.

          The switches you are looking for are
              ./framegrabber -f yuyv -H 480 -W 640 -c 10 -I 15 -o
              ./framegrabber -f mjpg -H 480 -W 640 -c 10 -I 15 -o
          if you would like a jpeg capture.

          • Tim

            3/11/2014 10:40:50 AM |

            Thanks for your timely reply. I got 640x480 from the PS3 Eye but I think I will still move on to the Logitech C920. The performance of YUYV worries me, both CPU-wise and FPS-wise.  Very good article here, it helps me a lot.

  • Tim

    4/7/2014 9:50:52 AM |

    I noticed that you said even if only one frame is grabbed, the camera still streams video to the BeagleBone Black.
    If I want to take still shots at 1920x1080 every 5 seconds, do I need to call stop_capturing();uninit_device();close_device();
    every time a frame is captured in order to stop the camera from streaming? Is there anything I can do to minimize CPU consumption in framegrabber?

    I need to capture pictures from two C920 periodically while doing some other CPU intensive stuff.

    • Lemoneer

      6/10/2014 2:20:24 PM |


      I apologize for taking so long to respond; somehow I missed your comment.

      You can call stop_capturing();uninit_device();close_device(); and then set it back up when you want another image, but the overhead may not be worth it.  What I have been doing is just ignoring X number of frames based on the FPS.  For every 5 seconds, I would ignore 150 frames between each frame I process.  MJPG frames are small enough that the USB load should not cause you any trouble, and they should not cause any load on the CPU.  The version of framegrabber I wrote for this post blog.lemoneerlabs.com/.../bbb-optimized-opencv-mjpeg-stream has a flag, -o, that allows you to specify how many frames to skip and should point you in the right direction.

      • Tim

        12/4/2014 11:29:17 AM |

        Hey Lemoneer,
        I have been testing framegrabber with the C920 and it looks like the camera didn't auto-focus at all even though the auto_focus field was set to one. Is there any way to know when the C920 is done focusing and then take a picture?


  • Victor Santos

    9/28/2014 9:44:59 AM |

    At first, thanks so much for the contribution; it is a very interesting investigation and the code is very useful.

    I had no success capturing MJPEG with my Logitech C270 using "./framegrabber -f mjpeg -H 480 -W 640 -c 1 -o". The terminal reports 0 captured frames; when I try to open capture.jpg with image viewers I get the error "Huffman table 0x00 was not defined", and when I patch the file with your python script MJPEGpatcher I get a "Bogus marker length" error.

    In theory, the C270 supports MJPEG; I don't know why this is happening, and I have searched a lot with no results.

    I had no success even when I captured using the YUYV pixel format.


  • michael

    12/4/2014 6:05:12 AM |

    The file does not compile properly:

    rov@OpenROV:/opt/GRAB-IMG$ gcc FRAMEGRABBER.C -o framegrabber
    FRAMEGRABBER.C:74:33: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
    static char         *dev_name = "/dev/video0";
    FRAMEGRABBER.C:87:33: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
    static char         *out_name = "capture.jpg";
    FRAMEGRABBER.C: In function ‘void init_read(unsigned int)’:
    FRAMEGRABBER.C:389:39: error: invalid conversion from ‘void*’ to ‘buffer*’ [-fpermissive]
      buffers = calloc(1, sizeof (*buffers));
    FRAMEGRABBER.C: In function ‘void init_mmap()’:
    FRAMEGRABBER.C:430:47: error: invalid conversion from ‘void*’ to ‘buffer*’ [-fpermissive]
      buffers = calloc(req.count, sizeof (*buffers));
    FRAMEGRABBER.C: In function ‘void init_userp(unsigned int)’:
    FRAMEGRABBER.C:481:39: error: invalid conversion from ‘void*’ to ‘buffer*’ [-fpermissive]
      buffers = calloc(4, sizeof (*buffers));

Comments are closed