I'll be back from the dead soon

In which we merge PDFs while rendering them via Ecrion's tools.

But first - I know it has been almost a year since I last posted. I even told myself I would post at least once a month this year. Well, we are 5 months into 2015, and I am not exactly setting the world on fire! As it happens, I am in the final stage of a multi-year project that is becoming more "exciting" with each passing day. I have plenty of things to write about with none of the time required to do so. I hope that changes soon. In the meantime, here is a little snippet resulting from the project I am working on.

To make a rather long and winding story short, as part of the project I am working on, I have to generate PDFs of various types, and some of these PDFs are composed of multiple constituent PDFs.  To do this we are using Ecrion's Rendering Server and Data Aggregation Server.

Searching for information on how to merge PDFs with Ecrion's tools results in recommendations to purchase Ecrion's Digital Assembly Line or use some version of the code below.  The former is an expensive proposition just to merge PDFs, and the latter seems to be written against a previous version of Ecrion's tools.

using Ecrion.XF;

static void mergeOutputs(string[] inputFiles, string outputFile,
                         Ecrion.XF.OutputFormat fmt)
{
    using (XFMergeContext ctx = new XFMergeContext())
    {
        ctx.OutputFormat = fmt;
        foreach (string inFile in inputFiles)
        {
            using (XFDocument doc = new XFDocument())
            {
                // ...
            }
        }
    }
}

In XF 2015, XFMergeContext no longer exists. Instead you need to
  1. Request a MergeContextId from the Engine
  2. Set the MergeContextId in each PDF's RenderingParameters to the Id from step 1
  3. Set IsLastDocumentInMergeContext = true in the RenderingParameters for the last PDF
  4. Render all PDFs to the same stream

Here is a very simple merge of two PDFs to demonstrate this:

// Request your data diagrams from the server
// Not needed if you are not using the Data Aggregation Server
// For instance if you are using XML for your data source
var diagram1 = new ServerDiagram("[DIAGRAM_NAME]");
var jp1 = new JobParameters { Diagram = diagram1, Server = "[SERVER_NAME]" };
var dataSource1 = new DASDataSource(jp1);

var diagram2 = new ServerDiagram("[DIAGRAM_NAME]");
var jp2 = new JobParameters { Diagram = diagram2, Server = "[SERVER_NAME]" };
var dataSource2 = new DASDataSource(jp2);

var engine = new Ecrion.Ultrascale.Engine();
// All PDFs you want to render together need to be sent with the same Id
var id = Ecrion.Ultrascale.Engine.GetNewMergeContextID();

var paramDiagram1 = new RenderingParameters { 
	OutputFormat = Ecrion.Ultrascale.Engine.OutputFormat.PDF,
	Template = new ServerDocumentTemplate("[TEMPLATE_NAME]"), Server = "[SERVER_NAME]",
	// Set the MergeContextID
	MergeContextID = id};

var paramDiagram2 = new RenderingParameters {
	OutputFormat = Ecrion.Ultrascale.Engine.OutputFormat.PDF,
	Template = new ServerDocumentTemplate("[TEMPLATE_NAME]"), Server = "[SERVER_NAME]",
	// Use the same MergeContextID as before
	MergeContextID = id,
	// Tell the engine this is the last one
	IsLastDocumentInMergeContext = true};

using (var outputStream = new FileStream(@"FileName.pdf", FileMode.Create))
{
	// Render both to the same stream
	engine.Render(dataSource1, outputStream, paramDiagram1);
	engine.Render(dataSource2, outputStream, paramDiagram2);
}

Please note: the above was written to (attempt to) clearly demonstrate the idea.  It is not intended to be the best or most efficient code possible.  That I leave to you, dear reader.

Never label anything 'Miscellaneous.'

In which we apply barcode labels to the world.

I am currently working on a project that involves process tracking and automation. The story in a nutshell is the current process is too tedious and cumbersome and results in a great deal of extra work to compile and interpret the metrics. What better way to track a thing, physical or otherwise*, than to affix a barcode to it? None! And did I mention, barcodes are cool?

When making the prototype, I found a plethora of sites that allow users to make barcode labels one at a time with garish watermarks. Perhaps this is workable if one needs but a few barcodes, but it was unsuitable for my needs. Thus I set out to make my own barcode label maker.

As it turns out, a gentleman by the name of Matthew Welch has already done most of the heavy lifting. Mr. Welch has created and released a barcode font[1] for 3 of 9 (code39) barcodes. What's more, he has done so free of charge. With Mr. Welch's font in hand, I created a simple app to generate barcodes.

BarcodeGen source on bitbucket or executable BarcodeGen.zip (10.6KB)

Both the repo on bitbucket and the zip contain the required barcode font that you must install before using the app. Simply open the font file and click the Install button. There are two versions of the font, the standard one and the extended one. The app uses the standard one, but you can modify this line

private static readonly Font BarcodeFont = new Font("Free 3 of 9", 36);
where the string is the font name and the number is the font size (you can make the label larger or smaller by adjusting the font size) to use the extended font.

You can also change the font used for the caption by modifying this line

private static readonly Font CaptionFont = new Font("Consolas", 10);
The generation of each barcode happens in DrawBarcodeLabel. Essentially we create a graphics object to measure how much space we need to render the barcode and the caption.  Whichever is wider is used as the width of the label. To center the smaller of the two strings in the label, we find the center of the larger one, then move to the left half the width of the smaller one. Rinse and repeat.
private static Image DrawBarcodeLabel(string barcode, string caption, Font font, Font captionFont, Color textColor, Color backColor)
{
    // get a graphics object
    var img = new Bitmap(1, 1);
    var drawing = Graphics.FromImage(img);

    // measure the barcode size
    var textSizeBarcode = drawing.MeasureString(barcode, font);
    // measure the caption size
    var textSizeCaption = drawing.MeasureString(caption, captionFont);

    // calculate the total image size
    var width = textSizeBarcode.Width > textSizeCaption.Width ? textSizeBarcode.Width : textSizeCaption.Width;
    var height = textSizeBarcode.Height + textSizeCaption.Height;

    // release graphics object
    drawing.Dispose();
    img.Dispose();

    // now let's make the label
    img = new Bitmap((int)width, (int)height);
    drawing = Graphics.FromImage(img);

    // set the background
    drawing.Clear(backColor);
    // and the text color
    Brush textBrush = new SolidBrush(textColor);

    var codeX = 0f;
    var captionX = 0f;
    // if the caption is smaller than the barcode we center the caption
    // otherwise we need to center the barcode
    if (textSizeCaption.Width < width)
    {
        // center the caption below the barcode
        captionX = width / 2 - textSizeCaption.Width / 2;
    }
    else
    {
        // center the barcode above the caption
        codeX = width / 2 - textSizeBarcode.Width / 2;
    }

    // draw the barcode
    drawing.DrawString(barcode, font, textBrush, codeX, 0);
    // draw the caption
    drawing.DrawString(caption, captionFont, textBrush, captionX, textSizeBarcode.Height);

    textBrush.Dispose();
    drawing.Dispose();

    return img;
}

The application can be used either interactively or you can pass it a file on the command line with contents in the format BARCODE[,CAPTION]

In interactive mode, whatever text is supplied is used to generate the barcode and the caption. The command line mode offers greater flexibility in that the caption can be different from the barcode or omitted altogether. If you make use of this app, I would appreciate hearing from you.
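For example, a hypothetical input file (these barcode values and captions are made up for illustration) might look like:

```
041000,Receiving Dock
041001,Shipping Dock
041002
```

The first two lines would produce labels with custom captions, while the last would produce a barcode with no caption.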

[*] Figuring out how to attach a label to the intangible is left as an exercise for the reader.
[1] http://www.squaregear.net/fonts/free3of9.shtml

Positive and negative, huh? You're a Bit, aren't you?

In which we are strung along with a string of 1s and 0s.

Have you ever had occasion to generate all possible bit combinations of a certain length?  Perhaps you are working through Jeffrey Ullman's Automata class on Coursera or futzing about with crypto/encodings/etc. and need all the possible combinations for testing some brilliant algorithm you have been working on.  I do not remember what I was working on the last two times I implemented this (the above are logical guesses); however, I am posting it so I will not have to do it a third time.

import itertools

def kbits(n, k):
    bit_patterns = []
    # generate all possible k positions to set to 1
    for bits in itertools.combinations(xrange(n), k):
        # create a string of all 0 of the proper length
        s = ['0'] * n
        # set the chosen positions to 1
        for bit in bits:
            s[bit] = '1'
        # rinse and repeat for all possible positions
        bit_patterns.append(''.join(s))
    return bit_patterns

def generate_bit_patterns(length):
    bit_patterns = []
    # starting with no bits set to 1
    # create strings with an increasing number of bits set to 1
    # until all bits are set to 1
    for i in xrange(length + 1):
        bit_patterns.extend(kbits(length, i))
    return bit_patterns

Usage is simple.

print generate_bit_patterns(3)

which will result in

['000', '100', '010', '001', '110', '101', '011', '111']

Note: the implementation of kbits is similar to the one here, and it is entirely likely to be what I started with.
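As an aside (not in the original post), if you do not need the patterns grouped by number of set bits, itertools.product generates the same set of patterns in plain lexicographic order:

```python
import itertools

def all_bit_patterns(length):
    # every length-n string over '0'/'1', in lexicographic order
    return [''.join(bits) for bits in itertools.product('01', repeat=length)]
```

For length 3 this yields '000', '001', '010', and so on up to '111'.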

[UPDATE 16APR2014]

As pointed out by grubernd in the comments, if you are generating long bit patterns, you should probably do it with a generator to conserve resources.  Here is my original, and perhaps harder to understand, attempt.

def bit_pattern(length, set_bits):
    pattern = ['0'] * length
    for i in set_bits:
        pattern[i] = '1'
    return ''.join(pattern)

def bit_range(length):
    return (bit_pattern(length, bits) 
        for i in xrange(length + 1) 
        for bits in itertools.combinations(xrange(length), i))

To test this try

for b in bit_range(3):
    print b

which will result in

000
100
010
001
110
101
011
111

This negative energy just makes me stronger

In which we do things we never wanted to do with VBA - merge sheets from several Excel workbooks into a single workbook.

A coworker came to me a few days ago needing to merge a directory full of Excel workbooks into a single workbook to provide to a 3rd party.   Apparently each workbook is an extract/report of some sort, but the process that generates the files is unable to consolidate the report output into individual sheets in a single workbook.  What we have below is a quick and dirty macro I wrote to get the job done.


You have a directory full of Excel workbooks

Each workbook has a single sheet containing data

You would like to consolidate the sheets into a single workbook


Ensure all the workbooks are in the same directory

Open Excel

Press Alt + F11 to open Microsoft Visual Basic for Applications

Double click on ThisWorkbook on the left side under Project - VBAProject

Paste the code below into the editor

Update the line path = "PATH TO YOUR EXCEL WORKBOOKS" by typing the path to your workbooks inside the quotes

Place your cursor anywhere inside the mergeWorkbooks() subroutine

Press F5 or click the green play button

When complete, you can close Microsoft Visual Basic for Applications

The open workbook will contain your merged data


The comments indicate the variables you can change to adjust the range of columns to copy data from.

The final row is set to 65536 because I was working with workbooks in legacy format - adjust if you are using workbooks with more rows.

removeBlankSheets() is used to remove the original 3 sheets inserted by Excel when a workbook is opened as well as any other blank sheets.  It does so by checking if the first cell is blank.  You may need to adjust/remove this depending on your data.

Sub mergeWorkbooks()
    Dim path As String
    Dim startCol As String
    Dim startCell As Integer
    Dim endCol As String
    ' Path to your workbooks
    path = "PATH TO YOUR EXCEL WORKBOOKS"
    ' Set to Start Column
    startCol = "A"
    ' Set to Start Cell
    startCell = 1
    ' Set to End Column
    endCol = "IV"
    ' Keep a reference to the workbook we want to merge into
    Dim mainBook As Workbook
    Set mainBook = ThisWorkbook
    Dim currentWorkbook As Workbook
    Dim fileSystem As Object
    Dim directory As Object
    Dim files As Object
    Dim file As Object

    Application.ScreenUpdating = False
    Set fileSystem = CreateObject("Scripting.FileSystemObject")

    Set directory = fileSystem.Getfolder(path)
    Set files = directory.files
    For Each file In files
        Set currentWorkbook = Workbooks.Open(file)
        Range(startCol & startCell & ":" & endCol & Range(startCol & "65536").End(xlUp).Row).Copy

        ' Add a worksheet to paste into
        ' Who thought the syntax for after was a good idea?
        mainBook.Worksheets.Add After:=Worksheets(mainBook.Worksheets.Count)

        Range(startCol & "65536").End(xlUp).PasteSpecial

        Application.CutCopyMode = False
        currentWorkbook.Close False
    Next file

    ' Cleanup the sheets
    removeBlankSheets
    renumberSheets
    Application.ScreenUpdating = True
End Sub

Sub removeBlankSheets()
    ' Don't bother us
    Application.DisplayAlerts = False
    Dim sheet As Worksheet
    For Each sheet In Worksheets
        If sheet.Range("A1").Value = "" Then
            sheet.Delete
        End If
    Next sheet
    Application.DisplayAlerts = True
End Sub

Sub renumberSheets()
    ' Don't bother us
    Application.DisplayAlerts = False
    Dim i As Integer
    For i = 1 To ThisWorkbook.Worksheets.Count
        Sheets(i).Name = "Sheet" & i
    Next i
    Application.DisplayAlerts = True
End Sub

You are not a beautiful or unique snowflake.

In which we see that not everyone understands what it is to be unique.

I am currently wrapped up in an EDI implementation, and while I take issue with a great many things in the EDI world, this post will be about just one of the many flaws in a piece of software used to help manage an EDI workflow.

Setting the stage.

Let us talk in somewhat generic, higher level terms so that we do not get bogged down in the complexities of EDI TSETS (transmissions).

Suppose in our database we have a simple parent/child relationship.  For a given parent, you may have one or more children.  Now let us also suppose each child has a unique identifier used to link the child record to records related to it so that we have something like this:

On the EDI side, suppose the TSET has some ‘header’ data that represents the Parent, that there is a group of segments we can loop over that represent the children, and that some of the data for a child is optional.  When the optional child data exists, we would like to insert it into the ChildData table.

Thus far everything seems pretty straight forward and easy.

Enter the EDI software.

As it turns out, the EDI software does not allow you to insert a record and retrieve the Id assigned by the database (at least there is neither an obvious way nor one that does not involve the occult).  Not to worry, the EDI software has a way to generate unique ids that we can use when we insert child records as well as child data records.  This is accomplished by an operator called “Replace With Unique String”.

So we wire everything up, load a test TSET, and tell the software to import it.  Tada!  Several of the children have the same Id.  At this point, frustrated with how soul crushingly difficult everything is with this software, you sigh because you know you have been let down again.

Back in the interface for building the map, you dig up the Help Info for the “Replace With Unique String” operator and find the following:

Oh lovely, you think: it generates unique 12 character strings.  Twelve seems a bit short to be unique in space and time, but hey, we paid good money for this software.  We are sure they know what they are doing.  And it is guaranteed to be unique even if called from separate instances in the same millisecond!

Yes.  Yes they did.  The developers who implemented this software used a timestamp encoded in some special string format as a unique identifier.

And you sigh again because you know the poor developers must have been forced to work on ancient computers where millisecond accuracy is good enough.  Their hardware was not fast enough to loop over items any quicker.  Alas, our server is.  It would seem our server can process several records in a millisecond as is evident by the fact that many records share the same id.  Even more tragic, sometimes if you click the import button at just the right time with the right load on the server, they all get unique ids.

For anyone out there thinking they will solve their unique Id requirement with a timestamp, please stop.  Processors are fast, your computer is fast, many things may happen during the time span measurable by the resolution of your typical Date.getTime() call (or your language’s equivalent).
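To make the failure mode concrete, here is a quick sketch of my own (not from the EDI software) comparing millisecond-resolution timestamp "IDs" with random UUIDs:

```python
import time
import uuid

def timestamp_ids(count):
    # millisecond-resolution timestamps used as "unique" IDs
    return [int(time.time() * 1000) for _ in range(count)]

def uuid_ids(count):
    # randomly generated UUIDs
    return [uuid.uuid4() for _ in range(count)]

ts = timestamp_ids(10000)
us = uuid_ids(10000)

# a fast machine generates many IDs within the same millisecond,
# so duplicates among the timestamps are all but guaranteed
print(len(set(ts)) < len(ts))
# while the random UUIDs remain distinct
print(len(set(us)) == len(us))
```

On any modern machine, the loop produces thousands of timestamps per millisecond, so the first check comes back True almost every run.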


In which we enable WiFi on the Wandboard Dual and Quad.

I have come to the time in a project where my glorious BeagleBone Black is struggling to keep up.  What to do?  Fire up the Wandboard Quad!  As with the BeagleBone Black, I prefer to run Arch Linux Arm. And as before, WiFi does not immediately work (despite WiFi being built into the Wandboard Quad).  Luckily getting WiFi running is much easier than on the BeagleBone Black.

The WiFi chip on the Wandboard Dual and Quad is a Broadcom BCM4329 connected via SDIO and requires both a firmware and nvram in order to work.

First, we will load the nvram.  The command below will fetch the nvram from Freescale's github (Freescale is the maker of the processor used on the Wandboard) and place it in the proper directory.

wget -c https://raw.github.com/Freescale/meta-fsl-arm-extra/master/recipes-bsp/broadcom-nvram-config/files/wandboard/nvram.txt
sudo mv -v nvram.txt /lib/firmware/brcm/brcmfmac-sdio.txt

Next we need to copy the firmware already present in Arch Linux Arm to a generic name.  This is required because the kernel Arch uses for the Wandboard is v3.0.35-3, and for kernels older than v3.13, the SDIO driver used generic firmware names[2].

cp -v /lib/firmware/brcm/brcmfmac4329-sdio.bin /lib/firmware/brcm/brcmfmac-sdio.bin

Now reboot the Wandboard.  When it comes back up, the output of ip link should list wlan0 as an option.

With the WiFi adapter recognized, we can connect to the router.

Install WPA

pacman -S wpa_actiond

Create a base config file

cp -v /etc/netctl/examples/wireless-wpa-configsection /etc/netctl/wireless_wpa_configsection

Generate the required wpa_supplicant config data

wpa_passphrase SSID PASSWORD
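The output will look something like the following (the psk value shown here is a placeholder for the 64 hex characters derived from your actual passphrase):

```
network={
	ssid="SSID"
	#psk="PASSWORD"
	psk=<64 hex characters>
}
```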

Insert the config section into your config file

nano /etc/netctl/wireless_wpa_configsection

Now test out the connection

netctl start wireless_wpa_configsection

Assuming no errors, set WiFi to load at boot

systemctl enable netctl-auto@wlan0.service

Note 1: Loading the firmware and nvram is not necessarily specific to Arch Linux Arm. If your flavor of Linux does not recognize the WiFi adapter, give it a go.

Note 2: Since the adapter is connected using SDIO, it will not show up in the output of lspci -k or lsusb -v

[1] Wandboard
[2] Broadcom Linux drivers

You can see it when you look out your window...

In which we make OpenCV play nice with Anaconda under Windows and roll libjpeg-turbo in along the way.

You may have noticed, if you have been reading my blog, that I like to prototype in Python.  I find Python to be almost as fun as Lisp when it comes to building up a solution without knowing exactly what the solution is going to look like.  The reason I typically start in Python is the wealth of libraries; however, managing a Python install can become a real pain.  Thanks to the people over at Continuum Analytics, you can be spared most of it.

Continuum Analytics produces a fantastic distribution known as Anaconda (or Miniconda depending on the route you choose).  I won't go into all the reasons I think you should give it a try, but if you program in Python, you should check them out.  The only shortcoming I have come across is the Windows build does not include OpenCV, but we will remedy that today.  Along the way, I will also show you how to build OpenCV against libjpeg-turbo for better jpeg performance.

Things you will need:

  1. CMake: Used to generate makefiles/solutions to build OpenCV.
  2. Visual Studio: This article is geared towards VS, but you should be able to follow roughly the same steps for other compilers.
  3. OpenCV: The reason for this excursion.  I downloaded the self extracting Windows package.  You can just grab the source if you like.
  4. libjpeg-turbo: An optimized crossplatform library for working with jpegs.
  5. Anaconda (or Miniconda): My (and soon to be yours) favorite Python distribution.



Make sure your versions match, i.e. if you install 64 bit Anaconda, install the 64 bit version of libjpeg-turbo and build OpenCV for 64 bits.

If you use Miniconda to make your default Python installation Python 3 (as I have done) you will need to make a Python 2 environment.  Please see here.

To the Bat Cave!

  1. Run the self extracting OpenCV archive if necessary, and extract OpenCV to a place you will remember, e.g. C:\OpenCV
  2. Run the installer for libjpeg-turbo
  3. If your default Python install is Python 3
    1. Right click on the cmake/bin directory while holding down Shift
    2. Click "Open command window here"
    3. Activate your Python 2 environment with
      activate ENV_NAME
  4. Launch cmake-gui (from the command prompt in step 3 if necessary)
  5. Set "Where is the source code:" to the path in step 1
  6. Set "Where to build the binaries:" to where you normally build libs, e.g. C:\Libs\OpenCV
  7. Click Configure and wait for it to complete
  8. Check the Advanced checkbox
  9. Locate the option BUILD_JPEG and uncheck it
  10. Click the Add Entry button and add
    • Name: JPEG_LIBRARY
    • Type: FILEPATH
    • Value: path to the static libjpeg-turbo64 e.g. C:/libjpeg-turbo64/lib/jpeg-static.lib
  11. Click the Add Entry button and add
    • Name: JPEG_INCLUDE_DIR
    • Type: PATH
    • Value: path to the libjpeg-turbo64 include e.g. C:/libjpeg-turbo64/include
  12. If you see a message in the Configure output that says PYTHON_INCLUDE_DIR and/or PYTHON_LIBRARY are not found, you will need to set them by hand by clicking in the Value column and entering the path. On my system (Windows 8.1 Pro) these are C:/Users/lemoneer/Miniconda3/envs/python-2/include and C:/Users/lemoneer/Miniconda3/envs/python-2/libs/python27.lib respectively.
  13. Click Configure again and wait for it to complete
  14. Verify all is well in the output of Configure
  15. Click Generate
  16. Open the generated solution file in your build directory from step 6
  17. Change the build to Release
  18. Build the Solution
  19. Expand CMakeTargets in the Solution Explorer
  20. Right click on INSTALL
  21. Click Build
  22. Once the build is complete, update your path to include the OpenCV dlls
  23. You may Clean the Build at the solution level to remove intermediate files you do not need (this won't affect the install).


It is worth noting that there are other build options you may want to consider, such as building the examples.  Enable/disable to your heart's content, then click Configure followed by Generate and Build.

If all goes well, you will have OpenCV dlls you can use from C/C++ as well as from your Python 2 environment.

An elegant weapon, for a more civilized age.

In which we assemble a portable, Emacs based LISP development environment under Windows.  This being good for the soul.

For Carlos, happy birthday.  You will come to understand... they all do.
For Josh, you were right about Lisp.

Before we begin, let me say if you are new to Lisp, you should check out Quicklisp.  Quicklisp provides a convenient way to setup and maintain a Lisp development environment with a large set of libraries.  However, if you would like a portable Lisp development environment and/or would like to have more control over (or a better understanding of) your Emacs based setup, please read on.

What you will need:

  1. The latest version of Emacs for Windows.  Download the zip.
  2. The latest CVS snapshot of SLIME.  Download the tar archive.
  3. One or more of the following Lisp implementations:
  • CCL: Checkout the latest release version from SVN.
  • ECL: You will have to compile from source.
  • CLISP: Download the latest Windows build.

  • ParEdit (optional): helps keep parentheses balanced

And we are off!

  1. Extract Emacs to a location of your choice, e.g. emacs-24.3 (tip: include the version in the directory name so you can tell which version you are running)
  2. Create a directory named 'home' in the root of your emacs directory, e.g. emacs-24.3/home
  3. Create a directory named 'bin' in the root of 'home', e.g. emacs-24.3/home/bin
  4. Deploy Lisp implementation(s)
    • CLISP
      • Extract to a directory in bin, e.g. emacs-24.3/home/bin/clisp-2.49
    • ECL
      • Compile from source and install to a directory in bin, e.g. emacs-24.3/home/bin/ecl-13.5.1
    • CCL
      • Checkout from SVN to a directory in bin, e.g. emacs-24.3/home/bin/ccl
      • Tip: keep CCL up to date between version releases with SVN's update command.
      • Modify ccl/level-1/linux-files.lisp line 939 #+windows-target to set HOME from the 'HOME' environment variable.  This allows us to control where CCL writes files reducing debris and making it portable.
        ;; Matthew Witherwax (lemoneer) 01MAY2013
        ;; Modified the Windows target to use the value of the HOME environment
        ;; variable if it exists; otherwise, it falls back to the OS user database.
        ;; This allows CCL to be run portably under Windows without leaving debris
        ;; behind.
        (or (getenv "HOME")
            (dolist (k '(#||"HOME"||# "USERPROFILE"))
              (with-native-utf-16-cstrs ((key k))
                (let* ((p (#__wgetenv key)))
                  (unless (%null-ptr-p p)
                    (return (get-foreign-namestring p)))))))
  5. Create the file site-start.el in site-lisp in the root of your Emacs directory, e.g. emacs-24.3/, and insert the code below. This allows us to set the ‘HOME’ environment variable when we launch Emacs.
    ; remove etc from the path
        (defvar %~dp0
            (substring data-directory 0
            (- (length data-directory) 4)))
        ; append home/
        (defvar home-dir (concat %~dp0 "home/"))
        (setenv "HOME" home-dir)
  6. Extract the latest SLIME CVS snapshot to site-lisp, e.g. emacs-24.3/site-lisp/slime-2013-12-12
  7. Optional: Copy paredit to /lisp in the root of your Emacs directory, e.g. emacs-24.3/lisp
  8. Launch Emacs by double clicking runemacs.exe in the bin directory of the Emacs directory, e.g. emacs-24.3/bin, and type C-x C-f ~/.emacs to open the Emacs config file.   Insert the following:
    (set-language-environment "utf-8")
    ;;; Add slime
    (add-to-list 'load-path "~/site-lisp/slime-2013-12-12")  ;or wherever you put it
    ;;; CCL Note that if you save a heap image, the character
    ;;; encoding specified on the command line will be preserved,
    ;;; and you won't have to specify the -K utf-8 any more.
    ;; SLIME and CLISP will complain about a missing temp directory without this
    (setq temporary-file-directory (expand-file-name "~/temp"))
    ;; Path to clisp full so we can build the call
    (setq path-clisp (expand-file-name "~/bin/clisp-2.49/full/"))
    ;;; Configure available Lisp
    ;;; Leave out the one(s) you did not install
    (setq slime-lisp-implementations
          `((ccl (,(expand-file-name "~/bin/ccl/wx86cl64.exe") "-K utf-8"))
            (clisp (,(replace-regexp-in-string "@" path-clisp "@lisp.exe")
                    ,(replace-regexp-in-string "@" path-clisp "-B@")
                    ,(replace-regexp-in-string "@" path-clisp "-M@lispinit.mem")))
            (ecl (,(expand-file-name "~/bin/ecl-13.5.1/ecl.exe")))))
    ;;; setup slime
    (require 'slime)
    (setq slime-net-coding-system 'utf-8-unix)
    (slime-setup '(slime-fancy))
    ;;; setup paredit if you installed it
    (autoload 'paredit-mode "paredit"
      "Minor mode for pseudo-structurally editing Lisp code." t)
    (add-hook 'emacs-lisp-mode-hook       (lambda () (paredit-mode +1)))
    (add-hook 'lisp-mode-hook             (lambda () (paredit-mode +1)))
    (add-hook 'lisp-interaction-mode-hook (lambda () (paredit-mode +1)))
    (add-hook 'scheme-mode-hook           (lambda () (paredit-mode +1)))
    ;;; This is disabled to stop paredit from running in the REPL
    ;;; Feel free to uncomment this to turn it back on
    ;;;; (add-hook 'slime-repl-mode-hook (lambda () (paredit-mode +1)))
    ;;;; ;; Stop SLIME's REPL from grabbing DEL,
    ;;;; ;; which is annoying when backspacing over a '('
    ;;;; (defun override-slime-repl-bindings-with-paredit ()
    ;;;; (define-key slime-repl-mode-map
    ;;;; 	(read-kbd-macro paredit-backward-delete-key) nil))
    ;;;; (add-hook 'slime-repl-mode-hook 'override-slime-repl-bindings-with-paredit)
    ;;; bonus: split the screen into two side by side windows
    (defun 2-windows-vertical-to-horizontal ()
      (let ((buffers (mapcar 'window-buffer (window-list))))
        (when (and
               (= (length (window-list)) 2) ; only split if there are 2 windows
               (window-combined-p)) ; and they are vertical
          (delete-other-windows)
          (set-window-buffer (split-window-horizontally) (cadr buffers)))))
    (add-hook 'emacs-startup-hook '2-windows-vertical-to-horizontal)
    (when (display-graphic-p)
      (add-to-list 'default-frame-alist '(left . 0))
      (add-to-list 'default-frame-alist '(top . 0))
      ; maximize the window on Windows
      (w32-send-sys-command 61488))
  9. Create a batch file called runslime.bat and insert the following command.  This will allow us to launch Emacs with slime and your default Lisp running.
    C:\emacs-24.3\bin\runemacs.exe --eval="(slime)"

If you followed all the steps, when you run runslime.bat, you should see something like this

Take this REPL, brother, and may it serve you well.

‘Cause I got something for you. It is shiny, it is clean.

In which we use OpenCV to track a laser.

Building on my article about using HSV thresholding to isolate features in video frames, below is a video showing the use of OpenCV to track a red laser.  The majority of the algorithm is the same as presented in the previous article; the only addition is actually determining the area of interest after thresholding.  To avoid spoiling anyone's fun, I am not going to post the code as of now.

Hint: think about how to find the largest area in white after thresholding.

If you would like to compare algorithms or just want to know what I did, drop me a note via the contact form.

In preliminary testing, the algorithm seems robust enough to handle changing lighting conditions as well as differing background colors; however, the angle of reflection of the laser can cause issues.  For best results, the laser should be relatively perpendicular to the image plane.  As the laser approaches being parallel to the image plane, less laser light is reflected back to the camera.  The same applies to irregular objects or surfaces that scatter the laser at angles away from the camera.  This can be seen at the end of the video when the laser is reflected off the lamp shade.

Testing was done using a Logitech C920.  Depending on the laser used and ambient lighting conditions, turning off auto white balance may prevent the intensity of the laser from being interpreted as white.

As far as applications, I leave that to the reader.

Come together, right now...

In which we join files line by line.

Working in IT, there is no shortage of "grunt work" that is amenable to one off scripts or applications.  In an effort to prevent others (and possibly myself) from having to reimplement these, I am publishing these as useLESS tools.

useLESS tools: scripts or programs written to paper over the shortcomings or deficiencies in applications or processes.  You could use less, and they would be made useless, if you could reimplement everything in light of the experience gained by doing it the last time.

Today I give you our first useLESS tool - fjoin

At work I am currently involved in integrating with an external vendor.  As part of the process, I have been provided with pseudo realistic test data to process.  Being pseudo realistic, it doesn't really match what our system expects and must be "massaged".  That is to say, nothing works.

In particular, I have two text files.  One text file has the IDs used by the vendor, and the other has the IDs our system expects.  I need a way to turn these into SQL update statements, and it needs to be repeatable so I can rerun it as testing progresses.  What to do?

Using Notepad++ or your favorite editor, it is fairly easy to record a macro that inserts


before each line of one file and


at the beginning of each line in the other so that if you took line x from the first file and line x from the second file and joined them, you would get something like


The real trick is merging the two files line by line.  fjoin does just this.  It takes two or more files and joins them line by line stopping at the end of the shortest file.  The result is sent to the terminal so you can pipe this into another program or redirect to a file.

If you would like to create an executable from this code, you can use pyinstaller

pyinstaller -F fjoin.py

This will create a single (rather large) executable for you.

import sys

if len(sys.argv) == 1:
    print 'No files specified'
    sys.exit(1)

files = []
# skip the name of the program
for arg in sys.argv[1:]:
    try:
        files.append(open(arg, 'r'))
    except IOError:
        print 'ERROR:', arg, 'does not exist'
        sys.exit(1)

reading = True
composite = []

while reading:
    for f in files:
        line = f.readline()
        # stop as soon as a file comes up short
        if not line:
            reading = False
            break
        # remove newline
        composite.append(line.rstrip('\n'))
    # only output if we joined across all files
    # in other words, no partial lines
    if len(composite) == len(files):
        print ''.join(composite)
    composite = []

for f in files:
    f.close()

fjoin.py (2.36 kb)