3D Scanning - Digitizing on The Cheap (2 of 2)

December 27, 2014

 

Intro & Recap

In attempt 1 I reconstructed a 3-dimensional digital model of a plastic skull from a series of webcam images. Since my optimization criterion was cost, I wrote custom computer vision scripts in Python and used a variety of tools/materials I had lying around my apartment. These included a Black & Decker laser/level, a Logitech webcam, a guitar slide, a piece of paper, and some patience.

In attempt 2 I'll be scanning the same skull, exploiting the same basic principles/mathematics, but with slightly modified code and scanning hardware. To complete the experiment, I'll also be printing the resulting scan on an XYZ Da Vinci 1.0 3D printer. If you'd like to follow along at home, I'll be including a link at the bottom of this blog to download all the digital resources required to duplicate this work. The effort required to compile/organize/describe these resources was almost equal to the effort required to design/actualize the experiment in the first place. For this reason, I'm requesting that you make a financial contribution to download the files. Don't worry, you cheap bastards! The minimum price is only 50 cents (I think). I prefer this over a fixed-cost model in that you can enter whatever amount you personally feel is fair and affordable.

For brevity, I'm assuming that the reader has an understanding of a few basic concepts -- namely,

  • downloading/installing Python modules
  • properly reading data sheets
  • applying Ohm's law
  • navigating a Linux environment

The internet is flooded with resources on these topics. Having said that, please let me know if any of them give you trouble. If you feel I skipped something crucial, just email me with the Get In Touch button above.

Attempt 2

Before delving into the details, let's take a look at the results:

A: The resulting mesh. Using computer vision algorithms, a point cloud was reconstructed from the scanned images. MeshLab was used to clean noise artifacts from the scan, apply Poisson disk subsampling, and perform ball-pivoting mesh generation. Laplacian smoothing was also applied to fill missing areas (such as the deepest parts of the orbits, which weren't reached by the laser).

B: The mesh (A) was then split into halves and imported into XYZWare -- this is freeware that accompanies the XYZ line of 3D printers. No editing was done in XYZWare, only slicing/scaling/positioning.

C: The two halves printed on the XYZ Da Vinci 1.0 (30% infill density, 100 μm layer thickness).

D: The scaled/printed skull (right) sits next to its scanned master (left). The replication succeeded with fairly high fidelity! Notable features which were preserved include the suture along the side of the skull (under the parietal bone?), the ridge between the eyes, and the separation of each individual tooth.

 

Background

The fundamental principles of operation in "attempt 2" are the same as those in "attempt 1." Both methods require that a line laser and camera be positioned at a known distance/angle from one another and aimed at the longitudinal/vertical axis of the object to be scanned. The object then rotates about this axis as the camera grabs frames and saves them to disk. The frames are then consumed by a computer vision script (written in Python), which constructs a point-cloud representation of the object based upon how the laser "falls" on its surface.
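To make the geometry a little more concrete, here's a simplified, orthographic sketch of how one frame's laser pixels could be triangulated into 3D points. This is only an illustration of the principle -- the function name, scale factor, and exact formulation are mine and don't necessarily match what the post-processing script does internally.

    import numpy as np

    def frame_to_points(laser_cols, rows, axis_col, phi_deg, theta_deg, scale=1.0):
        """Triangulate one frame's laser pixels into 3D points (orthographic
        approximation). laser_cols[i] is the image column where the laser line
        was detected in image row rows[i]; axis_col is the column of the
        rotation axis; phi_deg is the object's rotation angle for this frame;
        theta_deg is the camera-to-laser-plane angle."""
        theta = np.radians(theta_deg)
        phi = np.radians(phi_deg)
        # Horizontal pixel offset of the laser line from the rotation axis.
        dx = (np.asarray(laser_cols, dtype=float) - axis_col) * scale
        # Distance from the rotation axis, measured within the laser plane.
        r = dx / np.sin(theta)
        # Spin that radius into the object's coordinate frame; image row = height.
        x = r * np.cos(phi)
        y = r * np.sin(phi)
        z = np.asarray(rows, dtype=float) * scale
        return np.column_stack((x, y, z))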

The differences between attempts 1 and 2 are mostly in how I rotate the object being scanned. In attempt 1, I manually rotated the object -- 1 degree at a time -- and clicked a "photograph" button after each rotation. This was tedious, but very low-cost. In attempt 2 I use an embedded Linux platform to drive a stepper motor in 0.225 degree increments, while simultaneously grabbing/saving frames from the camera. This allows a much higher angular resolution (1600 images per revolution). It also allows me to click "scan" and walk away.
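As a quick sanity check on that angular resolution:

    >>> native_step = 1.8             # degrees per full step for this motor
    >>> native_step / 8               # EasyDriver 1/8 microstep mode
    0.225
    >>> round(360 / 0.225)            # frames captured per full revolution
    1600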

 

Hardware

The slideshow/captions below briefly describe the hardware used to automate the scan process.

A: BeagleBone Black Rev C (4G) Single Board Computer Development Board -- embedded Linux development platform. This computer (which fits in an Altoids box) can boot into a fully functional Linux environment in under 10 seconds. Its functionality/usefulness has been covered by bloggers the world over, so I won't describe it thoroughly here. If you're new to embedded Linux, or to this particular product, I highly recommend browsing Dr. Derek Molloy's notes/videos on the topic -- http://derekmolloy.ie/tag/beaglebone-black/.

I wrote/executed Python code on this device to control the stepper motor/camera. The scripts can be accessed/controlled over wifi, which makes development simple. There are resources available online which describe how to configure an Eclipse environment for cross-compiling to embedded Linux platforms through an SSH connection. If you're new to embedded Linux/programming, this may be overwhelming; however, I think it's worth the effort.

B: EasyDriver Stepper Motor Driver -- this convenient little piece of hardware allows the embedded Linux device (A) to communicate with the stepper motor (C) through the 3.3 V logic native to the BeagleBone's (A) GPIO pins. It also provides microstep modes which let you gear down the minimum step size of the motor (C) to as little as 1/8 of its native resolution.

C: RioRand 1.8° 39mm Hybrid Stepper Motor NEMA16 (JK39HY34-0404) -- stepper motors are convenient in that the error terms associated with each angular step do not accumulate across steps. That is, they allow high-accuracy discrimination of angular position. The motor pictured natively supports 200 steps per revolution, yielding 1.8 degree step sizes. With the EasyDriver (B), a 1/8 microstep mode can be enabled which gears the step increments down to (1/8) × 1.8 degrees, or 0.225 degrees.

D: Black & Decker BDL100AV All-In-One SureGrip Laser Level -- the line laser used here is a Black & Decker SureGrip laser level. I picked this up for $10 U.S. on clearance at Lowe's. I believe they typically retail for around $35 U.S. In place of this device, a simpler line laser can be used; these are available online from various retailers for less than $15 U.S.

E: Logitech HD Pro Webcam C920, 1080p Widescreen Video Calling and Recording -- I'm using a Logitech C920. It's not required that you use such a high-quality camera. In fact, I'm compressing each frame as it's grabbed, so I'm not fully utilizing the features that make this camera expensive. A quick Google search will yield a list of HD webcams that have been confirmed to work with the BeagleBone.

F: Stage -- there are a variety of methods with which you could attach a stage to the stepper motor shaft. Googling "stepper motor shaft coupler" will point you in the right direction. If you'd like to 3D print a stage (as shown in the image above), you can download the STL file at the bottom of this blog. I designed this file in SketchUp, specifically to fit the keyed shaft on the pictured motor -- it may require adjustments for your use.

The arrangement of hardware components. The embedded Linux device, stepper motor driver, and stepper motor are placed in an upside-down box lid, and the camera/laser are resting on a stack of textbooks. Choose a camera/laser height which allows the laser to best fill the cavities of the object being scanned. θ (theta) defines the angle between the center of the camera's field of vision and the plane of the laser.
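To give a concrete sense of how the BeagleBone (A) drives the EasyDriver (B), here is a minimal sketch using the Adafruit_BBIO library. The pin assignments are arbitrary choices of mine for illustration -- wire yours however you like and adjust accordingly, and double-check the MS1/MS2 microstep settings against the EasyDriver documentation.

    import time
    import Adafruit_BBIO.GPIO as GPIO

    # Hypothetical wiring -- substitute whichever header pins you actually use.
    STEP, DIR, MS1, MS2 = "P9_15", "P9_12", "P9_23", "P9_25"

    for pin in (STEP, DIR, MS1, MS2):
        GPIO.setup(pin, GPIO.OUT)

    # MS1/MS2 select the microstep mode; driving both high selects 1/8
    # microstepping on the EasyDriver (verify against its documentation).
    GPIO.output(MS1, GPIO.HIGH)
    GPIO.output(MS2, GPIO.HIGH)
    GPIO.output(DIR, GPIO.LOW)          # pick a rotation direction

    def step_once(delay=0.002):
        """Pulse the STEP line once; each pulse advances one (micro)step."""
        GPIO.output(STEP, GPIO.HIGH)
        time.sleep(delay)
        GPIO.output(STEP, GPIO.LOW)
        time.sleep(delay)

    for _ in range(1600):               # 1600 x 0.225 degrees = one revolution
        step_once()

    GPIO.cleanup()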

Software

I wrote two scripts to facilitate one scan. The first script is executed on the BeagleBone and does the physical heavy lifting -- rotating the object and collecting the images. The second script contains the machine vision algorithms and does all the computational heavy lifting -- generating a single point cloud from a directory of images. I chose to segment the process into two scripts for modularity.

If you'd like to collect a directory of images in your own fashion (e.g., with a different embedded linux device, or by manually rotating/photographing), you can still use the second script to do the post processing. It only requires that the images are stored in a single directory and are numbered 0.png, 1.png, ..., 500.png, etc.
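If you're assembling the directory yourself, a quick standalone check (separate from both scripts) that the files load and sort numerically rather than alphabetically might look like this:

    import glob
    import os
    import cv2

    img_dir = "imgs"    # directory containing 0.png, 1.png, 2.png, ...
    paths = sorted(glob.glob(os.path.join(img_dir, "*.png")),
                   key=lambda p: int(os.path.splitext(os.path.basename(p))[0]))

    frames = [cv2.imread(p) for p in paths]
    print("Loaded %d frames" % len(frames))
    if frames:
        print("First frame shape:", frames[0].shape)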

Script 1 (executed on the BeagleBone Black): automation.py

This script requires three non-standard Python libraries:

  • Adafruit_BBIO
  • cv2 (OpenCV)
  • numpy

Upon execution, two objects are instantiated (motor and camera) whose methods can be called directly or indirectly. Scroll through the source code and read the comments to get an idea of how these are used. After loading modules and instantiating the camera/motor objects, a function called scan_steps() is called, which initializes the scan process with default settings (taking photos at 1/4 the stepper motor's native step size and saving them to an '/imgs' directory). Since the stepper motor pictured has a native step size of 1.8 degrees, the default settings yield 0.45 degree increments. If your camera is properly detected, the output should look something like this:

[Screenshot: terminal output from automation.py]

The "Invalid argument" statements can be ignored -- I haven't looked into these thoroughly, but they're benign and don't influence the functionality of the script. Each time "Rotating. . ." is printed, a new image is being saved in the '/imgs' directory which resides in the same directory as automation.py:

[Screenshot: the /imgs directory populated with numbered frames]
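For orientation, here is a stripped-down sketch of the rotate-and-grab loop at the heart of automation.py. The real script wraps this logic in the motor and camera objects described above and exposes more options; treat the pin name and step count here as illustrative assumptions.

    import os
    import time
    import cv2
    import Adafruit_BBIO.GPIO as GPIO

    STEP = "P9_15"                       # hypothetical STEP pin wiring
    GPIO.setup(STEP, GPIO.OUT)

    cap = cv2.VideoCapture(0)            # the webcam typically enumerates as /dev/video0
    if not os.path.isdir("imgs"):
        os.makedirs("imgs")

    steps_per_rev = 1600                 # 0.225 degree increments (800 for the 0.45 degree default)
    for i in range(steps_per_rev):
        ok, frame = cap.read()           # grab a frame at the current angle
        if ok:
            cv2.imwrite("imgs/%d.png" % i, frame)
        print("Rotating. . .")
        GPIO.output(STEP, GPIO.HIGH)     # one pulse = one microstep
        time.sleep(0.002)
        GPIO.output(STEP, GPIO.LOW)
        time.sleep(0.002)

    cap.release()
    GPIO.cleanup()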

Upon completion of a scan, you can transfer your '/imgs' directory to your Windows/Linux/Mac machine for post-processing. One simple way of transferring from the BBB to your PC is via sftp:
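For example, assuming the stock Debian image's default 'debian' user and the usual USB-network address of 192.168.7.2 (substitute your own username/address), from your PC:

    sftp debian@192.168.7.2
    sftp> get -r imgs
    sftp> exit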

Script 2 (executed on your workstation): post_processing.py

Execute post_processing.py in a directory of your choosing. It will open a graphical interface that should look something like the image below. It contains two axes which display the "raw images" and "reconstruction images," respectively, as they're being processed. Settings can be configured in the left-most pane. Upon first execution they will be populated with default values. At a minimum, you'll need to change the two settings labeled A and B below. Theta (A) is the angle between the camera's center of view and the laser plane, in degrees. Step size (B) is the rotational increment between individual photographs, also in degrees. If you're using the manual rotate/photograph method, you may want to choose 1 degree increments, resulting in 360 frames.

Step 1: Load image directory -- you should be prompted with a dialog box with which you can select a directory of scanned images. Again, these images should be ordered/named as 0.png, 1.png, 2.png, etc. (jpg is also supported). You should see something like "Imported X images" appear in the message window if the files were successfully consumed by the script.

Step 2: Verify images -- click the "Flipbook" button to scan through the loaded images. This is important to ensure the images are properly loaded and in the correct order. It also allows you to select appropriate upper and lower z cutoff values. Everything outside these bounds will be ignored during post-processing, which increases processing speed and reduces noise artifacts.

[Animation: flipbook preview of the loaded images]

Once you're satisfied with your upper and lower z-thresholds, and you've confirmed that your images are properly loaded and in the correct order, click "Begin." This will step through each frame and extract point cloud data. The data is written to a file titled point_cloud.txt in the same directory as post_processing.py. To view point_cloud.txt, I recommend downloading a free program called MeshLab.

http://meshlab.sourceforge.net/
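Before opening it in MeshLab, you can sanity-check the exported cloud with a couple of lines of numpy. This assumes point_cloud.txt is a plain whitespace-separated list of x y z rows, which is the sort of ASCII format MeshLab's import dialog handles:

    import numpy as np

    # Quick sanity check of the exported cloud before viewing it in MeshLab.
    points = np.loadtxt("point_cloud.txt")
    print("Loaded %d points" % len(points))
    print("Bounding box min:", points.min(axis=0))
    print("Bounding box max:", points.max(axis=0))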

Upon opening MeshLab, press Ctrl+I to import a mesh. Navigate to the directory containing point_cloud.txt and open it.

Here, you should be prompted with "Pre-Open Options." The default settings should work just fine, but to be safe, ensure that they look like the image below:

Click "OK and you should be met with something like this (possibly buried in noise, depending on how conservative you were with the upper and lower z-limits):

There is much to be written about how to turn this point cloud into a printable model. I'm not very experienced in MeshLab, so I won't comment on best practices. However, as a point of reference, it took me about 30 minutes to turn the above point cloud into the below mesh, primarily with the use of these tools:

  • surface reconstruction: ball pivoting
  • surface reconstruction: Poisson
  • surface reconstruction: VCG
  • Poisson disk subsampling

These are all accessible in the "Filters" menu.

[Image: the cleaned-up mesh]

Not bad!

With the added hardware, I've now upped my investment by about $80 U.S., but that's really a tiny amount of money given the usefulness of this setup. I have a few ideas on how to improve this design further, but for now I think I'll call it a day. Whether you're interested in building a satellite, designing an advanced prosthetic, or just scanning stupid Halloween decorations, the ability to scan/modify/print real-life objects is a great skill set to have in your wheelhouse. I hope this post motivates some of you to do something interesting.

Happy researching,

Will

Download Instructions

Available for download is an archive with the software/scripts/notes mentioned above. The file size is quite large, because I've included a many-frame example scan that you can run through the post-processing GUI for debug/testing purposes. This code is licensed under GPL. For commercial/multi-licensing inquiries, shoot me an email.

To download, enter a donation below (I'm not a registered 501(c) organization, so donations are not tax deductible). The form will take you to PayPal's website to complete your donation. Upon completion, you will receive an email with a download link and instructions.