Stationary 3D scanner using a standard camera

In summary, a stationary 3D scanner can be made more cheaply by pairing a robotic arm with a standard camera instead of using an expensive dedicated scanner.
  • #1
I know there are some stationary 3D scanners like this, but they are really expensive.

When I saw a robotic arm being used as an addition to a handheld 3D scanner, I had the idea that the same could be done with a simple camera and photogrammetry. I found a nice article about photogrammetry on the same site as the robot. It seems to be the cheapest way, and I already have a good camera.

The idea is to use a motorized turntable to rotate the object, and a robotic arm to move the camera up and down, so it can capture all sides of the object. Ideally the photos would automatically be sent to dedicated software, which then builds the 3D model.
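A minimal sketch of the capture sequencing described above. The angle count and camera heights are placeholder assumptions; actually triggering the turntable and camera is hardware-specific (motor controller, camera SDK) and is not shown here:

```python
def capture_plan(n_angles=24, heights_mm=(50, 150, 250)):
    """Enumerate (turntable angle in degrees, camera height in mm) poses
    so every side of the object is photographed with generous overlap."""
    step = 360.0 / n_angles
    return [(round(i * step, 1), h) for h in heights_mm for i in range(n_angles)]

plan = capture_plan()
print(len(plan))   # 72 shots: 24 angles x 3 heights
print(plan[:2])    # [(0.0, 50), (15.0, 50)]
```

Each pose would be visited in turn: step the turntable, wait for it to settle, fire the shutter, then hand the whole photo set to the photogrammetry software in one batch.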

I'd like to build this project over the next 3-4 months, and I would document the whole process here if you guys are interested. Being able to share my work with other people and get feedback motivates me.
 
  • #2
I would think the robotic hand would not be cheap(?).
If cost is an issue, the biggest bargains are cameras by far. I have built real-time triangulating systems using multiple cameras to allow real-time aperture synthesis of ultrasound. Were I to contemplate such a low-cost system, I would use multiple rigidly fixed cameras that can accurately self-locate using a dedicated target (or targets) before analysing the test object.
Of course the software is then entirely home-brew and not trivial. But fun and challenging.
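The core geometric step such a multi-camera system relies on can be sketched: once each camera is calibrated, a matched feature gives one 3D ray per camera, and the point is recovered as the midpoint of the shortest segment between the rays. All coordinates below are made-up illustration values:

```python
def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3D rays o + t*d,
    the classic two-view triangulation of a matched feature."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # ~0 would mean the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t1))
    p2 = add(o2, scale(d2, t2))
    return scale(add(p1, p2), 0.5)

# Two cameras whose rays both pass through the point (1, 1, 0):
print(triangulate([0, 0, 0], [1, 1, 0], [2, 0, 0], [-1, 1, 0]))  # [1.0, 1.0, 0.0]
```

With noisy real rays the two closest points no longer coincide, and the length of that shortest segment is a handy per-point quality measure.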
 
  • #3
Surface texture could confound the analysis. It would be good if the surface being analysed could carry a random pattern that was independent of the view angle.

Surface resolution will depend on the pitch of the surface stipple pattern: too fine, and it will be sub-pixel and grey; too coarse, and the surface will be high-contrast, rough and gritty.

There is a need to rationalise swarms of data points into simple geometric surfaces. I would want algorithms that extract geometric surfaces from 3D data files: classify points as members of different surfaces, then identify the intersections or boundaries of those surfaces. Some points could be rejected as noisy outliers.
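One standard way to do exactly this classify-and-reject step is RANSAC: repeatedly fit a plane to three random points and keep the plane that explains the most points; everything left over is either another surface or a noisy outlier. A minimal sketch on synthetic data:

```python
import random

def fit_plane(p, q, r):
    """Plane through three points: unit normal n and offset d with n.x = d."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    n = [c / norm for c in n]
    return n, sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, tol=0.05, iters=200, seed=0):
    """Keep the plane (from random 3-point samples) with the most inliers."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        try:
            n, d = fit_plane(*rng.sample(points, 3))
        except ZeroDivisionError:   # degenerate (collinear) sample
            continue
        inliers = [pt for pt in points
                   if abs(sum(n[i] * pt[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# 20 points on the plane z = 0 plus 3 obvious outliers:
pts = [[x * 0.1, y * 0.1, 0.0] for x in range(5) for y in range(4)]
pts += [[0.2, 0.2, 3.0], [0.1, 0.3, -2.0], [0.4, 0.1, 5.0]]
print(len(ransac_plane(pts)))  # 20 inliers recovered, 3 outliers rejected
```

Running it again on the rejected remainder peels off the next surface, which is the usual way a scene is decomposed into several geometric primitives.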

It would be good with multiple fixed cameras, if the other camera positions could be in shot.

Maybe consider a fixed camera with a movable mirror. The mirror position and the reflected image contain the required information. What if the camera and mirror moved independently on a precise circular track around the object?
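The mirror idea works because a camera viewing the object via a plane mirror is equivalent to a "virtual" camera at the camera's reflection across the mirror plane, so a known mirror pose plus the reflected image supplies a second viewpoint for free. A sketch of that one reflection step, with made-up coordinates:

```python
def reflect_point(p, n, d):
    """Reflect point p across the plane n.x = d (n must be unit length).
    Reflecting the real camera's position gives the virtual camera."""
    dist = sum(n[i] * p[i] for i in range(3)) - d   # signed distance to plane
    return [p[i] - 2 * dist * n[i] for i in range(3)]

# Mirror is the plane x = 1 (unit normal (1,0,0)); camera at the origin:
print(reflect_point([0, 0, 0], [1, 0, 0], 1.0))  # [2.0, 0.0, 0.0]
```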
 
  • #4
Tangential: Windows 10 struggles to distinguish between multiple 'identical' USB web-cams. It doesn't auto-number them as (1), (2) as they connect; it just calls each one 'WEBCAM'. If you dig deep enough into the system settings, you'll find each has a different port/address, but there's no obvious way to allocate or map a name to each. Which makes assembling a 'surround' rig with, say, 4-6+ budget web-cams via a powered USB hub just another exasperating step harder...

FWIW, has anyone managed to get a USB web-cam working with a 'budget' print server? A PC can see thumb-drives, printers etc. in the networked widget's USB port, but not a web-cam. Or, um, a USB GPS dongle...
 
  • #5
> It would be good with multiple fixed cameras, if the other camera positions could be in shot.
One of the nice things about fixed cameras and a known calibration target is that the camera positions need to be stable but not at all precise, so the mounts can be very crude (but solid). Only the calibrator needs to be true. The method of using one camera to look at another camera is also slightly complicated by needing precise characterisation of the optic axis and frame direction (meaning camera orientation) for every camera.
> Tangential, Windows 10 struggles to distinguish between multiple 'identical' USB web-cams. Doesn't auto-number as (1), (2) as they connect, just calls each 'WEBCAM'. If you dig deep enough into the 'system' etc, you'll find each has a different port / address, but there's no obvious way to allocate / map a 'name' to each.
This can (I think) be done using the camera-locating target. Put little Scotchlite retroreflectors on the target and an LED on each camera. The target will show up very bright only in the camera whose LED is lit, so the port correspondence would be part of set-up/calibration and trivial. Just a bit more software. Of course, I never write the software.
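The port-mapping step itself is then just bookkeeping: light each camera's LED in turn, measure the retroreflector brightness every port reports, and pair each LED with the port where it flared brightest. The readings below are fabricated test data, not real measurements:

```python
def map_ports_to_cameras(brightness):
    """brightness[led][port] = target brightness seen at that port while
    only `led` is lit. The retroreflector bounces light back toward its
    source, so each LED shows up brightest in its own camera's image."""
    mapping = {}
    for led, row in enumerate(brightness):
        mapping[led] = max(range(len(row)), key=lambda p: row[p])
    return mapping

# Fabricated readings for 3 LEDs x 3 ports:
readings = [
    [10, 240, 12],   # LED 0 flares in port 1
    [8,  15, 230],   # LED 1 flares in port 2
    [250, 9,  11],   # LED 2 flares in port 0
]
print(map_ports_to_cameras(readings))  # {0: 1, 1: 2, 2: 0}
```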
Or one could also "roll your own" control board for the cameras.
 
