Stationary 3D scanner using a standard camera


Discussion Overview

The discussion revolves around the concept of creating a stationary 3D scanner using a standard camera and photogrammetry techniques. Participants explore various approaches to building such a system, including the use of robotic components and multiple cameras, while considering cost-effectiveness and technical challenges.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant suggests using a mechanized turntable and a robotic hand to capture images from different angles, proposing to automate the process of creating a 3D model from the photos.
  • Another participant questions the cost-effectiveness of a robotic hand, suggesting that multiple fixed cameras could be a more affordable solution, while also noting the complexity of developing custom software for the system.
  • Concerns are raised about how surface texture might affect the analysis, with a suggestion that a random pattern on the surface could help mitigate issues related to view angle dependency.
  • There is a discussion about the need for algorithms to process 3D data files, classifying points into different surfaces and identifying boundaries, with a focus on managing data points effectively.
  • One participant proposes the idea of using a fixed camera with a movable mirror to capture images, suggesting that this could provide the necessary information while simplifying the setup.
  • Technical challenges related to using multiple identical USB webcams are highlighted, particularly regarding Windows 10's difficulty in distinguishing between them, complicating the assembly of a multi-camera rig.
  • Another participant reiterates the advantages of using fixed cameras with a known calibration target, emphasizing that the camera mounts do not need to be precisely aligned as long as the calibration is accurate.
  • There is a suggestion to enhance camera identification by using retroreflectors and LEDs on the calibration target, which could simplify the setup and calibration process.

Areas of Agreement / Disagreement

Participants express a mix of ideas and approaches, with no clear consensus on the best method for creating a stationary 3D scanner. Various competing views on the use of robotic hands versus multiple fixed cameras, as well as differing opinions on technical challenges, remain unresolved.

Contextual Notes

Participants note limitations related to software development, camera identification issues, and the need for precise calibration in multi-camera setups. These factors contribute to the complexity of building an effective stationary 3D scanning system.

Nikkie
I know there are some stationary 3D scanners like this, but they are really expensive.

When I saw an add-on for a handheld 3D scanner - a robotic hand - I came up with the idea that the same can be done with a simple camera and photogrammetry. I found a nice article about photogrammetry on the same site as that robot. It seems to be the cheapest way, and I already have a good camera.

The idea is to use a mechanized turntable to rotate the object and a robo-hand to move the camera up and down, so it can capture all sides of the object. Ideally, the photos should automatically be sent to special software, which then makes the 3D model.

I'd like to do this project over the next 3-4 months, and I would describe the whole process here if you guys are interested. It motivates me when I can share my work with other people and get feedback.
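As a rough capture-count estimate for the turntable plan (the angular step is a made-up assumption; photogrammetry guides commonly suggest roughly 10-15 degrees between consecutive shots):

```python
import math

def turntable_steps(step_deg):
    """Number of shots for one full turntable revolution at a given angular step."""
    return math.ceil(360 / step_deg)

# Hypothetical rule-of-thumb step sizes:
shots_fine = turntable_steps(10)    # 36 shots per ring
shots_coarse = turntable_steps(15)  # 24 shots per ring

# With, say, 3 camera heights from the robo-hand, the total photo count is:
total = 3 * shots_fine  # 108 photos
print(shots_fine, shots_coarse, total)
```

So even a modest rig ends up with on the order of a hundred photos per object, which is why automating the capture-and-upload step matters.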
 
I would think the robotic hand would not be cheap(?).
If cost is an issue, the biggest bargains are cameras by far. I have built real-time triangulating systems using multiple cameras to allow real-time aperture synthesis of ultrasound. Were I to contemplate such a low-cost system, I would use multiple rigidly fixed cameras that can accurately self-locate using a dedicated target (or targets) before analysing the test object.
Of course, the software is then entirely home-brew and not trivial. But fun and challenging.
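The self-locating fixed-camera idea comes down to standard multi-view geometry: once each camera's projection matrix is known from the calibration target, any point seen by two cameras can be triangulated. A minimal numpy sketch of linear (DLT) triangulation, with toy projection matrices standing in for calibrated ones:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two pixel observations.

    P1, P2 : 3x4 camera projection matrices (from calibration).
    x1, x2 : (u, v) image coordinates of the same point in each camera.
    Returns the 3D point in the calibration frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy example: two axis-aligned unit-focal cameras, the second shifted 1 unit in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # close to [0.3, -0.2, 4.0]
```

Real code would undistort the pixel coordinates first and refine the linear estimate nonlinearly, but the core of "self-locate, then measure" is no more than this.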
 
Surface texture could confound analysis. It would be good if the surface being analysed could have a random pattern that was independent of the view angle.

Surface resolution will depend on the pitch of the surface stipple pattern: too fine, and it will be sub-pixel and grey; too coarse, and the surface will be high-contrast, rough and gritty.
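To put numbers on that pitch trade-off, here is a back-of-envelope sketch; the focal length, pixel pitch, and working distance are made-up values, not from the thread:

```python
# Ground sample distance (GSD): the size of one pixel projected onto the object.
# Hypothetical setup: 24 mm lens, 4.0 um pixel pitch, object 400 mm away.
focal_mm = 24.0
pixel_um = 4.0
distance_mm = 400.0

gsd_mm = (pixel_um / 1000.0) * distance_mm / focal_mm  # ~0.067 mm per pixel

# Aim for a stipple pitch of roughly 3-5 pixels so features are neither
# sub-pixel grey nor coarse and gritty.
stipple_min_mm = 3 * gsd_mm
stipple_max_mm = 5 * gsd_mm
print(gsd_mm, stipple_min_mm, stipple_max_mm)
```

For this hypothetical rig, a random speckle with features around 0.2-0.3 mm would sit in the usable band.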

There is a need to rationalise swarms of data points to simple geometric surfaces. I would want algorithms to extract geometric surfaces from 3D data files. The algorithm would classify points as members of different surfaces. Then identify the intersections or boundaries of those surfaces. Some points could be rejected as noisy outliers.
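A common way to classify points as members of different surfaces is RANSAC: repeatedly fit a plane to three random points, keep the fit with the most inliers, then peel those off and repeat for the next surface. A minimal numpy sketch on a synthetic cloud (illustrative only; the tolerance and iteration count would need tuning for real scan data):

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, rng=None):
    """Fit one dominant plane to a point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    Repeated calls on the remaining outliers peel off further surfaces.
    """
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best = (None, None)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers

# Toy cloud: 300 points on the plane z = 0, plus 30 noisy points above it.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(-1, 1, (300, 2)), np.zeros(300)])
outliers = rng.uniform(0.1, 1.0, (30, 3))
cloud = np.vstack([plane_pts, outliers])
n, d, mask = ransac_plane(cloud)
print(mask.sum())  # the 300 plane points are classified as inliers
```

Boundaries between surfaces then fall out as the intersection lines of the fitted planes, and the leftover points that join no surface are the noisy outliers to reject.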

It would be good with multiple fixed cameras, if the other camera positions could be in shot.

Maybe consider a fixed camera with a movable mirror. The mirror position and the reflected image contain the required information. What if the camera and mirror moved independently on a precise circular track around the object?
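Geometrically, a planar mirror is equivalent to a virtual second camera reflected across the mirror plane, which is why the mirror position plus the reflected image carries the same information as an extra camera. A small sketch of that reflection (the layout values are hypothetical):

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect point p across the plane n.x + d = 0 (n must be unit length)."""
    return p - 2.0 * (n @ p + d) * n

# Hypothetical layout: camera at the origin, mirror occupying the plane x = 1.
cam = np.array([0.0, 0.0, 0.0])
n = np.array([1.0, 0.0, 0.0])
d = -1.0  # plane equation: x - 1 = 0

virtual_cam = reflect_point(cam, n, d)
print(virtual_cam)  # [2. 0. 0.] -- the mirror acts like a camera behind the glass
```

Moving the mirror on a precise track then sweeps this virtual camera through many poses while only one real camera ever needs calibrating.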
 
Tangential, Windows 10 struggles to distinguish between multiple 'identical' USB web-cams. Doesn't auto-number as (1), (2) as they connect, just calls each 'WEBCAM'. If you dig deep enough into the 'system' etc, you'll find each has a different port / address, but there's no obvious way to allocate / map a 'name' to each. Which makes assembling a 'surround' rig with eg 4~6+ budget web-cams via a powered USB hub just another exasperating step harder...

FWIW, has any-one managed to get a USB web-cam working with a 'budget' printer-server ? PC can see thumb-drives, printers etc in the networked widget's USB port, but not a web-cam. Or, um, a USB GPS dongle...
 
Baluncore said:
It would be good with multiple fixed cameras, if the other camera positions could be in shot.
One of the nice things about fixed cameras and a known calibrating target is that the camera positions need to be stable but not at all precise, so the mounts can be very crude (but solid). Only the calibrator need be true. The method of using one camera to look at another camera is also slightly complicated by needing precise characterization of the optic axis and frame direction (meaning camera orientation) for every camera.
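One way to see why mount precision doesn't matter: standard target-based calibration (e.g. Zhang-style) recovers the camera-to-target mapping purely from point correspondences on the known target. A numpy sketch of the first step, a DLT homography fit between the known planar target coordinates and their observed image positions (the projective distortion here is made up for illustration):

```python
import numpy as np

def fit_homography(src, dst):
    """DLT estimate of the 3x3 homography mapping src points to dst points (>= 4)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Toy "camera view" of a flat 3x3 dot target: warp the known target grid by a
# projective distortion, then recover it from the correspondences alone --
# no knowledge of the camera mount enters anywhere.
src = np.array([(i, j) for i in range(3) for j in range(3)], float)
H_true = np.array([[1.2, 0.1, 5.0],
                   [-0.2, 0.9, -3.0],
                   [0.001, 0.002, 1.0]])
h = (H_true @ np.column_stack([src, np.ones(9)]).T).T
dst = h[:, :2] / h[:, 2:3]
H_est = fit_homography(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # True
```

From homographies like this, intrinsics and each camera's pose relative to the target follow, so the mounts only have to hold still, not hold accurately.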
Nik_2213 said:
Tangential, Windows 10 struggles to distinguish between multiple 'identical' USB web-cams. Doesn't auto-number as (1), (2) as they connect, just calls each 'WEBCAM'. If you dig deep enough into the 'system' etc, you'll find each has a different port / address, but there's no obvious way to allocate / map a 'name' to each. Which makes assembling a 'surround' rig with eg 4~6+ budget web-cams via a powered USB hub just another exasperating step harder...
This can (I think) be done using the camera-locating target. Put little Scotchlite retroreflectors on the target and an LED on each camera. The target will show up very bright in only the requested camera, so the port correspondence would be part of set-up/calibration and trivial. Just a bit more software. Of course, I never write the software.
Or one could also "roll your own" control board for the cameras.
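The LED-plus-retroreflector idea reduces to a trivial image test: light one camera's LED at a time and check which enumerated port returns a frame containing the bright target return. A minimal sketch with synthetic frames (the brightness threshold is an assumption):

```python
import numpy as np

def identify_lit_camera(frames, threshold=200):
    """Return indices of cameras whose frame contains a bright (retroreflected) blob.

    frames: list of 2D uint8 grayscale images, one per enumerated USB port.
    With one camera's LED lit at a time, only that camera sees the bright
    retroreflector return, so port order can be mapped to physical position.
    """
    return [i for i, f in enumerate(frames) if f.max() >= threshold]

# Toy frames: three dim 8x8 images; camera 1's frame carries the bright return.
frames = [np.full((8, 8), 30, np.uint8) for _ in range(3)]
frames[1][4, 4] = 255
print(identify_lit_camera(frames))  # [1]
```

Looping this over each LED in turn builds the full port-to-position map during calibration, sidestepping Windows' unhelpful "WEBCAM, WEBCAM, WEBCAM" naming.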
 
