# Synthetic Schlieren Imaging

I tried the method of *Synthetic Schlieren -- application to the visualization and characterization of air convection*.

Instead of their "checkerboard printed on a transparent plastic sheet and back illuminated by a led panel", I made a checkerboard image and displayed it on my monitor. I took a picture (Panasonic DMC-FZ50, tripod) of the monitor from about 3 m, and again with a lighter flame halfway between. After subtracting the images as described in their supplementary materials (at the end of the paper), I get:

I accidentally subtracted two flame images:

I tried to upload my checkerboard images (1920x1200 and 1440x900) but PF munged them into 800 wide. Is there a way to avoid this? The PNG files are < 50 kB.

#### Attachments

• 367.5 KB
• 1.1 MB

sophiecentaur
That's a really cool method for displaying the way convection currents deflect the light. Clever idea that seems to be based on sampling, rather than straight phase differences. (?)
I imagine that the pitch of the chequerboard needs to be optimal, because you would want the deflection always to be less than the pitch or you could get aliasing. Your results are pretty impressive - no doubt that the waves are there. Excellent idea to use a TV screen as your source, as you can easily change the pitch of the board and the display costs nothing.
I only skimmed the article (of course!!) but I wonder if you could improve contrast by varying the amplitude of the 'reference' picture and then increasing the gain. I have a tiny bit of experience of astrophotography, which has quite a lot in common with your experiment, and there is a lot of altering gains, subtracting images etc. involved. Also, I would suggest letting the temperature of the test equipment settle down with no natural convection currents and then making the images fairly quickly. Did you try displaying a negative image? I have seen Schlieren photos displayed both ways. Photoshop (others are available) can be your friend here.
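To illustrate the aliasing point numerically (a hypothetical 1-D sketch of my own, not from the paper): a deflection of exactly one full checkerboard period is indistinguishable from no deflection at all, because the shifted pattern lines up with itself.

```python
def grid_row(cell, n):
    """One row of a 1-D 'checkerboard': cell-wide blocks of 255 and 0."""
    return [255 if (i // cell) % 2 == 0 else 0 for i in range(n)]

def deflection_signal(cell, n, shift):
    """Absolute per-pixel difference between a grid and a circularly
    shifted copy (the shift stands in for the schlieren deflection)."""
    a = grid_row(cell, n)
    b = a[shift:] + a[:shift]
    return [abs(x - y) for x, y in zip(a, b)]

# a shift of one full period (2 * cell = 6 px) gives zero difference
# everywhere -- aliased to "no deflection"
print(max(deflection_signal(3, 24, 6)))   # -> 0
# a half-period shift (one cell) gives the maximum signal
print(max(deflection_signal(3, 24, 3)))   # -> 255
```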

I can make the checkerboard image with 3x3 squares using ImageMagick on the Ubuntu Linux command line:

```shell
convert -size 3x3 xc:white white.jpg
convert -size 3x3 xc:black black.jpg
montage white.jpg black.jpg black.jpg white.jpg -geometry 3x3 cell.jpg
```

and then:

```shell
display -size 1920x1200 tile:cell.jpg
```

(insert your own monitor resolution) and then save the file using the ImageMagick menu.

(I couldn't figure out how to create the file directly on the command line as described here.)
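As an alternative to the ImageMagick route, the checkerboard can be generated directly in a few lines of Python (a sketch using only the standard library; binary PGM is used because it needs no image library, and ImageMagick can convert it to PNG):

```python
def checkerboard(width, height, cell=3):
    """2-D list of 0/255 pixel values forming a checkerboard with
    cell x cell squares (3x3, as used above)."""
    return [[255 if ((x // cell) + (y // cell)) % 2 == 0 else 0
             for x in range(width)]
            for y in range(height)]

def write_pgm(path, pixels):
    """Write the pixels as a binary greyscale PGM (P5) file."""
    h, w = len(pixels), len(pixels[0])
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (w, h))
        for row in pixels:
            f.write(bytes(row))

# e.g. write_pgm("checkerboard.pgm", checkerboard(1920, 1200))
# and then: convert checkerboard.pgm checkerboard.png
```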

sophiecentaur
(I couldn't figure out how to create the file directly on the command line as described here.)
Simple image files really are not complicated things. They are just 2D arrays of pixel values into which you can put what you like. All you need to do is address the n,m array elements in code and insert the data. The file would need to be uncompressed (i.e. not JPEG), which is much easier to deal with, and you would need to put the appropriate few kB of header data on the file, which states the format, size and a few other things.
Initially, it's a steep mountain to climb if you are not into coding, but it would be so useful for the next project and the next and the next. What's your level of experience with coding?

it's a steep mountain to climb
It was a much smaller mountain when I did that sort of thing.
For now I will just post images on Imgur (with 3x3 pixel squares) for some popular resolutions.

| Standard | Aspect ratio | Width (px) | Height (px) | % of Steam users (August 2018) | % of web users (August 2018) |
|----------|--------------|------------|-------------|--------------------------------|------------------------------|
| HD       | ~16:9        | 1366       | 768         | 13.33                          | 27.16                        |
| FHD      | 16:9         | 1920       | 1080        | 63.72                          | 19.57                        |
| WXGA+    | 16:10        | 1440       | 900         | 3.37                           | 6.61                         |
| HD+      | 16:9         | 1600       | 900         | 3.55                           | 5.58                         |
| WUXGA    | 16:10        | 1920       | 1200        | 0.84                           | 1.3                          |

BTW, the method is also called "Background-oriented schlieren (BOS)".

This is air blowing out of a vacuum cleaner ("crevice tool" connected to the exhaust end):

At the bottom is a knife blade poking into the air stream. Camera-screen distance ~ 5m. The camera must be focused on the screen, so the vac and knife are out of focus.

This was done with a lesser camera (Samsung D60), distance 2m:

#### Attachments

• 777.8 KB
• 970.2 KB
sophiecentaur
This is air blowing out of a vacuum cleaner ("crevice tool" connected to the exhaust end):
I can see some features in those very dark images and they look encouraging. If you tinker with the contrast and brightness of those pictures, you (and we) may see more detail. You are sure to have access to a photo processing package - it doesn't have to be £10 per month for Photoshop!

If you tinker with the contrast and brightness of those pictures, you (and we) may see more detail.
There are a number of issues here:
1) It's supposed to be dark. If I put two identical images into the "subtract" process, it comes out black. I'm not even sure what the "subtract" is mathematically (I found Fiji's explanation, but it was late).
2) Detail is limited by the focus issue. With the better camera I can reduce the aperture to get more depth of field, but the trade-off is longer exposure, which means motion blur. With a printed checkerboard as in the paper I could use photoflash to "freeze" the motion, but that won't work for the monitor image.
3) I was using "AUTO" setting, so the flame probably influenced the exposure. I could use some non-luminous heat source or use manual settings (on the better camera).
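On point 1): the usual 8-bit "subtract" (as in ImageJ/Fiji's Image Calculator or ImageMagick's minus) is commonly a per-pixel difference clamped at zero, which is why two identical frames come out black. A minimal sketch of that operation (my own illustration, not Fiji's code):

```python
def subtract(img_a, img_b):
    """Per-pixel 8-bit subtraction, clamped at zero: identical inputs
    give an all-black (all-zero) result."""
    return [[max(a - b, 0) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

frame = [[120, 200], [30, 255]]
print(subtract(frame, frame))             # -> [[0, 0], [0, 0]]
print(subtract([[200, 10]], [[50, 60]]))  # -> [[150, 0]]
```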

sophiecentaur
It's supposed to be dark.
I am not criticising in any way and you can do what you want with your pictures but why are the images supposed to be dark?
I would say that the essential thing is that the pattern is visible. Perhaps it's important that the black should be a good black but, within the constraints of noise visibility, I would recommend the same approach as in astrophotography - that the images should be as visible as possible. "Subtracting" the grid pattern from the image with the heat would tend to reduce the contribution of the grid. This is very much like the process of taking "dark" frames, which contain just the irregularities of the image sensor and which can be subtracted from a feeble image of the sky, leaving only the wanted pattern (not just stars but diffuse nebulae and galaxies). Pretty well every astro image that's published will have been treated this way. As Fiji describes.
Detail is limited by the focus issue.
Is it likely that the pattern of currents would be finer than the focus of the camera? Low pass spatial filtering is useful here because it loses the unwanted pattern of the original grid. (Similar to the low pass filter that follows the DAC processing.)
I was using "AUTO" setting,
AUTO leaves it to the camera, and the camera will do its best and assume that the scene is a regular one; your image is not a regular one. I am surprised that the images are not 'cranked up to grey' in the same way that night time scenes with low lighting are.

The cited paper (in the Supplementary Materials) suggests a contrast adjustment and Gaussian blur but I didn't think it was an improvement.
I am surprised that the images are not 'cranked up to grey' in the same way that night time scenes with low lighting are
The actual photos are bright - it is the subtracted image that is dark.

sophiecentaur
The actual photos are bright - it is the subtracted image that is dark.
Yes, that makes sense. But what makes you say that the processing of the resulting image should stop with the basic 'difference' image? It's already not the original, so where's the objection to making the features stand out?
It won't cost you anything to try changing the contrast and brightness to give a dark background and brighter features.
BTW, if your posher camera has the option of a RAW or TIFF output file, the processing will probably give better and more visible results; JPEG files are compressed and can contain artefacts. You should try focusing on the patterned screen and doing the difference process between two sharp images. Then you should find the Gaussian blur would nicely low pass filter the result. I think that's in fact why the Gaussian blur is suggested in your resource.
It's a good project you're doing and it would be worthwhile getting the best out of it.

try changing the contrast and brightness
This is with contrast adjustment and Gaussian blur.
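For anyone following along, the contrast adjustment amounts to a linear stretch: map the darkest and brightest pixels of the nearly-black difference image onto the full 0..255 range. A minimal sketch (illustrative only, not the exact operation any particular package performs):

```python
def stretch(pixels):
    """Linear contrast stretch: remap the image's own min..max range
    onto 0..255, brightening a nearly-black difference image."""
    flat = [p for row in pixels for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # flat image: nothing to stretch
        return [[0] * len(row) for row in pixels]
    return [[round((p - lo) * 255 / (hi - lo)) for p in row]
            for row in pixels]

print(stretch([[2, 4], [6, 2]]))  # -> [[0, 128], [255, 0]]
```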

#### Attachments

• 885.3 KB
davenn
I tried to upload my checkerboard images (1920x1200 and 1440x900) but PF munged them into 800 wide. Is there a way to avoid this? The PNG files are < 50 kB.

upload them to an image hosting site ... Photobucket etc.
then post a link here to the image location

you really don't want to be posting 800 kB+ files up to PF, as you have done
(consider who has to pay for the storage space)

ohhh, and convert your .png to .jpg .... png files are horribly bloated in size

sophiecentaur
ohhh, and convert your .png to .jpg .... png files are horribly bloated in size
BUT only after processing!! For critical images, using another picture storage site is probably a better strategy - even if it's more trouble.

This is with contrast adjustment and Gaussian blur.
Ahh. That's much easier to see. It's getting to be impressive. Your project encompasses several different disciplines; you have AstroPhotography and Sampling methods to deal with. All good stuff though and worth getting into.

I can see hints of the grid in the image, which could be eliminated / reduced by increasing the radius of the Gaussian blur control. Can you post an image of the grid and also the effect of the blur on it? That filtering can be experimented with before you do the schlieren stuff. It's a compromise between getting rid of the grid pattern and not degrading the wanted pattern, and you have to suck it and see.
You haven't commented about what you focus on; is it the grid itself, rather than the knife edge? The processing really needs the sharpest grid to work on, I think. The knife edge may turn out a bit fuzzy but the contrast of the Schlieren pattern could be improved that way. Astro Photographers do a lot of 'massaging' of their best images and it's common to do a montage, to enhance / suppress some features. If you have an image processing app that allows you to mask, you could get the best of both worlds by laying the schlieren image on top of a sharp knife edge. All's fair in love and photography!!

PS For the subtraction to work well, you need the two images to be perfectly registered. That means firm tripod mounting, unless your image processor allows you to nudge one image to register with the other. Also, afaics, the contrast of the original pictures should be high (and equal for the two), so the processor has plenty of information about the relative phases of the two grids. You are taking the difference between two large quantities and they need to be different only in respect of the relative phases of the grids; it's a bit of a balancing act unless you can find some software to do it for you. There are a number of astro 'stacking' applications that would do it and drag the maximum useful info out of your measurements.
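The "nudge one image to register with the other" step can be brute-forced over a few pixel offsets: pick the shift that minimizes the residual difference. A sketch of my own, assuming pure integer translation (real stacking software does this to sub-pixel accuracy):

```python
def best_shift(ref, img, max_shift=2):
    """Try small integer (dy, dx) offsets of img against ref and return
    the pair minimizing the summed absolute difference over the overlap."""
    h, w = len(ref), len(ref[0])
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # compare only the region where the shifted images overlap
            score = sum(abs(ref[y][x] - img[y + dy][x + dx])
                        for y in range(max(0, -dy), min(h, h - dy))
                        for x in range(max(0, -dx), min(w, w - dx)))
            if best is None or score < best[0]:
                best = (score, dy, dx)
    return best[1], best[2]
```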

That means firm tripod mounting
I used a tripod and the shutter timer or remote shutter to avoid jiggling it.
This is a picture of the monitor with checkerboard

and with a flame:

To see the checkerboard you need to download the image and use an "image viewer" program to zoom in (the squares are 3x3 pixels).
The Samsung image posted above is the result of subtracting these.

#### Attachments

• 190.7 KB
• 191.7 KB
sophiecentaur
Keith, so far so good. It's very interesting / encouraging. Presumably you can look at your individual image pixels with a 'dropper' tool. It would be useful to know the peak-to-peak luminance values of the grid and the element width in pixels. That will tell you the maximum result you will get from the subtraction process when the displacement is one grid cell. That is when (ideal waveform) 000111000111 is subtracted from 111000111000 (giving 111111111111), if you see what I mean. Ignoring the peak luminance (it doesn't matter if it limits at the actual flame), the peak-to-peak for the grid is not very high, and it's that pk-pk value that gives the maximum possible. I can't tell from the jpeg image whether the picture is just not in focus, the grid is low pk-pk, or the jpeg is filling in the grid squares. You could try to find a way to increase that original grid contrast, which can only benefit your final result.
Sorry if I'm going over some stuff that you already know but no harm done if an idea turns up twice. There is another point here. If the actual amount of refraction is a small fraction of the grid spacing then the output from the 'phase detection process' will be small.
Reading your earlier posts, I see that the actual grid source is a jpeg image. Does it look a lot sharper than the images you have posted? I would imagine that you could get hold of a better source grid image. Is there, by any chance, an alternative format for the grid image in your software? That could improve things a lot because you really need sharp edges on the grid to make it very phase sensitive. That is why I think focussing on the monitor grid is very important. What does the picture look like through a magnifying lens? Is it sharpish?
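The pk-pk point can be checked with a toy example (my own illustration): whatever the displacement, the difference signal can never exceed the grid's peak-to-peak luminance, so a washed-out grid directly caps the output.

```python
def grey_grid(lo, hi, cell, n):
    """1-D grid alternating between luminance levels lo and hi in
    cell-wide blocks."""
    return [hi if (i // cell) % 2 == 0 else lo for i in range(n)]

for lo, hi in [(0, 255), (80, 170)]:  # full- vs low-contrast grid
    a = grey_grid(lo, hi, 3, 12)
    b = a[3:] + a[:3]                 # one-cell (3 px) displacement
    peak = max(abs(x - y) for x, y in zip(a, b))
    print(peak == hi - lo)            # -> True in both cases
```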

the actual grid source is a jpeg image. Does it look a lot sharper than the images you have posted?
They are in post 5. You can download and display in full screen mode. If your screen resolution is not included I can make it.

sophiecentaur
They are in post 5. You can download and display in full screen mode. If your screen resolution is not included I can make it.
Sorry - I kept scrolling past that post 5 and I may have been wasting your time about that problem. Eventually I got it right, and you have good 3x3 squares with (0,0,0) and (255,255,255) values in your original image. One question is: do you actually see that with a lens on your screen? If the resolution isn't just right, you get all sorts of odd patterns - which is what I can see on your posted images. But what I see there is not relevant because the phase detection will have been done already. It is also necessary to have the sensor pixels correct, I think, or you can get aliasing on the images you are trying to process. The anti-aliasing filter over the sensor will be losing some of that detail. What happens if you use a much coarser grid for the exercise, initially? The image sensor may be able to handle it better. Or you could see what camera images you get when you move in and out from the monitor screen. I may have introduced another significant problem with this - or at least with getting the equipment set up for optimum.
PS Astro photography rears its head again because it is necessary to know the pixel size of the image sensor when using a guide scope.

One question is: do you actually see that with a lens on your screen?
With a "jeweller's loupe" I can see the 3x3 pixel squares and the 3 RGB stripes on each pixel.
The camera photos are different resolutions (Samsung 2816x2112, Panasonic 3648x2736), which causes a Moiré effect. Your image viewer may have a 100% setting, which will show only part of the photo on your screen (unless you have a 4K monitor), but might remove some of the odd patterns.
If you zoom in on part of the Samsung image in post 6, it looks like this:

(Actually, this is from a "negative" version of that image, subtracted the wrong way.)
The colours are due to the RGB stripes on the monitor (and possibly also the arrangement of RGB sensors in the camera).

#### Attachments

• 388 KB
sophiecentaur
I set my monitor resolution to 1920 x 1200 but I was not convinced by what I saw. Eventually I looked at the images with Photoshop and, with the right zoom setting, I got a perfect chequer pattern with the computer generated pattern - so I assume that's what you get with your loupe, in real life. Photoshop is clever and seems to do the appropriate filtering to allow any zoom without visible aliasing. (Well, you have to expect something from all that rental money.)
That image shows the effect of shifting elements of the image well. I don't know why those opposite diagonal stripes appear in the difference signal pattern. Some form of beat / alias pattern. It's very definite, though and there is a region where the subtraction is total. There is a possible problem, trying to look at such fine detail and that could be because the sensor (Bayer) filter may not coincide with the dots on the screen. Sensors can differ for different makes so there could be clashes when trying to work at almost pixel level. Unless you really want the visual field that you are using, you could perhaps use a coarser grid and put the screen further back.

I know these things often sort themselves out but I wonder if it would be worth switching the image to greyscale before doing the arithmetic. There are so many things you could vary that it's quite daunting. But your results are certainly going in the right direction. I really like the idea of an electronic source, compared with a printed screen - but I wonder if 'they' tried a TV screen first and found it was limiting the quality of their results. You could try emailing the workers and asking them.

I don't know why those opposite diagonal stripes appear in the difference signal pattern.
Here is a zoom of part of a subtracted image that should be black. The pattern is due to the camera moving slightly between shots.

The white horizontals and coloured verticals are because the RGB lines within each pixel are vertical.

#### Attachments

• 611.7 KB
I find that the subtraction can be done with an ImageMagick command:

```shell
composite -compose minus_src ref.JPG flame.JPG output.JPG
```

On Linux, ImageMagick may be installed by default. It is available for Mac and Windows.

I made checkerboards for 4K resolution (3840x2160) with 3, 4 and 5 pixel squares, in addition to those previously posted:
| Standard | Aspect ratio | Width (px) | Height (px) | % of Steam users (August 2018) | % of web users (August 2018) |
|----------|--------------|------------|-------------|--------------------------------|------------------------------|
| HD       | ~16:9        | 1366       | 768         | 13.33                          | 27.16                        |
| FHD      | 16:9         | 1920       | 1080        | 63.72                          | 19.57                        |
| WXGA+    | 16:10        | 1440       | 900         | 3.37                           | 6.61                         |
| HD+      | 16:9         | 1600       | 900         | 3.55                           | 5.58                         |
| WUXGA    | 16:10        | 1920       | 1200        | 0.84                           | 1.3                          |

sophiecentaur
@Keith_McClary good to hear from you again and that you are still at it!!

Also 4K resolution (3840x2160) with 1 and 2 pixel squares.
