Overlaying light beams by having one replace the other where they intersect

In summary: the goal is to combine beams by having one replace the other where they intersect, rather than having the beams superimpose. This could be done either at the "combiner" OR later in the setup at the "projection screen".
  • #36
Are you talking about a tilt lens or a shift lens?
A shift lens avoids keystone distortion to begin with, as far as I know, because you don't have to angle the projector to make it project down, up, left or right of "center".
My bigger projectors have lens shift.
 
  • #37
I'm not here for an argument. Keystone distortion can be corrected for optically and electronically. I thought we both knew that.
When the image is being scanned over the screen, the correction should be dynamic to get the best result. That would need to be electronic, unless you want to start an optics development branch of your project. If you want a result, then I suggest you use as much established technology as possible; software can do the vast majority of the clever stuff that you need.
 
  • #38
Am I here to argue?

I am using video projectors and moving a projection beam with mirrors, so there's already an optical part to this project.
Not to mention that with digital keystone correction you are adding extra work to the software part of the project instead.

Keystone correction is not done optically in your example; it's prevented from happening. You may say this is just semantics, but then you do sound like you are arguing.

I have already explained to you that digital keystone correction is lossy, degrades resolution, and is not preferable here.
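To make the "lossy" point concrete, here is a minimal sketch (assuming OpenCV and NumPy are available; the frame and corner coordinates are just placeholders) of what digital keystone correction actually does: a projective warp that resamples the image.

```python
import cv2
import numpy as np

# Stand-in 1080p frame; in practice this is the image sent to the projector.
h, w = 1080, 1920
frame = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)

# Digital keystone correction pre-distorts the rectangle into a trapezoid so it
# looks rectangular after an off-axis projection. The corner offsets are made up.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[120, 0], [w - 120, 0], [w, h], [0, h]])   # top edge squeezed in

M = cv2.getPerspectiveTransform(src, dst)
corrected = cv2.warpPerspective(frame, M, (w, h))

# Near the top, ~1920 source columns are packed into ~1680 output columns: the
# resampling discards detail before the light ever leaves the projector.
```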

Moreover, this was not even the subject of my topic.
Nor was using LCD shutters, but you dragged that out for a few posts.
Can we please get back to the original question?

this is what I want: https://i.imgur.com/F6HIYBl.png

this is what I do not want: https://i.imgur.com/ylhMeEa.png

This kind of beam combining can be achieved either by the "combiner" OR later in the setup by the "projection screen". In other words, keeping the original beam from reaching the projection surface is not the only solution; somehow not displaying the original beam on the projection surface, even if it has reached it, would also work.

The colors of the beams are for illustration purposes, and so are their positions. The position of the smaller beam can change.

Is there any way at all to achieve this? Brightness is not an issue, so things such as polarizers can be used.
 
  • #39
wosoka said:
Keystone correction is not done optically in your example, it's prevented from happening.
wosoka said:
Am I here to argue?
It could be looked on that way.
wosoka said:
Can we please get back to the original question?
Doing it the way you want needs technology that I haven't seen off the shelf. The "non-linear" optical switch is an unknown quantity as far as I can see, and I wouldn't personally go for that (possibly non-existent) solution. If you know differently, then go ahead. Let PF know when you get a result.
 
  • #40
I haven't myself seen a sheet or film made of a material which can change its transmittance, refractive index or reflectivity when exposed to IR or UV light, or to light of a specific polarization, but that doesn't mean it doesn't exist, which is why I'm asking whether others have.

Maybe I should be asking people from other fields, or wait for an answer here.

Someone from reddit mentioned using polarizers and "saturable absorbers" as a possible solution but didn't go into details and hasn't responded yet.
 
  • #41
How about making the higher-resolution image fill in the jagged edges of the low-resolution background? In the example you showed, a circle sits inside the pixelated background. Fill in the partially empty pixels of the background with the foreground.

BoB
 
  • Like
Likes sophiecentaur
  • #42
rbelli1 said:
How about making the higher-resolution image fill in the jagged edges of the low-resolution background? In the example you showed, a circle sits inside the pixelated background. Fill in the partially empty pixels of the background with the foreground.

BoB
I made that point (in my own fashion) before. The fact that it's the same picture makes interpolating between the coarse and fine resolutions very convenient. An overlap of only a couple of coarse pixels would help to deal with registration errors and phase response for the two resolutions.
 
  • #43
rbelli1 said:
How about making the higher-resolution image fill in the jagged edges of the low-resolution background? In the example you showed, a circle sits inside the pixelated background. Fill in the partially empty pixels of the background with the foreground.

BoB
I think there are two issues.

We assume the pixels of the detailed section are small enough to fill a rectangular area exactly the size of a bigger low-res pixel, not more, not less. We also have to position the optics so that those pixels align exactly with the bigger pixels, without creating gaps or overlaps. This would require moving the shifting mirrors/lenses not smoothly but in steps, each step one big pixel wide. That's not an issue during saccadic eye movement, because of saccadic blindness, but it is noticeable during smooth pursuit, say when following a slowly moving object. The bigger pixels popping in and out of existence while being replaced with smaller pixels would be eye-catching. This would result in jitter/popping, unlike with optical/physical masking.

I mentioned before that it is easy to blend two projections of similar size (resolution), and this is the main reason.

What about optically active dyes? Anyone heard of them? Someone said there are two kinds: one kind photobleaches and decomposes, the other doesn't decompose but needs a constant light source, as it stays bleached for only a few nanoseconds. I haven't heard back from him yet.
If they are what I think, maybe they could be used for making UV/IR-beam-controlled shutters.
 
Last edited:
  • #44
wosoka said:
The only issue is that we assume the pixels of the detailed section are small enough to fill a rectangular area exactly the size of a bigger low-res pixel, not more, not less. We also have to position the optics so that those pixels align exactly with the bigger pixels, without creating gaps or overlaps. This would require moving the lenses not smoothly but in steps, each step one pixel wide. That's not an issue during saccadic eye movement, because of saccadic blindness, but it is noticeable during smooth pursuit, say when following a slowly moving object. The bigger pixels popping in and out of existence while being replaced with smaller pixels would be eye-catching. This would result in jitter/popping, unlike with optical/physical masking.
There need be no worries about the "pixel" problem. Moving around within the coarseness of the low-res pixels is taken care of by my blanket term "interpolation". There is no earthly reason for step changes of one pixel: if there is an overlap of the insert over the background, the high-res picture can be moved about on the HD projector to match the position of the LD pixels. Registration between the two images would need to be accurate, whatever the slotting-in mechanism. I think you are underestimating the power of DSP of images. You'd need to compare the problem of getting the scanning mirror to give sub-pixel positioning with and without digital adjustment of the HD image. I think you are making assumptions about the simplicity of mechanical positioning of the HD image. If, indeed, you can get a mechanism to place the HD image accurately enough, then how could there be any of the pixel jittering you are hinting at? After all, it would only involve "soft edge" techniques. Are you assuming that images can only be moved digitally by a minimum of one whole pixel? If that were so, Photoshop couldn't do distortion and infinite zooming without jagged edges.
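To make the sub-pixel point concrete, a minimal sketch (assuming SciPy is available; the image and the offsets are arbitrary) of moving an image digitally by a fraction of a pixel via interpolation:

```python
import numpy as np
from scipy.ndimage import shift

hd = np.random.rand(480, 640)   # arbitrary stand-in for the HD insert

# Move the image by 0.12 pixels vertically and 0.37 pixels horizontally.
# order=1 is bilinear interpolation; nothing snaps to whole-pixel boundaries.
hd_shifted = shift(hd, shift=(0.12, 0.37), order=1, mode='nearest')

# hd_shifted is a resampled copy of hd: this is the 'interpolation' that lets
# the HD insert be registered to the LD grid to a fraction of a pixel, digitally.
```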
 
  • #45
I'm sorry, but it's hard for me to follow some of your post because I'm not sure what some of the terms mean in this context.

sophiecentaur said:
Are you assuming that images can only be moved digitally by a minimum of one whole pixel?
No, because we are not moving them digitally.

You have physical, fixed-position pixels in the lower-resolution projection. Those pixels are fixed; you can't move or resize them. You have to respect their position and size; the only thing you can do is hide or show them. To respect their position, you have to hide a whole big pixel when you need to fill its area with the smaller projection beam's pixels. Which in turn means that as a single big pixel is hidden, the same number of smaller pixels always has to fill its space. Not only that, but those pixels also have to have pre-determined positions, so they correctly fill that empty area. And that means you can only move the smaller projection one big-pixel distance at a time; otherwise you overlap the bigger pixels or leave a gap.
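A rough numeric sketch of the alignment constraint as I see it (the 20:1 pixel ratio and the pitch in millimetres are only illustrative):

```python
# Hypothetical numbers: one low-res (LD) pixel is 2 mm wide on the screen and
# contains a 20 x 20 block of high-res (HD) pixels, i.e. HD pitch = 0.1 mm.
LD_PITCH_MM = 2.0

def window_is_aligned(window_left_mm: float, eps: float = 1e-6) -> bool:
    """The HD window only tiles whole LD pixels if its left edge sits on an
    LD pixel boundary; any other position leaves a partly covered big pixel."""
    offset = window_left_mm % LD_PITCH_MM
    return offset < eps or (LD_PITCH_MM - offset) < eps

print(window_is_aligned(6.0))   # True: the edge lands exactly 3 LD pixels in
print(window_is_aligned(6.1))   # False: the leftmost big pixel is only partly
                                # covered, so hiding it leaves an uncovered strip
```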
 
Last edited:
  • #46
wosoka said:
No, because we are not moving them digitally.
I already pointed out that doing it digitally does not involve whole pixel movement. If you disagree or just don't know about that then try a simple 'rotate' of an image in any half decent image processing app and look for the 'whole pixel' displacement. Do you see it?
wosoka said:
To respect their position you have to hide a whole big pixel when you need to fill its area with the smaller projection beam's pixels. Which in turn means as a single big pixel is hidden same amount of smaller pixels always have to fill its space
What I think you are saying is not true. In fact, the HD image can partly cover a row or column of pixels and replace the covered parts with a number of its own pixels. This is just the same as if you (with your, until now, unknown system) place the HD image in a general position on the LD image. The only stipulation is that the electronic version requires a border of pixels that will duplicate up to one whole row or column of the HD display pixels.
You do not need an overlap or gap anywhere. In all systems, you need to know where the hd image is.
I must say, you seem to think that your expertise in all aspects of this project is too great to allow you to take on board any other ideas. It seems to me that you should have ended this thread when there seemed to be no way, at present, of doing the thing your way. If you aren't prepared to consider alternative strategies, you may never get a result. You seem surprised that you could have gaps in your existing knowledge, but don't we all have them?
Everything that I have suggested has already been used in digital video processes. The only difference is that you are avoiding the need for full definition of the whole image.
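A minimal sketch of the digital "hole" approach (NumPy only; the 20:1 scale, image sizes and insert position are invented for illustration):

```python
import numpy as np

SCALE = 20                      # assumed ratio: 20 HD pixels per LD pixel
ld = np.random.rand(54, 96)     # stand-in coarse (LD) image, one value per big pixel
hd = np.random.rand(240, 320)   # stand-in HD insert, already rendered at fine pitch

# Blow the LD image up to HD pitch: every LD pixel becomes a 20 x 20 block.
canvas = np.kron(ld, np.ones((SCALE, SCALE)))

# Drop the HD insert in at an arbitrary position; it does not have to sit on an
# LD pixel boundary, because the covered parts are simply replaced wholesale.
# (In practice a border of duplicated or blended pixels would hide the join.)
top, left = 130, 410
canvas[top:top + hd.shape[0], left:left + hd.shape[1]] = hd
```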
 
  • Like
Likes davenn
  • #47
sophiecentaur said:
I already pointed out that doing it digitally does not involve whole pixel movement. If you disagree or just don't know about that then try a simple 'rotate' of an image in any half decent image processing app and look for the 'whole pixel' displacement. Do you see it?
...
Everything that I have suggested has already been used in digital video processes.
As I said once already,
we are not moving them digitally.
This is not comparable to a digital program like Photoshop, which works with digital pixels and only converts the result for viewing on fixed-size physical pixels once the digital filtering is finished.
You are not merging two images of different resolution digitally; you are trying to merge two projected images in the real world with different, unchangeable resolutions.

Look,
https://i.imgur.com/X2o4l11.gif
Try to show how you can move the "small pixel" here smoothly instead of in steps without creating gaps or overlaps.
Not only that, but as a small pixel moves into the space of one of the big pixels, the big pixel has to be hidden and its area filled with small pixels, again with specific positions and sizes.
No, you can't add anything in between, because this isn't a digital program but an illustration of "physical" pixels projected on a surface.

sophiecentaur said:
I must say, you seem to think that your expertise in all aspects of this project is too great to allow you to take on board any other ideas. It seems to me that you should have ended this thread when there seemed to be no way, at present, of doing the thing your way. If you aren't prepared to consider alternative strategies, you may never get a result. You seem surprised that you could have gaps in your existing knowledge, but don't we all have them?
Pot, kettle, black. You are not my mind reader, nor am I yours. This has nothing to do with the topic; try sticking to it. I don't have time to read what you wrote to a straw man.
I just challenged your idea above. There, I wasted time making an animated illustration. Show me a solution to that and we'll call it a day; I'll even apologize for not seeing the solution sooner.

sophiecentaur said:
It seems to me that you should have ended this thread when there seemed to be no way, at present, of doing the thing your way.
Why do you think there is no way? Because you have expertise in all aspects of this project? Does the fact that you aren't aware of a solution mean there isn't one? Do you see the irony? I literally just posted this and you completely ignored it while making that statement:

wosoka said:
What about optically active dyes? Anyone heard of them? Someone said there are two kinds: one kind photobleaches and decomposes, the other doesn't decompose but needs a constant light source, as it stays bleached for only a few nanoseconds. I haven't heard back from him yet.
If they are what I think, maybe they could be used for making UV/IR-beam-controlled shutters.

Guess what? This wasn't my idea; someone from another forum suggested it. What did I do with his idea? I took it on board.
 
Last edited:
  • #48
wosoka said:
As I said once already,
we are not moving them digitally.
"We" are not moving them at all because there isn't an optometrist mechanical way on the table at the moment. "We" could move the image about on the HD display by fractions of a pixel. Do you understand that that process can work. (Clue- I mentioned Photoshop'
 
  • #49
Save your sarcasm.
You don't move the "image", you move the projection which displays the image, and the projection has "physical" pixels, not digital.
I've explained all there is and even shown an animated diagram. If you mention Photoshop its clear you don't get that simple thing, the same way you didn't get I wasn't proposing to use LCDs for several posts. Maybe you will after few more posts but I'm done explaining.
 
  • #50
wosoka said:
you move the projection which displays the image,
So are you scanning the same HD image to any place over the LD background? I understood that foveal imaging aims the HD area at different parts of an LD image. If that is the case, the image has to move over the small HD display as the display slot is moved over the LD background. If you are trying to achieve foveal imaging, you actually need to be doing exactly what I have been describing. The only difference between your opto-mechanical system and my digital-mechanical one is that yours needs a mechanism, which you haven't found yet, to slot the HD image into the background, while mine uses a hole in the LD image. In my system there is no overlap or gap, as the LD image is updated with HD information as the foveal area scans over it.
If my understanding of your basic requirement is flawed then I am sorry - I only read a few references to tell me what a foveal display is supposed to do.
 
  • #51
(trying to interrupt the ego storm here)

Just brainstorming. Another probably-not-ideal concept that might lead to other ideas or be modified for improvement.

How about if the static image is rear projected on a screen and the high resolution moving image is front projected, or vice versa? Perhaps the relative brightness of the two could be adjusted for a visually acceptable result. Even different polarizations might be useful somehow.

Addendum:
Further thought: can the screen with the low-resolution image be an LCD display with a partially transparent front surface upon which the hi-res image is projected? Then further projected onto a large conventional screen if needed.
 
  • #52
Tom.G said:
(trying to interrupt the ego storm here)
Sorry about that.
Tom.G said:
Even different polarizations might be useful somehow.
Would that be better than just blanking out a slot in the LD image? Cross polarisation methods work well for some 3D video systems.
 
  • Like
Likes Tom.G
  • #53
Tom.G said:
How about if the static image is rear projected on a screen and the high resolution moving image is front projected, or vice versa? Perhaps the relative brightness of the two could be adjusted for a visually acceptable result. Even different polarizations might be useful somehow.
Go on. I don't understand what difference it will make to combine rear and front projection.
I can also give them different polarizations, but I'm not sure what to do with that either.

Tom.G said:
Further thought: can the screen with the low-resolution image be an LCD display with a partially transparent front surface upon which the hi-res image is projected? Then further projected onto a large conventional screen if needed.
What's an LCD with a partially transparent front surface?
 
  • #54
For the relative brightness, I was thinking that you want the hi-res image to be the center of attention. Some ways of doing that are to make it brighter, higher in contrast, more saturated in color, and, as you are doing, higher in resolution.

The polarization comment was just a brainstorm comment in case someone could usefully incorporate it.
wosoka said:
What's an LCD with partially transparent front surface?
Not a commercial product that I am aware of. I was thinking in terms of the front surface being a half-silvered mirror (beam-splitter style), or perhaps an overlay of matte-surface Mylar film, or even a sparse coating of the glass beads used on an ordinary projection screen.
 
  • #55
The hi-res image is not meant to be the center of attention or to stand out; quite the opposite: the brightness of the foveal and peripheral displays has to be as close as possible so they merge as seamlessly as possible.
Your LCD approach is problematic in the same way as projection: the low resolution isn't enough to produce a good blending area with the hi-res area.
 
  • #56
wosoka said:
The hi-res image is not meant to be the center of attention or to stand out; quite the opposite: the brightness of the foveated and peripheral displays has to be as close as possible so they merge as seamlessly as possible.
Ahh! Another constraint. Since psycho-optic effects naturally draw attention to a hi-res part of an image, I assumed this was a desired effect. :frown: The combinations I suggested were meant to concentrate the observer's attention so that edge artifacts would be ignored.

p.s. The more information we have available, the fewer the blind alleys that will be traversed. I also recognize that business reasons can be a restraining factor.
 
  • #57
wosoka said:
the brightness of the foveal and peripheral displays has to be as close as possible so they merge as seamlessly as possible.
This can be controlled with feedback. From what you've written about the constraints, it seems clear that the transition between the HD and LD regions needs to be monitored closely by, presumably, a sampling camera. For best results, the camera would follow the HD scanning mirror. Not a problem, but another thing to be thinking about.
To make the transition as invisible as possible, it would help to low-pass spatial-filter the edges of the HD area so there is no obvious change in resolution. If you see a diagonal line which suddenly develops "jaggies" as it passes from the HD to the LD region, it could be attention-grabbing. A smooth transition from HD to LD would avoid this. My point has been, all along, that sophisticated processes like that can be done much more conveniently by DSP. I can see how, intuitively, a sharp transition with higher resolution than the coarse LD pixels could be attractive, but the details of that transition tend to make the actual method of slotting-in irrelevant. There is also the question of how sharp the edge of an opto-mechanical system would be. Could it even be of sub-LD-pixel size? We don't know, as we don't have a device yet.
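A minimal sketch of that kind of soft-edged blend (NumPy only; the patch sizes and feather width are arbitrary):

```python
import numpy as np

hd_patch = np.random.rand(240, 320)   # HD content of the insert region
ld_patch = np.random.rand(240, 320)   # LD content under the same region, upsampled to HD pitch
FEATHER = 40                          # width of the soft edge, in HD pixels (arbitrary)

# Per-pixel weight: 1 deep inside the insert, ramping down to 0 at its border.
row_edge_dist = np.minimum(np.arange(240), np.arange(240)[::-1])
col_edge_dist = np.minimum(np.arange(320), np.arange(320)[::-1])
alpha = np.clip(np.minimum.outer(row_edge_dist, col_edge_dist) / FEATHER, 0.0, 1.0)

# Progressively weighted sum across the edge: no hard step from HD to LD.
blended = alpha * hd_patch + (1.0 - alpha) * ld_patch
```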
 
  • #58
I know it can be controlled in software. I was just replying to Tom that it is not what we want (he was suggesting how we could get that effect) but rather something we want to avoid.

All along you don't seem to understand that, unlike in software, you cannot cut a physical pixel in half, or into any shape you want, or add a gradient to a single physical pixel; all you can do with physical pixels is change their color or hide them.
I don't know how many times I have to give the same example. I even told you that what you describe now is done when merging the edges of similar-resolution video projectors, because their pixels are pretty much the same size. It's a common technique in video mapping called "edge blending". I've mentioned this before, and now you talk as if I don't know about it when I've literally discussed it myself here.
But here one pixel is considerably larger than the other. If you add a gradient/fade across 180 pixels of the hi-res screen, you will only be able to add that fade across 3 pixels of the low-res screen. That does not make it smooth, no matter how much "DSP" you throw at it.
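A rough numeric sketch of that point, using the 180-to-3 pixel figures above (NumPy only):

```python
import numpy as np

hd_fade = np.linspace(1.0, 0.0, 180)   # a smooth fade spread over 180 hi-res pixels

# Those 180 hi-res pixels span only 3 big pixels here (60 hi-res pixels each),
# and each big pixel can show only one value across its whole width.
ld_fade = hd_fade.reshape(3, 60).mean(axis=1)   # the 3 values the big pixels can show
ld_fade_on_screen = np.repeat(ld_fade, 60)      # what actually appears on the screen

# ld_fade_on_screen is a 3-step staircase, not a smooth gradient; as the insert
# moves, those steps jump by whole big pixels, which is the popping effect above.
```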
I'm sorry, but this is going nowhere. You are repeating the same suggestion I have already responded to.
I got the answer I needed from another forum:

Saturable absorbers are dyes, ceramics and glasses that bleach when a high illumination threshold is reached. They are mainly used in pulsed lasers for mode locking or Q-switching at enormous energies.

A crude example is eyeglasses that turn into sunglasses when UV moves/changes the charge on a silver ion implanted in the glass. In fact, that is probably the only example I can think of that has a decay time longer than a millisecond or so.

With the exception of photochromic sunglasses and the electrochromic window glass in development, everything else needs an energy density too high to be practical to bleach with a mere diode laser... "Optical limiting" is an area undergoing much research for the "holy grail", but you have either too slow, too fast, or too much energy. There is a major need for a glass that does what you want on the battlefield, in aviation, and for laser safety, but so far nothing easy, cheap or well suited has appeared. Some progress has been made in home/building windows, but the contrast ratio is low.
There. What I want exists, but it either has too slow a transition time or needs too much power.
Sadly there is no way of making the transition smoother other than making the foveal region bigger so that the transition area is less noticeable, which is what some companies like Varjo are doing.
This definitely does not stop my research, but it concludes this topic, as I'm aware of the software methods for minimizing the noticeable seam, and there is no point discussing software algorithms on a physics forum anyway.
 
Last edited:
  • #59
wosoka said:
All along you don't seem to understand that, unlike in software, you cannot cut a physical pixel in half, or into any shape you want, or add a gradient to a single physical pixel; all you can do with physical pixels is change their color or hide them.
I understand the basics of sampling theory, and that is what we are discussing. I am not suggesting doing anything to change the shapes of pixels, and their fine positioning is not an issue. You are implying that the "hole" would have transitions of less than an LD pixel and that it could be positioned with similar accuracy. That's a big ask and would involve a very expensive mechanical solution, if it were even possible. The system I have suggested would have fewer moving parts and much lower requirements. Linearity would not be an issue either.
If you are prepared to have a significant overlap around the HD portion, there can be a progressively weighted sum of HD and LD samples across the edge, which can be made invisible. The pixel size is not relevant because the error need never be worse than the LD quality.
But, of course, this requires a tapered response for the opto-mechanical gate, which is another demand on your proposed system. This would be trivial if the hole were generated digitally in the LD image.
I think it's true to say that simple mechanical registration is unlikely to be achievable to a high enough degree, but "positional correction" of the HD image could be achieved relatively easily. Whatever the mechanical error is likely to be, the HD image position can be adjusted to put it in the right place. If the HD image is rectangular, the correction would be easier to achieve, but an irregular cut-out shape or jitter would be much less visible. Again, this is something that could be achieved with software.
Your quote from another forum suggests that you should not hold your breath, and I agree there will be a long wait before anything turns up - if ever. I guess there need to be a number of possible well-funded applications for the idea.
 
  • #60
sophiecentaur said:
I understand the basics of sampling theory and that is what we are discussing. I am not suggesting doing anything to change the shapes of pixels and their fine positioning is not an issue.
You are not, but the way you suggest doing it digitally can only be done optically/physically by going through those steps. I have tried to explain why, so either I can't explain it, despite several attempts, or I'm wrong; yet you haven't been able to explain to me how I'm wrong either, for one of the same two possible reasons.
Honestly, I would be glad to be proven wrong, as it would make my life much easier to do this digitally, but from my perspective and understanding you are repeating the same points I believe I have addressed several times by now, so this is going nowhere.

sophiecentaur said:
The system I have suggested would have fewer moving parts and much lower requirements.
It would have exactly the same number of moving parts: two steppers or galvos.

sophiecentaur said:
If you are prepared to have a significant overlap around the HD portion, there can be a progressively weighted sum of HD and LD samples across the edge, which can be made invisible.
Again, I have addressed this. No matter how big the transition is, the bigger pixels don't have enough resolution to avoid creating a popping effect as the gradient constantly changes.
Have you even tried simulating this in Photoshop? I know it's not easy to create an animation for this in Photoshop, but I don't have time to make more animated illustrations myself either. If you claim the gradient is smooth, then test it by creating an animation in which a constantly moving spot of the image is 20x higher resolution than the rest of the image and has a transition with the low-res section whose gradient resolution and step size are those of the low-res pixels (20x coarser). Save the animation, load it in fullscreen and try to follow the spot around with your eyes from a distance of 100 mm from the screen. You will get the effect of a foveated display, but your foveal view will be following the spot instead of the other way round. You will notice the gradient creating a "pixel" popping effect from the low resolution of the gradient fade.
If you don't have time to make an animation, well, neither do I.
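For anyone who does want to try it, a minimal sketch of the test frames (NumPy only; the frame size, the 20:1 ratio and the spot's path are placeholders, and writing the frames out to a GIF or video is left to whatever tool you prefer):

```python
import numpy as np

SCALE = 20                 # assumed ratio: one low-res pixel = 20 x 20 high-res pixels
H, W = 540, 960            # frame size in high-res pixels
FEATHER_LD = 3             # blend border of 3 low-res pixels around the spot

def make_frame(scene_hd, cx, cy, radius=120):
    """Low-res everywhere except a high-res disc centred on (cx, cy), with a
    fade whose step size is one low-res pixel (so only a few fade levels)."""
    # Low-res version of the scene: average 20x20 blocks, then repeat them back.
    ld = scene_hd.reshape(H // SCALE, SCALE, W // SCALE, SCALE).mean(axis=(1, 3))
    ld_up = np.repeat(np.repeat(ld, SCALE, axis=0), SCALE, axis=1)

    yy, xx = np.mgrid[0:H, 0:W]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    alpha = np.clip((radius + FEATHER_LD * SCALE - dist) / (FEATHER_LD * SCALE), 0, 1)
    alpha = np.ceil(alpha * FEATHER_LD) / FEATHER_LD   # fade collapses to coarse steps
    return alpha * scene_hd + (1 - alpha) * ld_up

scene = np.random.rand(H, W)                                       # stand-in picture
frames = [make_frame(scene, 200 + 6 * t, 270) for t in range(60)]  # spot drifts right
```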

sophiecentaur said:
But, of course, this requires a tapered response for the opto-mechanical gate, which is another demand on your proposed system. This would be trivial if the hole were generated digitally in the LD image.
My proposed system requires an optical (not mechanical) gate that forms a black spot on the low-resolution projection via a photochromic layer and a UV/IR beam, which is overlaid with the visible-spectrum high-resolution projection beam via a dichroic mirror before they reach the galvo or stepper-motor mirrors. There are no added mechanical parts, only optical ones.

Again, we are going around in circles, as I have tried explaining all of this numerous times already.
 
Last edited:
  • #61
wosoka said:
You are not, but the way you suggest doing it digitally can only be done optically/physically by going through those steps. I have tried to explain why, so either I can't explain it, despite several attempts, or I'm wrong; yet you haven't been able to explain to me how I'm wrong either, for one of the same two possible reasons.
Let's establish some basics about what the user is actually going to be presented with. I have never come across a video display that is intended to be viewed close enough for the pixels to be highly visible, and I have to assume that the HD pixels will be low-pass filtered to eliminate the sampling (pixels per cm) rate component. Yes, the LD pixels may be visible, but even that is not necessary with the right display. So I assume we are dealing with two displays with different cutoff spatial frequencies, and not with a load of little squares. It's very hard work to look at a pixellated display and, actually, the detail in such a display is harder to see than when the correct LP filtering is done.
Do you agree with that, and/or do you understand what I am talking about? If you have any doubts about pixellated displays then just take a good-quality picture and zoom in until the pixels are visible. How would you describe the "viewing experience" as you zoom back out in small steps?
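A quick way to try that comparison (a sketch assuming SciPy; the coarse image is just random data):

```python
import numpy as np
from scipy.ndimage import zoom

ld = np.random.rand(48, 64)        # a coarse image: one value per 'pixel'

blocky = zoom(ld, 16, order=0)     # nearest-neighbour: hard-edged visible squares
smooth = zoom(ld, 16, order=3)     # cubic interpolation: a low-pass reconstruction

# Both arrays carry the same information; 'blocky' is the pixellated view and
# 'smooth' is roughly what a correctly filtered display would present.
```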
 
  • #62
Thread closed temporarily for Moderation...
 
  • #63
wosoka said:
As I said once already,
we are not moving them digitally.
This is not comparable to a digital program like Photoshop, which works with digital pixels and only converts the result for viewing on fixed-size physical pixels once the digital filtering is finished.
You are not merging two images of different resolution digitally; you are trying to merge two projected images in the real world with different, unchangeable resolutions.

Look,
https://i.imgur.com/X2o4l11.gif
Try to show how you can move the "small pixel" here smoothly instead of in steps without creating gaps or overlaps.
Not only that, but as a small pixel moves into the space of one of the big pixels, the big pixel has to be hidden and its area filled with small pixels, again with specific positions and sizes.
No, you can't add anything in between, because this isn't a digital program but an illustration of "physical" pixels projected on a surface.

Pot, kettle, black. You are not my mind reader, nor am I yours. This has nothing to do with the topic; try sticking to it. I don't have time to read what you wrote to a straw man.
I just challenged your idea above. There, I wasted time making an animated illustration. Show me a solution to that and we'll call it a day; I'll even apologize for not seeing the solution sooner.

Why do you think there is no way? Because you have expertise in all aspects of this project? Does the fact that you aren't aware of a solution mean there isn't one? Do you see the irony? I literally just posted this and you completely ignored it while making that statement:
Guess what? This wasn't my idea; someone from another forum suggested it. What did I do with his idea? I took it on board.
Thread will remain closed. The newbie OP has a 10-day vacation to reconsider how best to benefit from the PF.

Thank you everybody for trying your best to help the OP in this thread.
 
  • Like
Likes davenn

1. How do you overlay light beams?

To overlay light beams, you need to have two light sources that emit beams of light. Place the two light sources at an angle to each other so that their beams intersect at a specific point.

2. What happens when two light beams intersect?

When two light beams intersect, their waves combine and create a new wave pattern. This is known as interference, and it can result in constructive or destructive interference depending on the phase of the waves.

3. What is constructive interference?

Constructive interference occurs when the two light beams have the same phase and their waves combine to create a larger amplitude. This results in a brighter spot at the point where the beams intersect.

4. What is destructive interference?

Destructive interference occurs when the two light beams have opposite phases and their waves cancel each other out. This results in a darker spot at the point where the beams intersect.
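For reference, assuming the two beams are mutually coherent and have the same polarization, the resultant intensity follows the standard two-beam interference formula

I = I_1 + I_2 + 2\sqrt{I_1 I_2}\,\cos\Delta\phi

so constructive interference (\Delta\phi = 2\pi m) gives I_{max} = (\sqrt{I_1} + \sqrt{I_2})^2, while destructive interference (\Delta\phi = (2m+1)\pi) gives I_{min} = (\sqrt{I_1} - \sqrt{I_2})^2.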

5. Can you control the interference pattern created by overlaying light beams?

Yes, the interference pattern can be controlled by adjusting the angle and phase of the light beams. This is the basis for many optical devices such as diffraction gratings and holograms.
