Optical Touch Screen Code

  1. Jul 30, 2015 #1
    Hello, I am looking into building a touch screen. I have been considering a capacitive touch screen, but I am also interested in an optical touchscreen using infrared, like the ones in this link: http://www.ledsmagazine.com/articles/print/volume-10/issue-9/features/optical-touchscreens-benefit-from-compact-high-power-infrared-leds-magazine.html [Broken]

    Since my programming skills are extremely limited (if I feel bold), or nonexistent (if I compare myself to the average programmer out there), I was wondering if anyone has tried to write some code for the first two types of optical touchscreens in the link (see images below), or knows of any freely available open-source code.
    [Images: the first two touchscreen designs from the linked article]
    Thanks !
     
  3. Jul 31, 2015 #2

    jedishrfu

    Staff: Mentor

    It looks like you'll need to understand some trigonometry in order to convert your scan data into x,y coordinates of the touch.

    Have you thought about how to do that?
     
  4. Jul 31, 2015 #3
    I have an idea, yes. My maths isn't too bad (especially trigonometry). I've written some code to handle data from a 3D model before. My concern is that all the code I've ever written was extremely unoptimised and took ages to run (loops within loops within loops, combined with very basic functions). I don't think I've got the coding skills to code efficiently, especially if I have to do it on an Arduino, which has limited memory. That's why I'm asking. If I'm provided with some code, I'll be able to understand it with a bit of research and edit it to fit my needs.
     
  5. Jul 31, 2015 #4

    jedishrfu

    Staff: Mentor

    I don't think anyone here will be able to help you with the specifics of coding this algorithm. However, we can help you develop it if you show your work on it.

    I'd start first with a diagram of the geometry. Basically you'd follow something like this:
    Step 1: Read each sensor and convert the data to an angle measure
    Step 2: Use the angles to compute the x,y position output

    Also, I'd use 0,0 at the top-left corner, the common convention for screen displays, with x increasing from left to right and y increasing from top to bottom. A rough sketch of those two steps is below.
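    Something along these lines, purely as an illustration of the pipeline. The corner-camera geometry, the 45°/30° angles and the 400-unit screen width are made-up assumptions, not any particular design:

    Code:
    // A rough illustration only: two line-scan cameras assumed in the top-left
    // (0,0) and top-right (W,0) corners, x increasing to the right and y
    // increasing downward, per the convention above. Each camera reports the
    // angle between the top edge of the screen and its line of sight to the touch.
    #include <cmath>
    #include <cstdio>

    constexpr double kPi = 3.14159265358979323846;

    struct Point { double x, y; };

    // Step 2: intersect the two sight lines to get the touch position.
    // alphaLeft / alphaRight are in radians, measured downward from the top edge;
    // width is the distance between the two cameras (the screen width).
    Point touchFromAngles(double alphaLeft, double alphaRight, double width) {
        double tL = std::tan(alphaLeft);
        double tR = std::tan(alphaRight);
        Point p;
        p.x = width * tR / (tL + tR);   // from y = x*tan(aL) = (width - x)*tan(aR)
        p.y = p.x * tL;
        return p;
    }

    int main() {
        // Step 1 (reading a sensor and mapping a pixel index to an angle) is
        // hardware-specific, so made-up angles are used here: 45 deg and 30 deg.
        Point p = touchFromAngles(45.0 * kPi / 180.0, 30.0 * kPi / 180.0, 400.0);
        std::printf("touch at x = %.1f, y = %.1f\n", p.x, p.y);
        return 0;
    }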
     
  6. Jul 31, 2015 #5
    Thanks,

    I'll post again once I've done a bit of work and I know for sure which method I want to use. I also have to reassess my needs, because multitouch is fun but not really necessary when using it with a Windows desktop. I also need to pick the IR emitter/receiver combination I want to use (budget and all).

    Thanks for the conventions though. It will make it easier for people to understand.
     
  7. Aug 4, 2015 #6
    I apologise in advance for not using the convention, but I thought it would be better to keep it similar to the paper I got it from: http://www.google.co.uk/url?sa=t&rc...=TjaSiO67H9-u0xpDT6zI_A&bvm=bv.99261572,d.d24

    Hi all, I plan on using the following method for my display. I might add a third camera to increase the accuracy and allow multitouch. I think that relatively low-end line-scan cameras should do the trick and avoid a lot of wiring and connections, unlike arrays of IR receivers.

    Is it possible to output the camera image as a binary signal, i.e. areas where the finger is detected output 1 and the background outputs 0? If not, I will have to do some image processing, which I'm not sure is an easy thing to do.
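    Most cheap line-scan sensors give grayscale rather than a binary output, but thresholding against a calibration frame in software is only a few lines. A minimal sketch, assuming an 8-bit line of 128 pixels and a made-up threshold value:

    Code:
    // A minimal sketch of the "binary signal" idea: compare each pixel of the
    // current scan line against a background line recorded during calibration.
    // The line length (128) and threshold (20) are made-up values.
    #include <array>
    #include <cstdint>
    #include <cstdlib>

    constexpr int kPixels = 128;
    constexpr int kThreshold = 20;

    std::array<std::uint8_t, kPixels> background{};  // captured once with no finger present

    std::array<std::uint8_t, kPixels> toBinary(const std::array<std::uint8_t, kPixels>& line) {
        std::array<std::uint8_t, kPixels> out{};
        for (int i = 0; i < kPixels; ++i) {
            int diff = static_cast<int>(line[i]) - static_cast<int>(background[i]);
            out[i] = (std::abs(diff) > kThreshold) ? 1 : 0;  // 1 = finger, 0 = background
        }
        return out;
    }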

    [Figure 1: Coordinate System for Pointer Locator Using Stereovision]

    The original position of the pointer can be found by solving:

    [equation image from the paper]

    where:

    [equation image from the paper]

    and d2x, d2y are the coordinates of the focal point of the right camera.

    Dividing the original position by the pixel size of the display yields the cursor position of the pointer.
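    For reference, the generic two-ray intersection has the following form (this is the standard geometry, not necessarily written exactly as in the paper): with each camera's focal point at ##(d_{ix}, d_{iy})## and measured sight-line angle ##\theta_i## from the x-axis,

    $$x=\frac{d_{1x}\tan\theta_1-d_{2x}\tan\theta_2+d_{2y}-d_{1y}}{\tan\theta_1-\tan\theta_2},\qquad y=d_{1y}+(x-d_{1x})\tan\theta_1.$$

    Dividing x and y by the display's pixel pitch then gives the cursor position.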
     
  8. Aug 4, 2015 #7

    jedishrfu

    Staff: Mentor

    I think you need to display the image and use the x,y value you computed to get the pixel color value.
     
  9. Aug 6, 2015 #8
    I'm not clear on what you mean. I was thinking of using a monochrome camera. Why is the pixel color value needed? I'm just interested in its position.
     
  10. Aug 6, 2015 #9

    jedishrfu

    Staff: Mentor

    Sorry, I thought you meant a color camera and that perhaps you were trying to locate something in an image.

    As an example, a camera records the road in front of a car and the computer processes the image using a combination of filters to isolate and locate the road line markers for steering.
     
  11. Aug 6, 2015 #10
    Ah okay, sorry, I explained that badly.

    What I hope is to have a camera that "sees" a 1-pixel-thick layer above the display. The field of view of my line-scan camera is parallel to my display (side view). The idea is that after calibration the camera outputs a value of 0 for each pixel, say 00000000000000000. But when a finger touches the screen, the values of some pixels change, say 00135310000000000. I can therefore deduce that my finger is located where the 13531 is, i.e. towards the left of my image. The centre of my finger is located at the pixel with a value of 5. I can then work out its distance from the centre of my image: in the example above I've got 17 pixels, so the centre of my image is the 9th pixel, and my finger is offset by 4 pixels from the centre of my image (see N values in the sketch).
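    If it helps, here is a rough sketch of that step, using your 17-pixel example. A weighted average (centroid) is assumed rather than simply picking the brightest pixel, which gives a sub-pixel centre; everything else is illustrative only:

    Code:
    // Rough sketch of the step described above, assuming the scan line has already
    // been reduced to per-pixel intensities above background.
    #include <cstdio>
    #include <vector>

    // Returns the centroid pixel index of the touch, or -1 if nothing was detected.
    double touchCentroid(const std::vector<int>& line) {
        long sum = 0;
        long weighted = 0;
        for (int i = 0; i < static_cast<int>(line.size()); ++i) {
            sum += line[i];
            weighted += static_cast<long>(i) * line[i];
        }
        if (sum == 0) return -1.0;                    // no finger in view
        return static_cast<double>(weighted) / sum;   // sub-pixel finger centre
    }

    int main() {
        // The 17-pixel example from the post.
        std::vector<int> line = {0, 0, 1, 3, 5, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
        double centre = touchCentroid(line);              // 4.0 (the 5th pixel)
        double imageCentre = (line.size() - 1) / 2.0;     // 8.0 (the 9th pixel)
        std::printf("centroid at pixel %.2f, offset %.2f pixels from image centre\n",
                    centre, centre - imageCentre);
        return 0;
    }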
     