How can I write code for an optical touchscreen using infrared technology?

AI Thread Summary
The discussion revolves around building a touchscreen, with a focus on capacitive and optical touchscreens using infrared technology. The user expresses limited programming skills and seeks open-source code for implementing optical touchscreen algorithms. Key points include the need to understand trigonometry for converting sensor data into x,y coordinates and the importance of optimizing code for efficiency, especially when using devices like Arduino with limited memory.

Participants suggest starting with a geometric diagram to outline the process, emphasizing the need to read sensor data and convert it into positional outputs. The user contemplates adding a third camera for improved accuracy and multitouch capabilities, while also exploring the possibility of outputting camera images as binary signals to detect touch points.

The conversation touches on the technical aspects of using a line scan camera to detect touch by analyzing pixel values, with the goal of determining the finger's position based on changes in pixel data. Overall, the discussion highlights the challenges and considerations in developing an effective touchscreen interface.
Sorade
Hello, I am looking into building a touch screen. I have been considering a capacitive touch screen, but I am also interested in optical touchscreens using infrared, like the ones in this link: http://www.ledsmagazine.com/articles/print/volume-10/issue-9/features/optical-touchscreens-benefit-from-compact-high-power-infrared-leds-magazine.html

Since my programming skills are extremely limited (if I'm feeling bold), or nonexistent (if I compare myself to the average programmer out there), I was wondering if anyone has tried to write some code for the first two types of optical touchscreens in the link (see images below), or knows of any freely available open-source code.
[Attached images from the article: 1309ledsweb_design2.jpg, 1309ledsweb_design3.jpg]

Thanks !
 
It looks like you'll need to understand some trigonometry in order to convert your scan data into x,y coordinates of the touch.

Have you thought about how to do that?
 
I have an idea, yes. My maths isn't too bad (especially trigonometry), and I've written some code to handle data from a 3D model before. My concern is that all the code I've ever written was extremely unoptimised and took ages to run (loops within loops within loops, combined with very basic functions). I don't think I've got the coding skills to code efficiently, especially if I have to do it on an Arduino, which has limited memory. That's why I'm asking. If I'm provided with code, I'll be able to understand it with a bit of research and edit it to fit my needs.
 
I don't think someone here will be able to help you with the specifics of coding this algorithm. However, we can help you develop it if you show your work on it.

I'd start first with a diagram of the geometry. Basically you'd follow something like this:
Step 1: Read each sensor and convert its data to an angle measure
Step 2: Use the angle inputs to get the x,y position output

Also, I'd start with (0,0) being the top-left corner, as is the common convention for screen displays, with x increasing from left to right and y increasing from top to bottom.
 
Thanks,

I'll post again once I've done a bit of work and I know for sure which method I want to use. I also have to reassess my needs, because multitouch is fun, but when using it with a Windows desktop it is not that necessary. I also need to pick the IR emitter/receiver combination I want to use (budget and all).

Thanks for the conventions though. It will make it easier for people to understand.
 
I apologise in advance for not using the convention, but I thought it would be better to keep it similar to the paper I got it from: http://www.google.co.uk/url?sa=t&rc...=TjaSiO67H9-u0xpDT6zI_A&bvm=bv.99261572,d.d24

Hi all, so I plan on using the following method for my display. I might add a third camera to increase the accuracy and allow multitouch. I think that relatively low-end line-scan cameras should do the trick, and avoid a lot of the wiring and connections that using IR receivers would need.

Is it possible to output the image of the camera as a binary signal, i.e. areas where the finger can be detected would output 1 and the background would output 0? If not, I will have to do some image processing, which I'm not sure is an easy thing to do.

[Figure 1: Coordinate System for Pointer Locater Using Stereovision — image not shown]

The original position of the pointer can be found by solving:

[equation shown as an image in the original post]

Where:

[equation shown as an image in the original post]

and d2x, d2y are the coordinates of the focal point of the right camera.

Dividing the original position by the pixel size of the display yields the cursor position of the pointer.
 
I think you need to display the image and use the x,y value you computed to get the pixel color value.
 
jedishrfu said:
I think you need to display the image and use the x,y value you computed to get the pixel color value.

I'm not clear on what you mean. I was thinking of using a monochrome camera. Why is the pixel colour value needed? I'm just interested in its position.
 
Sorry I thought you meant a color camera and I thought perhaps you were trying to locate something in an image.

As an example, a camera records the road in front of a car and the computer processes the image using a combination of filters to isolate and locate the road line markers for steering.
 
Ah okay, sorry, I explained that badly.

What I hope is to have a camera that "sees" a 1-pixel-thick layer above the display, i.e. the field of view of my line-scan camera is parallel to my display (side view). The idea is that, after calibration, the camera outputs a value of 0 for each pixel, say 00000000000000000. But when a finger touches the screen, the values of some pixels change, say 00135310000000000. I can therefore deduce that my finger is located where the 13531 is, i.e. towards the left of my image. The centre of my finger is located at the pixel with a value of 5. I can then work out the distance between my finger and the centre of my image: in the example above I've got 17 pixels, so the centre of my image is the 9th pixel, and my finger's centre is the 5th pixel, so my finger is offset by 4 pixels from the centre of my image (see the N values in the sketch).
 