Image array processing, need function ideas

by Alex_Sanders
Tags: array, function, ideas, image, processing
Alex_Sanders
#1
Mar9-12, 12:05 AM
P: 80
I want to write code for an MCU that will output an image to a TFT screen, and here is the tricky part:

I want the final display to mimic an old CRT, which means it will shrink, skew, and fall out of sync randomly, and the image will be constantly shaking.

I hope the FPS will be 60, but that's not the important part; the important part is how to write those functions. For example, for the "out of sync" effect, where the image constantly rolls from top to bottom (you know what I'm talking about: sometimes you hit the old TV on the back and it stops rolling the image), I can just point the pointer at a random line of the image (the scan order is left to right, top to bottom), then output it via the 16-bit IO as the first line of the frame.
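Something like this sketch, assuming the frame sits in a row-major RGB565 array (the function and buffer names are my own, purely hypothetical):

```c
#include <stdint.h>
#include <stddef.h>

/* Copy `src` (w*h pixels, row-major RGB565) into `dst`, starting the
 * scan-out at scanline `offset` and wrapping around, which reproduces
 * the vertical "roll". On the real MCU the inner write would go to the
 * 16-bit data port instead of a destination buffer. */
static void roll_frame(const uint16_t *src, uint16_t *dst,
                       int w, int h, int offset)
{
    for (int y = 0; y < h; y++) {
        const uint16_t *line = src + (size_t)((y + offset) % h) * (size_t)w;
        for (int x = 0; x < w; x++)
            *dst++ = line[x];   /* -> 16-bit IO write on hardware */
    }
}
```

Incrementing `offset` every frame makes the picture loop from top to bottom; holding it at 0 "fixes" the sync.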

Now I believe you have the idea of what I'm getting at, but I really don't have much of a clue what to do for the rest of the functions. For example, the shrinking effect: I want the image to be horizontally shrunk so it looks slimmer. How do I implement that? Certain members of the array will be "lost", that's for sure.

So can anyone please give me your thoughts or a link on this matter? Thanks a lot.
chiro
#2
Mar9-12, 12:24 AM
P: 4,572
Quote by Alex_Sanders:
I want the final display to mimic an old CRT, which means it will shrink, skew, and fall out of sync randomly, and the image will be constantly shaking. [...]
Hey Alex_Sanders.

The first question I want to ask is whether you want to do all of this in software or whether you want to use the kind of GPU we find in modern computers.

The reason I ask is that if you use hardware, you have to work with the structures imposed by library standards like DirectX and OpenGL. If you choose software, you can create custom structures that are specifically optimized for your application.

Usually the way to do these kinds of effects is to use the texture or fragment pipeline of your GPU. You load multiple textures onto the video card, and then through a texture effect you can do things like modulate textures (useful for introducing static), change texture coordinates (to get the looping effect you see in the old broadcasts from '40s-to-'60s black and white), and so on.

If you don't have a fancy GPU (remember you can use the GPU to process the picture, read back the framebuffer, and export it to your TFT monitor), then you will probably be forced to use a framebuffer and a software renderer; but given that every computer comes with a decent GPU these days (even notebooks), this should not be an issue.
jedishrfu
#3
Mar9-12, 12:28 AM
P: 2,805
Check out this website

http://www.noiseindustries.com/downl...tort/analogtv/

Alex_Sanders
#4
Mar10-12, 06:05 AM
P: 80

Quote by chiro:
The first question I want to ask is whether you want to do all of this in software or whether you want to use the kind of GPU we find in modern computers. [...]

Nah, no GPU at all; it's going to be just an MCU, with output to a small TFT screen, not even a monitor.

So yeah, everything will be implemented in software.
First, I'll convert the DDS or BMP file to raw hex arrays, then use a function to process them.

So the real problem is that I don't know how to write those functions... I don't even know where to start.
chiro
#5
Mar10-12, 06:30 AM
P: 4,572
Quote by Alex_Sanders:
Nah, no GPU at all; it's going to be just an MCU, with output to a small TFT screen, not even a monitor. [...]
Well, as you have suggested, you will need some kind of framebuffer: probably a 2D matrix with each element being some fixed-size vector.

The size of the vector will depend on the color information. For this you will need to study the different color spaces and their advantages. When you do filtering, you may find that converting to another color space makes a specific kind of filter easier to apply.

So with this in mind you will need numerical routines: optimized matrix and vector routines to convert between color spaces. The mathematics is basic linear algebra; it will let you, for example, take a bitmap and convert it to a broadcast standard like black-and-white NTSC, color NTSC, SECAM, PAL, and so on. The matrices for these and other color spaces are well documented.
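As an illustration of one such routine, here is a fixed-point sketch (a hypothetical function of my own, not from any library) that converts an RGB565 texel to 8-bit luma using the BT.601 weights Y = 0.299 R + 0.587 G + 0.114 B; this is the kind of matrix row you would apply to fake a black-and-white broadcast:

```c
#include <stdint.h>

/* Convert an RGB565 pixel to 8-bit luma (BT.601 weights) in fixed
 * point: coefficients 0.299/0.587/0.114 scaled by 256. */
static uint8_t rgb565_to_luma(uint16_t px)
{
    /* expand the 5/6/5-bit channels to 8 bits each */
    uint8_t r = (uint8_t)((px >> 11) & 0x1F); r = (uint8_t)((r << 3) | (r >> 2));
    uint8_t g = (uint8_t)((px >> 5)  & 0x3F); g = (uint8_t)((g << 2) | (g >> 4));
    uint8_t b = (uint8_t)( px        & 0x1F); b = (uint8_t)((b << 3) | (b >> 2));
    return (uint8_t)((77 * r + 150 * g + 29 * b) >> 8);
}
```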

In terms of the filtering mechanism, what should help you here is to understand the texture pipeline in a modern GPU and implement some of those stages in software. You can then create optimized routines for standard CPUs to do this as quickly as possible.

This means that on top of the framebuffer structure, where each entry has a vector of color information that can be transformed into a different space through a matrix, you add the concept of texture coordinates and texture operations that take multiple framebuffers and combine them.

The most important texture operation you will probably have is modulation. Modulation basically multiplies one texel by another and stores the result. For these operations the values are usually kept in the interval [0,1], but for some operations (like the dot product) you may want a different range such as [-1,1].
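In 8-bit fixed point, where 255 plays the role of 1.0, modulation can be sketched like this (a hypothetical helper, not an existing API):

```c
#include <stdint.h>

/* Multiply two 8-bit channel values as if they lay in [0,1]
 * (255 acts as 1.0), rounding to nearest. Modulating each frame
 * against a noise texture is a cheap way to get CRT-style static. */
static uint8_t modulate(uint8_t a, uint8_t b)
{
    return (uint8_t)((a * b + 127) / 255);
}
```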

From this you can define what is called a 'shader': a short program that says how you will create your final image or texture.

If you only want one or two effects then you probably won't implement full shader functionality, but if you extend the system, you most likely will.

The above is basically how things like RenderMan (the system behind movies like Toy Story and Finding Nemo) work. It's not exactly the same, of course, but the idea is the same.

I can give you a bit more advice, but it will have to be in response to a more specific question, since this is quite a big subject.
Alex_Sanders
#6
Mar10-12, 10:41 PM
P: 80
Quote by chiro:
Well, as you have suggested, you will need some kind of framebuffer: probably a 2D matrix with each element being some fixed-size vector. [...]


Thanks a lot for your detailed information, but you might have over-complicated the matter.

First, no buffer is needed; all image data will be output to the 16-bit data port of a TFT in real time. For a 320*240 TFT screen, a single image "line" contains 320 dots, and one dot, assuming the TFT supports 16-bit color, is an array element like 0xFFFF (which in RGB565 is actually white; 0x0000 is black).

So what I need is this: to shrink an image horizontally, each line of data goes through a function that throws away certain array members while stuffing blank black dots (0x0000) in on both sides.

I understand you might have thought I wanted to implement some advanced effect like letting only one color through an alpha channel; no, nothing that complicated. At bottom, this problem is just linear algebra, a matrix transformation.

Problem is, I don't know how to get this done in the most professional, efficient way. I think the "pluck members out of the matrix" method might be the worst, most amateurish one could think of.


This is what I'd like to be done:

Original image:

1111
1111
1111


Horizontally shrunk:
0110
0110
0110


Hope my explanation didn't make things worse.
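For what it's worth, one way to sketch exactly that transform in C (hypothetical names; nearest-neighbour sampling with the result centred and padded with black):

```c
#include <stdint.h>

/* Horizontally shrink one scanline by an integer factor, centring the
 * result and padding both sides with black (0x0000 in RGB565).
 * Nearest-neighbour: keep every `factor`-th pixel, drop the rest. */
static void shrink_line(const uint16_t *in, uint16_t *out, int w, int factor)
{
    int new_w = w / factor;
    int pad = (w - new_w) / 2;
    for (int x = 0; x < w; x++)
        out[x] = 0x0000;                 /* blank (black) border */
    for (int x = 0; x < new_w; x++)
        out[pad + x] = in[x * factor];   /* sample every factor-th dot */
}
```

Called once per scanline as the line streams out, this stays buffer-free; a fractional shrink would use a fixed-point step instead of an integer factor. On the 1111 example above, a factor of 2 yields exactly 0110.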
Alex_Sanders
#7
Mar10-12, 10:44 PM
P: 80
And feel free to fill me in on how things are done in the industry; I'm always eager to learn.

I've read things about shading; it seems very complex, though.
chiro
#8
Mar11-12, 04:45 AM
P: 4,572
Quote by Alex_Sanders:
And feel free to fill me in on how things are done in the industry; I'm always eager to learn.

I've read things about shading; it seems very complex, though.
I'll answer your previous question later on, but I thought I'd answer this one first.

If you want the basics of 3D rendering in terms of the mainstream technique, the basic idea is that you use a shader.

A shader is just a program that takes inputs like geometry, textures, texture coordinates, and texture operations, and produces an output frame that is either saved to a BMP (for an animated movie) or dumped to the graphics card's framebuffer (as in a video game).

In video games we have two pipelines: one works on geometric data (vertices) and one works on texture data (texture maps).

That is the basic version, but of course software renderers provide more flexibility in terms of operations than hardware ones.

If you want specifics, I would read the OpenGL specification's sections on shaders and on vertex and fragment programs: they give you the ideas and the specifics of how this is done on the GPU. You don't have to read all the programming details, but if you read the relevant outline you'll see pretty quickly how it works conceptually.

Also read about the RenderMan shading language if you are keen.

