Image array processing, need function ideas

In summary, the conversation discusses writing code for an MCU that will output an image to a TFT screen with old-CRT effects such as shrinking, skewing, and random out-of-sync movement. The focus is on how to implement these effects through various functions, and on whether to use hardware or software. It is ultimately determined that the effects can be achieved in software using custom structures and functions, but there is uncertainty about how to write those functions.
  • #1
Alex_Sanders
I want to write code for an MCU. The MCU will output an image to a TFT screen, and here is the tricky part:

I want the final display to mimic an old CRT, which means it will shrink, skew, and go out of sync randomly, and the image will be constantly shaking.

I hope the FPS will be 60, but that's not the important part; the important part is how to write those functions. For example, regarding the "out of sync" effect, where the image constantly rolls from top to bottom (you know what I'm talking about: sometimes you hit the old TV on the back and it stops rolling the image), I can just make a pointer point to a random line of the image (the display-scan mode will be left to right, top to bottom), then output from there via the 16-bit IO so that line becomes the first line of the frame.
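
A minimal sketch of that idea in C (here `image`, `tft_write_pixel`, and the frame size are hypothetical placeholders, not a real TFT driver API):

```c
#include <stdint.h>
#include <stdlib.h>

#define FRAME_W 320
#define FRAME_H 240

/* Source image, one 16-bit (RGB565) word per pixel; assumed to be
 * generated offline and linked in. */
extern const uint16_t image[FRAME_H][FRAME_W];

/* Hypothetical driver call that pushes one pixel out the 16-bit port. */
void tft_write_pixel(uint16_t color);

/* Output one frame starting from a random scan line; line indices wrap
 * around, which produces the vertical-roll / out-of-sync look. */
void draw_rolled_frame(void)
{
    int offset = rand() % FRAME_H;                 /* random first line */
    for (int y = 0; y < FRAME_H; y++) {
        const uint16_t *line = image[(y + offset) % FRAME_H];
        for (int x = 0; x < FRAME_W; x++)
            tft_write_pixel(line[x]);
    }
}
```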

Now I believe you have the idea of what I'm getting at, but I really do not have much of a clue what to do for the rest of the functions. For example, the shrinking effect: I want the image to be horizontally shrunk so it looks slimmer. How do I implement that? Certain members of the array will be "lost", that's for sure.

So can anyone please give me your thoughts or some link on this matter? Thanks a lot.
 
  • #2
Hey Alex_Sanders.

The first question I want to ask is: do you want to do all of this in software, or do you want to make use of the kind of GPUs we find in modern computers?

The reason I ask is that if you use hardware, then you have to use the kinds of structures imposed by library standards like DirectX and OpenGL. If you choose software, then you can create custom structures that are specifically optimized for your application.

Usually the way to do these kinds of effects is to use the texture or fragment pipeline of your GPU. You load multiple textures onto the video card, and then through texture effects you can do things like modulate textures (useful for introducing static), change texture coordinates (to get the looping effect you see in old black-and-white broadcasts from the '40s to the '60s), and so on.

If you don't have a fancy GPU, then you will probably be forced to use a framebuffer and a software renderer (remember, you can use the GPU to process the picture, then read the framebuffer back from the GPU and export it to your TFT monitor), but given that every computer comes with a decent GPU these days (even notebooks), this should not be an issue.
 
  • #3
Check out this website

http://www.noiseindustries.com/downloads/misc/FxFactoryHelp/pluginguide/distort/analogtv/
 
  • #4
Nah, no GPU at all. It's going to be just an MCU, output to a small TFT screen, not even a monitor.

So yeah, everything will be implemented in software.
So first, I'll convert the DDS or BMP file to raw hex arrays, then use a function to process them.

So the real problem is that I don't know how to write those functions... I don't even know where to start.
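
One hedged way to structure that in C: convert the image offline to a const array of RGB565 words, then funnel every scan line through a filter function before it goes out the port. All names here are illustrative, not from any particular toolchain:

```c
#include <stdint.h>

#define FRAME_W 320
#define FRAME_H 240

/* Generated offline from the BMP (e.g., by a small conversion script):
 * one uint16_t (RGB565) per pixel, row-major. */
extern const uint16_t image[FRAME_H][FRAME_W];

/* A per-line effect: reads one source line, writes one output line. */
typedef void (*line_filter_t)(const uint16_t *src, uint16_t *dst, int y);

void tft_write_pixel(uint16_t color);   /* hypothetical driver call */

void draw_frame(line_filter_t filter)
{
    uint16_t out[FRAME_W];              /* one-line working buffer */
    for (int y = 0; y < FRAME_H; y++) {
        filter(image[y], out, y);
        for (int x = 0; x < FRAME_W; x++)
            tft_write_pixel(out[x]);
    }
}
```

With this shape, each CRT effect (roll, shrink, skew, jitter) becomes one small `line_filter_t` that can be swapped or chained per frame.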
 
  • #5
Well, as you have suggested, you will need some kind of framebuffer: probably a 2D matrix with each element being some fixed-size vector.

The size of the vector will depend on the color information. For this you will need to study the different color spaces and their advantages. When you do filtering, you may find that converting to another color space gives you an advantage in applying a specific kind of filter.

So with this in mind you will need numerical routines: optimized matrix and vector routines to convert between color spaces. The mathematics for this is basic linear algebra; it will help you, for example, to take a bitmap and convert it to a broadcast standard like black-and-white NTSC, color NTSC, SECAM, PAL, and so on. The matrices for these and other color spaces are well documented.
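
As a concrete instance, here is a minimal C sketch of the standard RGB-to-YIQ (NTSC) conversion; the coefficients are the commonly published matrix, and the `Pixel` struct is just an illustrative framebuffer element:

```c
typedef struct { float r, g, b; } Pixel;   /* one framebuffer element */
typedef struct { float y, i, q; } YIQ;

/* Standard NTSC RGB -> YIQ matrix; inputs assumed to be in [0,1]. */
YIQ rgb_to_yiq(Pixel p)
{
    YIQ out;
    out.y = 0.299f * p.r + 0.587f * p.g + 0.114f * p.b;
    out.i = 0.596f * p.r - 0.274f * p.g - 0.322f * p.b;
    out.q = 0.211f * p.r - 0.523f * p.g + 0.312f * p.b;
    return out;
}
```

Keeping only the Y channel then gives the black-and-white broadcast look.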

In terms of the filtering mechanism, what should help you here is to understand the texture pipeline in a modern GPU and implement some of those ideas in software. You can then create optimized routines for standard CPUs to do this as quickly as possible.

This means that on top of the framebuffer structure, where each entry has a color vector that can be transformed into a different space through a matrix, you add the concept of texture coordinates and texture operations that take multiple framebuffers and do 'stuff' with them.

The most important texture operation you will probably have is modulation. Modulation basically multiplies one texel by another and stores the result. When you do these operations, the values must be in the interval [0,1], but for some operations (like the dot product) you may want different ranges like [-1,1].
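
A minimal software sketch of modulation, assuming color components normalized to [0,1] (the function names are illustrative):

```c
#include <stddef.h>

/* Clamp a value into [0,1]; the product of two [0,1] values already
 * lies in [0,1], so this mainly guards other texture operations. */
static float clamp01(float v)
{
    return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);
}

/* Modulate buffer a by buffer b, component-wise, in place: a[i] *= b[i].
 * Multiplying a frame by a noise texture this way adds static. */
void modulate(float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] = clamp01(a[i] * b[i]);
}
```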

From this you will define what is called a 'shader'. A shader is a short program that says how you will create your final image or texture.

If you only want to do one or two effects then you probably won't need to implement shader functionality, but if you extend the system, then you most likely will.

The above is basically how things like RenderMan (the system behind movies like Toy Story and Finding Nemo) work. It's not exactly the same, of course, but the idea is the same.

I can give you a bit more advice but it will have to be in response to a more specific question since this is quite a big subject.
 
  • #6
Thanks a lot for your detailed information, but you might have over-complicated the matter.

First, no buffer is needed; all image data will be output to the 16-bit data port of the TFT in real time. For a 320*240 TFT screen, a single image "line" will contain 320 dots, and one dot, assuming the TFT supports 16-bit color, will be one array element like 0xFFFF (total white, I think).

So what I need is this: if I were to shrink an image horizontally, the actual line of data would go through a function that throws away certain array members while stuffing in blank dots (such as 0x0000, black) on both sides.


I understand you might have thought I wanted to implement some advanced SFX, like letting only one color through an alpha channel or something. No, nothing that complicated. At the bottom of this problem, it's just linear algebra: matrix transformation.

The problem is, I don't know how to get things done in the most professional, efficient way. I think the "pluck members out of the matrix" method might be the worst, most rookie approach one can think of.


This is what I'd like to be done:

Original image:

1111
1111
1111


Horizontally shrunk:
0110
0110
0110


Hope my explanation didn't make things worse. :shy:
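
For what it's worth, a hedged C sketch of exactly that operation: each output line is a nearest-neighbor decimated copy of the source line, centered between blank (0x0000, black) borders. The names and the nearest-neighbor choice are assumptions, not the only way to do it:

```c
#include <stdint.h>

#define FRAME_W 320

/* Shrink one scan line horizontally by `scale` (0 < scale <= 1).
 * Some source columns are simply "lost" to the decimation; the
 * survivors are centered between black borders. */
void shrink_line(const uint16_t *src, uint16_t *dst, float scale)
{
    int new_w = (int)(FRAME_W * scale);
    int left  = (FRAME_W - new_w) / 2;

    for (int x = 0; x < FRAME_W; x++)
        dst[x] = 0x0000;                      /* blank (black) border */

    for (int x = 0; x < new_w; x++) {
        int src_x = (int)((float)x / scale);  /* nearest-neighbor pick */
        if (src_x >= FRAME_W)
            src_x = FRAME_W - 1;
        dst[left + x] = src[src_x];
    }
}
```

Varying `scale` slightly per frame (or even per line) with a little random noise should give the breathing, shrinking CRT look.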
 
  • #7
And feel free to fill me in on how things are done in the industry; I'm always eager to learn.

I've read some things about shading, though it's very complex.
 
  • #8
I'll answer your previous question later on, but I thought I'd answer this one first.

If you want the basics of mainstream 3D rendering, the core idea is that you use a shader.

A shader is just a program that takes inputs like geometry, textures, texture coordinates, and texture operations, and helps produce an output frame that is either saved to a BMP (for an animated movie) or dumped to the graphics card's screen framebuffer, as in a video game.

In video games we have two pipelines: one works on geometric data (vertices) and one works on texture data (texture maps).

That is the basic version, but of course software renderers provide more flexibility in operations than hardware varieties.

If you want specifics, I would read the OpenGL specification's sections on shaders and on vertex and fragment programs: these give you the ideas and the specifics of how this is done on the GPU. You don't have to read all the programming details, but if you read the relevant outline you'll see pretty quickly how it works conceptually.

Also read about the RenderMan language if you are keen.
 

1. What is image array processing?

Image array processing is a type of digital image processing that involves manipulating and analyzing images using mathematical algorithms and computer-based operations. It is used to enhance or extract information from images, such as removing noise, detecting objects, or improving image quality.

2. What are some common applications of image array processing?

Image array processing has a wide range of applications, including medical imaging, satellite imagery, facial recognition, video surveillance, and computer vision. It is also used in industries such as agriculture, robotics, and self-driving cars.

3. What are some common functions used in image array processing?

There are many functions used in image array processing, including filters, transforms, edge detection, morphological operations, and feature extraction. These functions can be combined in various ways to achieve different objectives, depending on the specific application.
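
For example, one of the simplest members of the filter family above, a 3x3 box (averaging) blur, might look like this in C (a sketch for an 8-bit grayscale image; border pixels are copied unchanged for simplicity):

```c
#include <stdint.h>

/* 3x3 box blur: each interior output pixel is the mean of the 3x3
 * neighborhood around it in the source image. */
void box_blur(const uint8_t *src, uint8_t *dst, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
                dst[y * w + x] = src[y * w + x];   /* copy the border */
                continue;
            }
            int sum = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                    sum += src[(y + dy) * w + (x + dx)];
            dst[y * w + x] = (uint8_t)(sum / 9);
        }
    }
}
```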

4. How does image array processing differ from traditional image processing?

Image array processing focuses on processing images using algorithms and techniques specifically designed for digital images, treating them as arrays of pixel values. Traditional image processing, on the other hand, involves techniques that were developed for processing physical photographs or film.

5. What are some challenges in image array processing?

One of the main challenges in image array processing is dealing with noise and artifacts that can affect image quality. Another challenge is finding the right combination of functions and parameters to achieve the desired results. Additionally, processing large datasets and real-time applications can also be challenging and require efficient algorithms and hardware.
