Image processing with Sobel edge detection

In summary: the X-axis convolution uses _kernel = new int[9] { -1, 0, 1, -2, 0, 2, -1, 0, 1 } and the Y-axis convolution uses _kernel = new int[9] { -1, -2, -1, 0, 0, 0, 1, 2, 1 }; both are applied to the blurred grayscale image and combined into a gradient magnitude.
  • #1
btb4198
I am coding Sobel edge detection in C#.
I have a method that converts my image to grayscale.

It adds the R, G, and B values, divides by 3, and replaces the R, G, and B values with that same number. That seems to be working fine.
[Attachment: GrayScale.PNG]
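Not the OP's actual method (it is not shown in the thread), just a minimal sketch of the averaging approach described above, assuming System.Drawing is available; the method and variable names are made up for illustration:
C:
using System.Drawing;

// Average the R, G, and B channels and write the same value back to all three,
// as described above. GetPixel/SetPixel are slow but keep the idea clear.
static Bitmap ToGrayscaleByAverage(Bitmap source)
{
    var result = new Bitmap(source.Width, source.Height);
    for (int x = 0; x < source.Width; x++)
    {
        for (int y = 0; y < source.Height; y++)
        {
            Color c = source.GetPixel(x, y);
            int avg = (c.R + c.G + c.B) / 3;
            result.SetPixel(x, y, Color.FromArgb(c.A, avg, avg, avg));
        }
    }
    return result;
}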


Then I added a Gaussian blur to it by using this:
_kernel = new int [9]{1,2,1,2,4,2,1,2,1};
and doing a convolution on my grayscale image:

[Attachment: gaussian blur.PNG]
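For reference (a standard fact, not something stated in the post): laid out as a 3x3 matrix, that kernel is the usual Gaussian approximation, and its weights sum to 16, so the convolution result is normally divided by 16 (the 0.0625 reciprocal that appears later in the thread):
$$\frac{1}{16}\begin{bmatrix}1&2&1\\2&4&2\\1&2&1\end{bmatrix}$$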


I guess that is working ...

At this point I do two more convolutions, one for the x-axis and one for the y-axis. I perform both separately on the same image that now has the grayscale conversion and Gaussian blur applied to it.
I do a convolution for X using this:
_kernel = new int[9] { -1, 0, 1, -2, 0, 2, -1, 0, 1 };

with my Image and store that result in a new array called resultX[,]

and then I do another convolution for Y
using this:
_kernel = new int[9] { -1, -2, -1, 0, 0, 0, 1, 2, 1 };

and store that result in another array resultY[,]

Mentor note: added code tags and fixed the problem of disappearing array index that caused subsequent text to be rendered in italics.
Then I do this:
C:
int i = 0;
for (int x = 0; x < Row; x++)
{
    for (int y = 0; y < Column; y++)
    {
        byte value = (byte) Math.Sqrt(Math.Pow(resultX[x, y], 2) + Math.Pow(resultY[x, y], 2));
        result[i] = value;
        result[i + 1] = value;
        result[i + 2] = value;
        result[i + 3] = 0;

        Angle[x, y] = Math.Atan(resultY[x, y] / resultX[x, y]);
        i += BitDepth;
    }
}
return result;
But my image is black:
[Attachment: Black.PNG]


Any idea of what I am doing wrong?
I checked my convolution method and it works just like how this guy says it should:
[Embedded video]
 
  • #2
I'm not completely sure without seeing the graphics statements, but:

I don't see what the result[] variable is, or what you are doing with it.
Shouldn't you store the absolute gradients in a two-dimensional array?

The Angle[x,y] will contain values from -π/2 to +π/2. You'll want to add another π if the x-gradient is negative. If you use this directly for an image, the values with a max of 1.57 will all look black.
If you want something like in the video, you'll need to use the magnitudes of the gradients as the brightness, and convert the angles into a color value.
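A minimal sketch of that idea (not code from the thread): Math.Atan2 already gives the full -π..π range, so no extra π has to be added by hand, and the angle can then be mapped to a hue while the gradient magnitude sets the brightness. The method name is made up; gx and gy stand for the resultX and resultY values at one pixel.
C:
using System;

// Convert a gradient (gx, gy) at one pixel to a magnitude and a hue angle in degrees.
static (double Magnitude, double HueDegrees) GradientToPolar(double gx, double gy)
{
    double magnitude = Math.Sqrt(gx * gx + gy * gy);
    double angle = Math.Atan2(gy, gx);                              // -π .. +π, handles gx == 0
    double hueDegrees = (angle + Math.PI) / (2 * Math.PI) * 360.0;  // 0 .. 360
    return (magnitude, hueDegrees);
}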
 
  • #3
willem2 said:
I'm not completely sure without seeing the graphics statements, but:

I don't see what the result[] variable is, or what you are doing with it.
Shouldn't you store the absolute gradients in a two-dimensional array?

The Angle[x,y] will contain values from -π/2 to +π/2. You'll want to add another π if the x-gradient is negative. If you use this directly for an image, the values with a max of 1.57 will all look black.
If you want something like in the video, you'll need to use the magnitudes of the gradients as the brightness, and convert the angles into a color value.

Question 1)
Mentor note: added code tags and fixed the problem of disappearing array index that caused subsequent text to be rendered in italics.
Sorry, it should be:
C:
private byte[] RunEdgeDetectionForByteArray()
{
    byte[] result = new byte[Size];
    double[,] Angle = new double[Row, Column];

    SodelEdgeDetectionX();
    double[,] resultX = RunConvolution();
    SodelEdgeDetectionY();
    double[,] resultY = RunConvolution();

    int i = 0;
    for (int x = 0; x < Row; x++)
    {
        for (int y = 0; y < Column; y++)
        {
            byte value = (byte) Math.Sqrt(Math.Pow(resultX[x, y], 2) + Math.Pow(resultY[x, y], 2));
            result[i] = value;
            result[i + 1] = value;
            result[i + 2] = value;
            result[i + 3] = 0;

            Angle[x, y] = Math.Atan(resultY[x, y] / resultX[x, y]);
            i += BitDepth;
        }
    }
    return result;
}
Question 2)
I have another function that converts result[] into a BitmapImage. That is why I do not keep it in a 2D array.
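The conversion function itself is not posted, but a hedged sketch of that kind of conversion, assuming WPF (System.Windows.Media.Imaging) and a 4-bytes-per-pixel buffer, would look something like this. One thing worth checking: with PixelFormats.Bgr32 the fourth byte is padding and ignored, but with Bgra32 it is the alpha channel, and the 0 written there in the loop above would make every pixel fully transparent.
C:
using System.Windows.Media;
using System.Windows.Media.Imaging;

// Wrap a BGRX byte buffer (4 bytes per pixel) in a BitmapSource at 96 DPI.
static BitmapSource BytesToBitmapSource(byte[] pixels, int width, int height)
{
    int stride = width * 4;   // bytes per row
    return BitmapSource.Create(width, height, 96, 96,
                               PixelFormats.Bgr32, null, pixels, stride);
}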

I am sorry, but I do not understand the second part:
"The Angle[x,y] will contain values from -π/2 to +π/2. You'll want to add another π if the x-gradient is negative. If you use this directly for an image, the values with a max of 1.57 will all look black.
If you want something like in the video, you'll need to use the magnitudes of the gradients as the brightness, and convert the angles into a color value."

What do you mean?
 
  • #4
@btb4198, please use code tags around your code. I have added them in your two preceding posts. A problem you have encountered is with arrays indexed by i: an expression such as "result[i]" is interpreted as an italics tag, so everything after "result" is rendered in italics and your code loses some information.

I have repaired the problem in the preceding posts. There is a sticky at the beginning of this forum section that explains how to use the code tags.
 
  • #5
willem2 said:
I don't see what the result[] variable is, or what you are doing with it.
Take a look at the code again. Because of an array index with i inside brackets, the code consumed the brackets and index, and converted everything following to italics.
 
  • #6
btb4198 said:
If you want something like in the video, you'll need to use the magnitudes of the gradients as the brightness, and convert the angles into a color value."

What do you mean?
You have a grayscale image in result[]. Doesn't that display anything?
The Gaussian blur will make the gradient values you get lower, so you might have to multiply them by a constant (and clip values at the maximum value of a byte).
And then you have an array of gradient angles as doubles, with values between -1.57... and +1.57..., in Angle[x,y]. I don't think that will display as an image in any library you might be using. In the video they convert this angle to the colour used for displaying what is in result[].
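A minimal sketch of the scale-and-clip suggestion (an assumed helper, not code from the thread):
C:
// Multiply a gradient magnitude by a gain and clamp it into the 0..255 range of a byte.
static byte ScaleAndClip(double magnitude, double gain)
{
    double scaled = magnitude * gain;
    if (scaled > 255.0) scaled = 255.0;
    if (scaled < 0.0) scaled = 0.0;
    return (byte)scaled;
}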
 
  • #7
willem2 said:
You have a grayscale image in result[]. Doesn't that display anything?
The Gaussian blur will make the gradient values you get lower, so you might have to multiply them by a constant (and clip values at the maximum value of a byte).
And then you have an array of gradient angles as doubles, with values between -1.57... and +1.57..., in Angle[x,y]. I don't think that will display as an image in any library you might be using. In the video they convert this angle to the colour used for displaying what is in result[].
1) Yes, I sent a picture of the grayscale.
What do you mean, in the video they convert this angle to the colour used for displaying? I don't remember him saying anything like that.

What time in the video are you referring to?
 
  • #8
In the video he talks about grayscale.
Also, I am not doing anything with the array of angles, because he never says what to do with them.
 
  • #9
btb4198 said:
In the video he talks about grayscale.
Also, I am not doing anything with the array of angles, because he never says what to do with them.
The only thing I can think of is that the values of the gradients are smaller than you expect. Anti-aliasing in the original image will often put pixels with intermediate values on edges, and the Gaussian blur will decrease contrast further. The range of values of a byte is 0-255; if your result values are 0..20, it might all look black.
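A hedged sketch of one quick way to check this (not from the thread): scale the magnitudes so the largest one maps to 255 before converting to bytes. If the normalized image shows edges, the raw values were simply too small.
C:
// Linearly rescale a 2D array of gradient magnitudes so the maximum becomes 255.
static byte[,] NormalizeToBytes(double[,] magnitudes)
{
    double max = 0;
    foreach (double m in magnitudes)
        if (m > max) max = m;

    int rows = magnitudes.GetLength(0), cols = magnitudes.GetLength(1);
    var output = new byte[rows, cols];
    if (max <= 0) return output;   // all-zero input stays black
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            output[r, c] = (byte)(magnitudes[r, c] / max * 255.0);
    return output;
}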

If that doesn't work, I think there's something wrong with the format of result, or the method you use to display it, but I can't tell without knowing what graphics/image-processing library you are using.

If you look at 6:00 in the video you can see where the angles are used to get a colour from the colour wheel to colour the edges.

Are you also quite sure that resultX and resultY are actually filled? I can't see the class declarations and the implementations of the other class methods, because none of them have explicit parameters.
 
  • #10
OK, I compared my Gaussian blur to Adobe Illustrator's and mine is not working. I did everything the guy said in the video for doing the convolution.
Here is my code:
Code:
public BitmapSource ApplyGaussianBlur(BitmapImage bitmapImage)
{
    if (bitmapImage == null) return null;
    Image = RunGrayScale(bitmapImage);
    GaussianBlur25();
    double[,] resultBlur = RunConvolution();

    BitmapSource bitmapSource = MakeBitmapSourceAfterNewFilter(resultBlur);
    return bitmapSource;
}

void GaussianBlur9()
{
    _kernel = null;
    _kernel = new double[9] { 1, 2, 1, 2, 4, 2, 1, 2, 1 };
    ConvolutionRangeX = 1;
    ConvolutionRangeY = 1;
    reciprocal = 0.0625D;
}

void GaussianBlur25()
{
    _kernel = null;
    _kernel = new double[25] { 1, 4, 7, 4, 1, 4, 16, 26, 16, 4, 7, 26, 41, 26, 7, 4, 16, 26, 16, 4, 1, 4, 7, 4, 1 };
    ConvolutionRangeX = 2;
    ConvolutionRangeY = 2;
    reciprocal = 0.003663D;
}

public double[,] RunConvolution()
{
    if (Image == null || _kernel == null) return null;

    double[,] result = new double[Row, Column];

    for (int x = 0; x < Row; x++)
    {
        for (int y = 0; y < Column; y++)
        {
            result[x, y] = ConvolutionMath(y, x);
        }
    }

    return result;
}

private double ConvolutionMath(int yLocation, int xLocation)
{
    double value = 0D;
    double valuetest = 0;
    int maxY = yLocation + ConvolutionRangeY;
    int minY = yLocation - ConvolutionRangeY;
    int maxX = xLocation + ConvolutionRangeX;
    int minX = xLocation - ConvolutionRangeX;
    int j = 0;
    for (int x = minX; x <= maxX; x++)
    {
        for (int y = minY; y <= maxY; y++)
        {
            if (y >= 0 && y < Column && x >= 0 && x < Row)
            {
                valuetest = (_kernel[j] * Image[x, y]);
                value = value + valuetest;
            }
            j++;
        }
    }
    value = value * reciprocal;

    return value;
}

I tested this by hand; I am doing the same thing the guy from the video is doing.
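One hedged guess about the difference from Illustrator (an observation about the code above, not something stated in the thread): near the image borders, ConvolutionMath skips kernel taps that fall outside the image but still multiplies by the fixed reciprocal of the full kernel sum, so border pixels come out darker. A sketch of a per-pixel renormalization, reusing the same class fields; this only makes sense for the blur kernels (all weights positive), not for the Sobel kernels, whose weights sum to zero:
C:
// Like ConvolutionMath, but divide by the sum of the kernel weights actually applied,
// so pixels near the border are not darkened by the missing taps.
private double ConvolutionMathNormalized(int yLocation, int xLocation)
{
    double value = 0D;
    double weightUsed = 0D;
    int j = 0;
    for (int x = xLocation - ConvolutionRangeX; x <= xLocation + ConvolutionRangeX; x++)
    {
        for (int y = yLocation - ConvolutionRangeY; y <= yLocation + ConvolutionRangeY; y++)
        {
            if (y >= 0 && y < Column && x >= 0 && x < Row)
            {
                value += _kernel[j] * Image[x, y];
                weightUsed += _kernel[j];
            }
            j++;
        }
    }
    return weightUsed != 0 ? value / weightUsed : 0;
}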
 

1. What is Sobel Edge detection?

Sobel edge detection is an image processing technique used for detecting edges in digital images. It is based on the concept of the gradient, which measures the change in intensity values between neighboring pixels. The Sobel operator uses two 3x3 kernels to estimate the gradient in the horizontal and vertical directions, and then combines them to determine the magnitude and direction of the edge.
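Concretely (standard definitions, matching the kernels used in the thread):
$$G_x=\begin{bmatrix}-1&0&1\\-2&0&2\\-1&0&1\end{bmatrix},\qquad G_y=\begin{bmatrix}-1&-2&-1\\0&0&0\\1&2&1\end{bmatrix},\qquad |G|=\sqrt{G_x^2+G_y^2},\qquad \theta=\operatorname{atan2}(G_y,G_x)$$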

2. How does Sobel Edge detection work?

Sobel edge detection works by convolving each 3x3 kernel with the image. A kernel holds weights for the pixels in the 3x3 neighborhood: the horizontal kernel weights the left and right neighbors with opposite signs, and the vertical kernel does the same for the rows above and below. This process is repeated for every pixel in the image, resulting in a gradient image that highlights edges by assigning high values to pixels with significant changes in intensity.
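A minimal, self-contained sketch of that process (standard Sobel on a double[,] grayscale image, not the thread's code; border pixels are skipped for simplicity):
C:
using System;

// Convolve the two Sobel kernels with a grayscale image stored as double[rows, cols]
// and return the horizontal and vertical gradient images.
static (double[,] Gx, double[,] Gy) Sobel(double[,] gray)
{
    int rows = gray.GetLength(0), cols = gray.GetLength(1);
    int[,] kx = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    int[,] ky = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };
    var gx = new double[rows, cols];
    var gy = new double[rows, cols];

    for (int r = 1; r < rows - 1; r++)
    {
        for (int c = 1; c < cols - 1; c++)
        {
            double sx = 0, sy = 0;
            for (int dr = -1; dr <= 1; dr++)
            {
                for (int dc = -1; dc <= 1; dc++)
                {
                    sx += kx[dr + 1, dc + 1] * gray[r + dr, c + dc];
                    sy += ky[dr + 1, dc + 1] * gray[r + dr, c + dc];
                }
            }
            gx[r, c] = sx;
            gy[r, c] = sy;
        }
    }
    return (gx, gy);
}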

3. What are the advantages of using Sobel Edge detection?

One of the main advantages of Sobel edge detection is that it detects edges reliably while its built-in smoothing reduces sensitivity to noise. It is also a simple and efficient technique that can easily be implemented in real-time applications. Additionally, Sobel edge detection is less sensitive to lighting and contrast variations in the image than some other edge detection methods.

4. What are the limitations of Sobel Edge detection?

One limitation of Sobel Edge detection is that it may not be able to detect edges in images with complex backgrounds or textures. It also tends to produce thick edges, which may not accurately represent the true edges in the image. Another limitation is that it cannot differentiate between different types of edges, such as sharp or blurry edges.

5. How is Sobel Edge detection used in real-world applications?

Sobel Edge detection is commonly used in various real-world applications, such as object detection and recognition, medical imaging, and self-driving cars. It is also used in video processing to track moving objects and detect motion. Additionally, Sobel Edge detection is often used as a pre-processing step in other image processing techniques, such as image segmentation and feature extraction.
