# How Does the Non-Equality of Kernels Imply Their Sum Equals the Vector Space?

TL;DR Summary
Let ##F## and ##G## be two non-zero linear functionals over a vector space ##V## of dimension ##n##. Assuming ##ker (F ) \neq \ker (G)##, determine the dimensions of the following subspaces: ##\ker (F )##, ##\ker (G)##, ##\ker (F ) + \ker (G)##, and ##\ker (F ) \cap \ker (G)##.
This is actually a solved exercise from a Brazilian book on Linear Algebra. The author presented the following solution:

The kernel and image theorem (rank–nullity) tells us that ##\dim V=n=\dim\ker\left(F\right)+\dim \text{im}\left(F\right)=\dim\ker\left(G\right)+\dim\text{im}\left(G\right)##. Since ##\text{im}\left(F\right)\subset \mathbb{R}##, ##\dim\mathbb{R}=1##, and ##F\neq0##, it follows that ##\dim\text{im}\left(F\right)=1##; similarly ##\dim\text{im}\left(G\right)=1##. Therefore ##\dim\ker\left(F\right)=\dim\ker\left(G\right)=n-1##. On the other hand, the dimension theorem of the sum assures us that

$$\dim\left(\ker\left(F\right)+\ker\left(G\right)\right)+\dim\left(\ker\left(F\right)\cap\ker\left(G\right)\right)=\dim\ker\left(F\right)+\dim\ker\left(G\right)=2n-2.$$

In general, ##\ker\left(G\right)\subset\ker\left(F\right)+\ker\left(G\right)##, and due to the hypothesis ##\ker\left(F\right)\neq\ker\left(G\right)## we will have ##\ker\left(F\right)\subsetneq\ker\left(F\right)+\ker\left(G\right)##; then necessarily ##\ker\left(F\right)+\ker\left(G\right)=V##. So ##\dim\left(\ker\left(F\right)+\ker\left(G\right)\right)=n## and hence

$$\dim\left(\ker\left(F\right)\cap\ker\left(G\right)\right)=\left(2n-2\right)-n=n-2.$$

I am OK with almost everything he presented, but I couldn't understand why

the hypothesis ##\ker\left(F\right)\neq\ker\left(G\right)## implies that ##\ker\left(F\right)+\ker\left(G\right)=V.##

Any ideas?

Thanks in advance.

Do you understand why ##\ker(F)\subsetneq \ker(F)+\ker(G)##?

Since the left-hand side has dimension ##n-1##, the right-hand side must have dimension ##n## or higher. But it is a subspace of ##V##, which has dimension ##n##, so its dimension is exactly ##n## and it must be all of ##V##.

Office_Shredder said:
Do you understand why ##\ker(F)\subsetneq \ker(F)+\ker(G)##?

Since the left hand side has dimension n-1, the right hand side must have dimension n or higher.
Oh my god! It was so simple!
Thank you very much!
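The dimension counts above can be sanity-checked numerically. Below is a minimal sketch using NumPy, with two hypothetical functionals ##F(x,y,z)=x## and ##G(x,y,z)=y## on ##V=\mathbb{R}^3## (so ##n=3##); these particular functionals are my own choice for illustration, not from the exercise.

```python
import numpy as np

# Hypothetical functionals on V = R^3 (n = 3):
# F(x, y, z) = x and G(x, y, z) = y, so ker(F) != ker(G).
F = np.array([[1.0, 0.0, 0.0]])
G = np.array([[0.0, 1.0, 0.0]])
n = 3

def kernel_dim(A):
    """Dimension of ker(A) via rank-nullity: dim ker = #columns - rank."""
    return A.shape[1] - np.linalg.matrix_rank(A)

dim_ker_F = kernel_dim(F)  # expect n - 1 = 2
dim_ker_G = kernel_dim(G)  # expect n - 1 = 2

# ker(F) ∩ ker(G) is the kernel of the stacked map v -> (F(v), G(v)).
dim_intersection = kernel_dim(np.vstack([F, G]))  # expect n - 2 = 1

# dim(ker F + ker G) from the dimension theorem of the sum.
dim_sum = dim_ker_F + dim_ker_G - dim_intersection  # expect n = 3

print(dim_ker_F, dim_ker_G, dim_sum, dim_intersection)
```

Changing ##F## and ##G## to any other pair of nonzero functionals with distinct kernels gives the same counts, matching the solution in the thread.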

