Combining feature vectors for a neural network

In summary, the conversation discusses the best way to combine feature vectors from two different datasets, one containing videos about cats and the other videos about houses, in order to feed them into a classifier such as a neural network. The suggestion is to either concatenate the vectors or reshape them into a single vector, depending on how the data is stored. The potential issue of high dimensionality in the input vector is also mentioned, and dimensionality reduction techniques are suggested as a remedy. The use of an RBF-ANN as the classifier is also discussed.
  • #1
themagiciant95
Let's consider this scenario. I have two conceptually different video datasets, for example a dataset A composed of videos about cats and a dataset B composed of videos about houses. I'm able to extract a feature vector from the samples of both datasets A and B, and I know that each sample in dataset A is related to one and only one sample in dataset B, and together they belong to a specific class (there are only 2 classes).

For example:

Sample x1 AND sample y1 ---> Class 1
Sample x2 AND sample y2 ---> Class 2
Sample x3 AND sample y3 ---> Class 1
and so on...

If I extract the feature vectors from samples in both datasets, what is the best way to combine them in order to give a correct input to the classifier (for example, a neural network)?

feature vector v1 extracted from x1 + feature vector v1' extracted from y1 ---> input for classifier

I ask this because I suspect that neural networks only take one vector as input, while I have to combine two vectors.
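For what it's worth, concatenation is the usual way to handle a pair like this; here is a minimal NumPy sketch, where the 128-dimensional vectors are made up for illustration:

```python
import numpy as np

# Hypothetical feature vectors for one paired sample (x1 from A, y1 from B).
# The dimension 128 is just for illustration.
v1 = np.random.rand(128)        # extracted from x1 (cats dataset)
v1_prime = np.random.rand(128)  # extracted from y1 (houses dataset)

# Concatenate into a single input vector for the classifier.
x_input = np.concatenate([v1, v1_prime])
print(x_input.shape)  # (256,)
```

The classifier then just sees one 256-dimensional input per paired sample.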
 
  • #2
themagiciant95 said:
I ask this because I suspect that neural networks only take one vector as input
why do you think so?
 
  • #3
lomidrevo said:
why do you think so?
Hi, do you know of any neural networks that take more than a single vector as input?
 
  • #4
Generally, the inputs to an ANN can be seen as an array of numbers, or a vector if you wish. On the other hand, any (finite) multidimensional array can be reshaped into a vector, so you can pass the components of the two vectors "serialized" as components of a single vector, without losing any information. If the size of your input vectors is the same, say N, the size of the input layer of your net will be just 2*N. In general, if you had M vectors of size N, your input layer would be of size M*N. For example, that's the case for ANNs that process grayscale images.
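A small sketch of that serialization idea, with made-up sizes (M = 2 vectors of N = 4 components each):

```python
import numpy as np

M, N = 2, 4                      # hypothetical: 2 feature vectors of size 4
vectors = np.random.rand(M, N)   # the two vectors stacked as a matrix

# Reshape ("serialize") into a single input vector of size M*N.
flat = vectors.reshape(-1)
print(flat.shape)  # (8,)

# Same idea as flattening a 28x28 grayscale image into a 784-vector
# before feeding it to a dense network.
image = np.random.rand(28, 28)
print(image.reshape(-1).shape)  # (784,)
```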
 
  • #5
Are you talking about concatenating the vectors in order to create an appropriate input vector for the deep neural network?
 
  • #6
Yes. Or reshape the matrix into a vector; it depends on how you store the data.
It is useful to realize that at the beginning of training, when the network hasn't seen any instances yet, the input features can be considered as "independent" variables, so the network doesn't really care about our representation of the data. It doesn't matter whether those are components of one vector, of M vectors, or of any other structure. It is the goal of the training to find the hidden relationships in the data.
We are talking about supervised learning, right?
 
  • #7
Do you work in some ML framework, or are you trying to code it from scratch? I definitely recommend the first option.
 
  • #8
I would like to try RBF-ANN as the classifier.

Let me explain better. For the moment ignore my original question.
I have a dataset A of videos. I've extracted the feature vector of each video (with a convolutional neural network, via transfer learning), creating a dataset B. Now, every vector in dataset B has a high dimension (about 16000), and I would like to classify these vectors using an RBF-ANN (there are only 2 possible classes).

Is it a problem if the input vector to the RBF-ANN has a high dimension (16000)? If yes, any way to deal with it?
 
  • #9
Honestly I am not familiar with RBF-ANN. Just for my curiosity, do you have a specific reason to use it?

themagiciant95 said:
Is it a problem if the input vector to the RBF-ANN has a high dimension (16000)?
If you haven't tried running it yet, give it a chance. Maybe there won't be any problems.
Eventually, the training might become very slow, or you might have issues with insufficient memory. Also, this phenomenon could be a problem:
https://en.wikipedia.org/wiki/Curse_of_dimensionality
themagiciant95 said:
If yes, any way to deal with it?
Maybe try some of these techniques:
https://en.wikipedia.org/wiki/Dimensionality_reduction
(no experience on my side...)
 
  • #10
Thank you infinitely. I'll try :))
 

Related to Combining feature vectors for a neural network

1. What is the purpose of combining feature vectors for a neural network?

The purpose of combining feature vectors for a neural network is to improve the performance and accuracy of the network. By combining multiple feature vectors, the network can learn more complex patterns and relationships between the input data, leading to better predictions and results.

2. How do you combine feature vectors for a neural network?

There are various techniques for combining feature vectors for a neural network, such as concatenation, averaging, and weighted averaging. These methods involve merging the vectors into one larger vector or calculating the average value of each feature across all vectors. The specific technique to use depends on the type of data and the desired outcome.
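A toy comparison of those techniques on two made-up 3-component vectors (the 0.7/0.3 weights are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([5.0, 6.0, 7.0])

concat = np.concatenate([a, b])   # length 6: keeps every component
avg = (a + b) / 2                 # length 3: elementwise mean
weighted = 0.7 * a + 0.3 * b      # length 3: elementwise weighted mean

print(concat)  # [1. 2. 3. 5. 6. 7.]
print(avg)     # [3. 4. 5.]
print(weighted)
```

Concatenation preserves all information at the cost of a larger input; the averaging variants keep the input size fixed but mix the two sources together.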

3. Can combining feature vectors lead to overfitting?

Yes, combining feature vectors can potentially lead to overfitting if the feature vectors are highly correlated or if the network is not properly regularized. Overfitting occurs when the network memorizes the training data instead of learning general patterns, resulting in poor performance on new data. To avoid overfitting, it is important to carefully select and preprocess the feature vectors and to use appropriate regularization techniques.

4. Are there any drawbacks to combining feature vectors for a neural network?

One potential drawback of combining feature vectors for a neural network is an increase in computational complexity. As the number of feature vectors increases, so does the size of the combined vector, which can make training and inference slower. Additionally, if the feature vectors are not properly aligned or normalized, combining them can lead to noisy or irrelevant data being introduced to the network, which can negatively impact performance.

5. Can feature vectors from different sources be combined?

Yes, feature vectors from different sources can be combined as long as they represent the same underlying data or concept. However, it is important to carefully consider the compatibility of the feature vectors and to properly preprocess and align them before combining. This can involve techniques such as data normalization, feature selection, and dimensionality reduction.
