Is this a good word embedding model for text analysis?

AI Thread Summary
The discussion revolves around building an AI model to group texts into topics with k-means clustering, which requires embedding each text as a vector. The proposed method tokenizes the text into words, converts them to lowercase, and one-hot encodes each character into a 26-dimensional vector. These vectors are fed sequentially through a recurrent neural network (RNN) so that the character sequence is taken into account. The mathematical framework specifies how the RNN state updates with each character input via a transformation matrix W. The key question raised is whether W can be initialized randomly or must be adjusted through training. A reply clarifies that the approach is character-based rather than word-based, so it is not a word embedding model in the traditional sense, and notes that, despite the opening assumption, the procedure as described contains no actual word tokenization.
Trollfaz
I am trying to build an AI to group texts into topics using k-means clustering, so I must embed each text as a vector. Everyone knows that the conventional way is to first tokenize the text into words and strip all the punctuation characters. This gives me a list of words. This is what I plan to do with each word:
1) Convert the word to lowercase.
2) One-hot encode each character, so each input is a 26-dimensional vector/array.
3) Feed the vectors into an RNN sequentially so that the sequence of the characters is known.
Okay, here is the mathematical description. Before processing the nth character, the RNN is in state ##s_{n-1}##. The encoded vector of the nth character is ##c_n##.
$$s_0=0$$
$$s_n=s_{n-1}+Wc_n$$
Here W is a 26 by 26 transformation matrix. Return ##s_n## once all characters have been processed. If one wants to emphasize the significance of capital letters, one can assign a value of 2 instead of 1 at the index of the one-hot encoded vector for that character.
But my main problem is how to get W: can it be random, or must the values be adjusted through training?
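A minimal NumPy sketch of the scheme above, with W initialized randomly for now (whether a random W suffices is exactly the open question):

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
CHAR_INDEX = {ch: i for i, ch in enumerate(ALPHABET)}

def one_hot(ch, weight=1.0):
    """One-hot encode a single lowercase letter as a 26-dim vector.
    weight > 1 can mark characters that were capitals in the original text."""
    v = np.zeros(26)
    v[CHAR_INDEX[ch]] = weight
    return v

def embed_word(word, W):
    """Apply the update s_n = s_{n-1} + W c_n over the characters of word."""
    s = np.zeros(26)                                # s_0 = 0
    for ch in word:
        weight = 2.0 if ch.isupper() else 1.0       # emphasize capitals
        s = s + W @ one_hot(ch.lower(), weight)     # s_n = s_{n-1} + W c_n
    return s

rng = np.random.default_rng(0)
W = rng.standard_normal((26, 26))                   # random init (the open question)
print(embed_word("Hello", W))
```

One caveat worth noting: because the update is linear and the same W is applied at every step, the final state works out to ##s_N = W\sum_n c_n##, which depends only on the character counts and not their order, so e.g. "dog" and "god" map to the same vector no matter how W is chosen.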
 
Trollfaz said:
Is this a good word embedding model for text analysis?
No, it's not a word embedding model at all: your model is character-based, not word-based.

Trollfaz said:
Everyone knows that the conventional way is to first tokenize the text into words and strip all the punctuation characters.
Does everyone know that? I think I could probably find an exception.

Trollfaz said:
This is what I plan to do with each word...
There is no tokenization of words in the process you describe. There are many descriptions of tokenization on the interweb, e.g. https://medium.com/@james.moody/preparing-text-for-rnn-language-modeling-in-python-6474c3eae66e
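For illustration, here is a bare-bones word tokenizer in Python; the regex and lowercasing choices are just one simple convention, and real tokenizers handle many more edge cases:

```python
import re

def tokenize(text):
    """Lowercase the text and split it into word tokens,
    discarding punctuation and digits along the way."""
    return re.findall(r"[a-z]+", text.lower())

print(tokenize("Hello, world! It's K-means time."))
# ['hello', 'world', 'it', 's', 'k', 'means', 'time']
```

Note how even this toy example makes decisions (e.g. splitting "it's" into two tokens) that "strip all the punctuation characters" glosses over.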
 