Building a homemade Long Short Term Memory with FSMs

  • Context: Python
  • Thread starter: Trollfaz
  • Tags: Homemade
SUMMARY

This discussion focuses on building a Long Short Term Memory (LSTM) algorithm from scratch using Finite State Machines (FSMs). The author describes how FSMs can retain memory of past inputs through state transitions, using one function for input processing and another for output generation. The proposed model involves a network of 10,000 FSMs, each with a randomly assigned weight; their weighted outputs are aggregated to form the final result. Training is conducted by minimizing the loss function with gradient descent, paralleling traditional neural network methodologies.

PREREQUISITES
  • Understanding of Long Short Term Memory (LSTM) algorithms
  • Familiarity with Finite State Machines (FSMs)
  • Knowledge of gradient descent optimization techniques
  • Basic concepts of neural networks and their components
NEXT STEPS
  • Research the implementation of LSTM algorithms in TensorFlow 2.x
  • Explore advanced techniques in training neural networks with PyTorch
  • Learn about the role of weights in neural network performance
  • Study the mathematical foundations of gradient descent and loss functions
USEFUL FOR

This discussion is beneficial for machine learning enthusiasts, algorithm developers, and researchers interested in the intersection of finite state machines and neural network architectures.

Trollfaz
I am doing a project to build a Long Short Term Memory (LSTM) algorithm from scratch. LSTMs are a kind of recurrent neural network that can retain memory of past inputs and carry it forward for future operations, which makes them well suited to processing sequences of inputs such as sound and text.

One possible way I can think of implementing such a method is with Finite State Machines (FSMs). In the simplest model, the FSM at any point in time is in some state ##s \in S##. After reading an input at time t, the node transitions from state ##s_{t-1}## to ##s_t## via a function ##f_{in}(s_{t-1},x_t)## for a valid input ##x_t \in X##. The node then produces an output ##o_t=f_{out}(s_t)## and remains in the new state for the next iteration. In this way it retains some memory, or information, of past inputs.
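For concreteness, here is a minimal Python sketch of a single such node. The transition function ##f_{in}## and output function ##f_{out}## are only illustrative placeholders (a leaky accumulator and a clamp); the thread does not fix any particular choice.

```python
# Minimal sketch of one FSM node as described above.
# f_in and f_out are illustrative placeholders, not a prescribed design.

class FSMNode:
    def __init__(self, initial_state=0.0):
        self.state = initial_state  # s_{t-1}, carried between inputs

    def step(self, x):
        # s_t = f_in(s_{t-1}, x_t): here a simple decaying accumulator
        self.state = 0.9 * self.state + 0.1 * x
        # o_t = f_out(s_t): here a clamp of the state to [-1, 1]
        return max(-1.0, min(1.0, self.state))

# Feed a short input sequence; the node retains a trace of past inputs.
node = FSMNode()
outputs = [node.step(x) for x in [1.0, 0.0, 0.0, -1.0]]
print(outputs)
```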

Now, in complex modelling tasks such as text, does a large number of FSMs make a good LSTM model?
 
I shall now elaborate on how the network of FSMs works. Let the system contain N FSMs for a large value of N, say ##10^4##. Each FSM's output is multiplied by a randomly assigned weight. The aggregate output of the system is then
$$\sum_{i=1}^N w_i o_i= \textbf{w}^T\textbf{o}_t$$
where ##\textbf{w}## and ##\textbf{o}_t## are the vectors of assigned weights and node outputs at time t, respectively. The weights are free to adjust as we train the algorithm and are initially set to small random values. During training, we minimize the loss ##L=\sum(\text{predicted}-\text{actual})^2## by gradient descent with respect to the weights.
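As a rough illustration of the aggregation ##\textbf{w}^T\textbf{o}_t## and the gradient-descent weight update, here is a small self-contained Python sketch. The small N, the per-node offsets, the learning rate, and the placeholder ##f_{in}##/##f_{out}## are assumptions made only so the example runs; they are not part of the proposal itself.

```python
import random

# Each FSM's state is a float; f_in and f_out are the same illustrative
# placeholders as in the earlier single-node sketch.
N = 100                                                  # the thread suggests ~10^4
states = [0.0] * N
offsets = [random.uniform(-1.0, 1.0) for _ in range(N)]  # make nodes differ (assumed)
w = [random.uniform(-0.01, 0.01) for _ in range(N)]      # small random initial weights
lr = 0.01                                                # assumed step size

def step_nodes(x_t):
    # Advance every FSM: s_t = f_in(s_{t-1}, x_t), o_t = f_out(s_t)
    o = []
    for i in range(N):
        states[i] = 0.9 * states[i] + 0.1 * (x_t + offsets[i])
        o.append(max(-1.0, min(1.0, states[i])))
    return o

def train_step(x_t, target):
    o = step_nodes(x_t)
    predicted = sum(wi * oi for wi, oi in zip(w, o))     # w^T o_t
    error = predicted - target
    # dL/dw_i = 2 (predicted - actual) o_i for the squared loss above
    for i in range(N):
        w[i] -= lr * 2.0 * error * o[i]
    return error ** 2

# One pass over a toy sequence of (input, target) pairs
sequence = [(1.0, 0.5), (0.0, 0.2), (-1.0, -0.4)]
print(sum(train_step(x, y) for x, y in sequence))
```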
 
Sounds like what is (or was, see below) normally done with the terminology of 'gates', or 'neurons', being replaced by the words 'finite state machine'. 'Neural network' is another common term that seems to apply to the same general approach.

Disclaimer: I'm not an expert by any means! I've only dabbled in the field out of curiosity, and that was many years ago.

Cheers,
Tom
 