Hidden Markov Modeling and Background needed for Dynamical Systems

Discussion Overview

The discussion centers on the application of Hidden Markov Modeling (HMM) in describing neural networks through dynamical systems theory. Participants explore the background knowledge needed for this research, including statistics, stochastic processes, and dynamical systems, while also seeking recommendations for relevant literature.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Homework-related

Main Points Raised

  • One participant describes their task of assisting a professor in applying dynamical systems theory to neural networks, highlighting the significance of trial-to-trial variability in neuronal testing.
  • Another participant suggests that specialized texts in neuroscience, such as Dayan and Abbott's "Theoretical Neuroscience," may provide useful insights into the application of HMMs.
  • It is proposed that machine learning texts, like Bishop's book, could help in understanding HMMs before delving into more mathematical references.
  • A third participant recommends introductory materials on HMMs found in speech recognition literature, mentioning key topics like Markov Chains and various algorithms related to HMMs.

Areas of Agreement / Disagreement

Participants generally agree on the need for specialized literature in neuroscience and related fields, but there is no consensus on specific texts or approaches to learning the necessary material.

Contextual Notes

Participants express uncertainty about the feasibility of mastering the required material within a summer timeframe, indicating a potential limitation in the scope of the discussion.

KickerOfElves
I was recently commissioned by one of my neuroscience professors to help him develop a new gameplan for possibly describing neural networks with dynamical systems theory. In his most recent paper on the subject, he used Hidden Markov Modeling to detect coherent rate patterns in populations of simultaneously recorded neurons. He demonstrated that trial-to-trial variability is significant in neuronal testing, and that commonly used methods of analysis, such as peristimulus time histograms, which rely on across-trial averages, overlook relevant connectivity between neurons. The HMM showed the connected neuron ensembles progressing through a series of three or four firing rate states, which he hopes to demonstrate are attractor states. Essentially he's looking for me to do some research this summer and help him figure out where to go for a follow-up experiment.

I have a lot of time (I'm working on this all summer) and am looking for a place to really start learning the material I'd need to help develop some insights into the situation. I'm a third-year undergrad math major, and my background consists of undergraduate courses in multivariable calculus up to Stokes' theorem, linear algebra, abstract algebra up to Galois theory, real analysis, and point-set topology, with a little bit of algebraic topology. It seems I'm looking to teach myself a good chunk of statistics and stochastic processes, but I think I'm also looking to understand more about dynamical systems in general. I'm not really sure where to start or what course to plot, so I was looking for your help and insights there, and any recommendations you might have with regard to readings would be most appreciated. Also, if it is too much to learn in a summer (it may be, I'm not sure), a heads up would be well appreciated. I'm fairly bright, but hardly brilliant.
 
Well, I've studied all these things too. I'm more or less in the same field (I could hazard a guess as to who your professor is).

I can't think of any particularly good references for stochastic processes or dynamical systems, though. You might want a text more specialized to neuroscience. Try Dayan and Abbott's Theoretical Neuroscience, which has chapters that deal with all these topics from a more applied point of view. You also might want to try a machine learning text for Hidden Markov Models (like Bishop's book). It may be easier to get the general concept from these applied books before jumping into the more mathematical references.

I don't know of any book that specifically treats Hidden Markov Models in the context of dynamical systems theory in a rigorous way. For this kind of thing I've found that you'll learn a whole lot just from reading the primary literature even without fully understanding all the details.
 
Neuroscience books are probably the way to go, but you can typically find good introductory material on the various aspects of HMMs in speech recognition books, for example Rabiner and Juang's. Big topics to become familiar with are Markov chains, maximum likelihood estimation, the Baum-Welch algorithm, and the Viterbi algorithm.
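To make the last of those topics concrete, here is a minimal sketch of the Viterbi algorithm on a toy two-state HMM. All the numbers and the state/observation names ("low"/"high" firing-rate states emitting "quiet"/"burst" observations) are hypothetical, chosen only to illustrate the recursion, not taken from any paper mentioned in the thread:

```python
# Toy HMM: two hidden firing-rate states, two observable symbols.
# All probabilities below are made-up illustrative values.
states = ["low", "high"]
start_p = {"low": 0.6, "high": 0.4}
trans_p = {"low":  {"low": 0.7, "high": 0.3},
           "high": {"low": 0.4, "high": 0.6}}
emit_p  = {"low":  {"quiet": 0.8, "burst": 0.2},
           "high": {"quiet": 0.3, "burst": 0.7}}

def viterbi(obs):
    # V[t][s] = probability of the best state path ending in s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # extend whichever previous path gives the highest probability
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best], V[-1][best]

seq, p = viterbi(["quiet", "quiet", "burst"])
print(seq, p)  # -> ['low', 'low', 'high'] 0.056448
```

Baum-Welch then goes the other direction: instead of decoding states from known parameters, it re-estimates the transition and emission probabilities from the data by expectation-maximization.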
 
Thanks for the pointers, I appreciate it.
 
