Intro to computer/information science?

  • Thread starter: 11thHeaven
  • Tags: Intro, Science
SUMMARY

This discussion centers on the foundational concepts of computer and information science, particularly its connections to mathematics, neuroscience, and machine learning. Key topics include the Turing model, automata, probability, statistics, and information theory, with a focus on entropy and Bayesian probability. The conversation emphasizes the importance of understanding inference and error-correcting codes in computational contexts. Suggested resources include introductory materials on data mining to bridge the gap between mathematics and computer science.

PREREQUISITES
  • Basic understanding of mathematical logic
  • Foundational knowledge of probability and statistics
  • Familiarity with the Turing model and automata theory
  • Introductory concepts in information theory, particularly entropy
NEXT STEPS
  • Study the Turing model and automata theory in depth
  • Learn about Bayesian probability and its applications in statistics
  • Explore error-correcting codes and their significance in data transmission
  • Read introductory texts on data mining to understand practical applications
USEFUL FOR

This discussion is beneficial for students considering a transition from mathematics to computer science, educators in interdisciplinary studies, and professionals interested in the foundational theories that underpin machine learning and data analysis.

11thHeaven
I plan to study for a math degree, but I'm somewhat intrigued by computer/information science and its links to things like neuroscience and machine learning. For example, just flicking through a few pages of this:

http://www.amazon.com/dp/0521642981/?tag=pfamazon01-20

really fascinates me, though I don't pretend to comprehend much of it (you could ask how I can be fascinated by something I can't comprehend; I don't know is the answer!).

Can anyone suggest any basic introductory materials for this field, so I can see whether it's something I have a real interest in? As for technicality, I'm reasonably comfortable mathematically, but I obviously haven't studied it at college/university level.
 
11thHeaven said:
Can anyone suggest any basic introductory materials to this field, to see if it's something I may have a real interest in?

Hey 11thHeaven and welcome to the forums.

The above book is what you would call an 'interdisciplinary' one, in that it draws together applications from several different areas.

To understand computation, the common starting points are the Turing model and automata theory. These are theoretical models of computation; the point of the abstraction is that general algorithms and computations can then be analyzed the way pure mathematics analyzes its own objects.
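The simplest of these abstract machines is a deterministic finite automaton, and it can be sketched in a few lines. The particular states and language here (binary strings with an even number of 1s) are just an illustration of mine, not something from the book:

```python
# A minimal deterministic finite automaton (DFA) sketch: accepts binary
# strings containing an even number of 1s.
def run_dfa(transitions, start, accepting, s):
    """Run a DFA given as a dict {(state, symbol): next_state}."""
    state = start
    for ch in s:
        state = transitions[(state, ch)]
    return state in accepting

# Two states: 'even' (accepting) and 'odd'.
T = {
    ('even', '0'): 'even', ('even', '1'): 'odd',
    ('odd', '0'): 'odd',   ('odd', '1'): 'even',
}
print(run_dfa(T, 'even', {'even'}, '1101'))  # '1101' has three 1s -> False
```

A Turing machine adds a movable read/write head over an infinite tape, but the same flavor of analysis (states, transitions, acceptance) carries over.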

There are non-discrete approaches to computation, but they are not treated in the same vein as the discrete models. There are fields like analog computing, and work on using differential equations for tasks like sorting (check out the Harvard robotics lab), but most of the focus remains on Turing-machine-style models of computation.

In terms of inference in general, you should be acquainted with logic, probability, and statistics at least at a foundational level (and probably beyond).

If you study mathematical logic, you will see how inference is generalized with statements and quantifiers. It's probably easiest to think of an inference as a chain of statements that takes you from start to finish.
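That "chain of statements" picture can be made concrete with a toy forward-chaining engine: given some facts and implication rules, keep applying modus ponens until nothing new can be derived. The rules here are purely illustrative:

```python
# Sketch: inference as a chain of statements. Given a set of known facts
# and rules of the form (premise, conclusion), repeatedly apply modus
# ponens until no new conclusions appear.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [('P', 'Q'), ('Q', 'R')]          # P -> Q, Q -> R
print(sorted(forward_chain({'P'}, rules)))  # ['P', 'Q', 'R']
```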

Probability, though, generalizes mathematics to factor in randomness, or at the very least the complexity introduced by probabilistic formulations of variables (i.e., random variables).

After some basic probability, you learn statistics, and one area of statistics is called inference. One way to think about inference is that you are trying to make statements, under uncertainty, about something outside the scope of your data. If you had all the data, you wouldn't need probability or statistics.

Information theory is a subject in itself. You start with entropy and explore it in a variety of ways, as well as applying it in areas like communications, data compression, thermodynamics, and so on. Entropy is the bread and butter of information theory; it can be thought of as capturing the density of the actual information carried by a distribution.
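Shannon entropy itself is a one-liner, which makes it easy to play with. A sketch (the distributions are just examples):

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

print(entropy([0.5, 0.5]))  # fair coin: 1.0 bit per toss
print(entropy([0.9, 0.1]))  # biased coin: less than 1 bit
```

The biased coin carries less information per toss, which is exactly why its outcomes compress better.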

With probability under your belt, you can then move on to the generalized approach known as Bayesian probability, and from that, Bayesian statistics. At the foundational level, this allows the parameters of distributions themselves to have distributions. Typically, in non-Bayesian probability a parameter is a constant, but in the Bayesian setting it is a random variable.
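One standard example of "the parameter has a distribution" is conjugate Beta-Binomial updating; the coin setup here is mine, not from the thread:

```python
# Sketch: Bayesian updating. A coin's unknown bias p gets a Beta(a, b)
# prior; after observing heads/tails the posterior is
# Beta(a + heads, b + tails) (conjugacy), so updating is just arithmetic.
def beta_update(a, b, heads, tails):
    return a + heads, b + tails

a, b = 1, 1                      # Beta(1, 1) = uniform prior on p
a, b = beta_update(a, b, 7, 3)   # observe 7 heads, 3 tails
posterior_mean = a / (a + b)     # E[p] under Beta(8, 4)
print(posterior_mean)            # 8/12, about 0.667
```

Note that p never gets a single "true" value here; we only ever carry around a distribution for it, which is the Bayesian point.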

Then you can take the results of ordinary statistics and add the Bayesian generalizations to them, which yields a lot of complex results that are very useful.

With error-correcting codes, one of the main settings is a noisy channel whose noise properties you know. The goal is to construct a code such that, even with the noise, you can always reconstruct the message: the code, matched to the properties of the channel, drives the probability of an unrecoverable error low enough that the message can be recovered.
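The 3-repetition code over a binary symmetric channel is about the simplest instance of this; the flip probability and message length below are illustrative.

```python
import random

# Sketch: a 3-repetition code. Each bit is sent three times; majority vote
# recovers it unless two or more copies flip, so the per-bit error rate
# drops from p to roughly 3*p^2.
def encode(bits):
    return [b for b in bits for _ in range(3)]

def channel(bits, p, rng):
    # Binary symmetric channel: flip each bit independently with prob p.
    return [b ^ (rng.random() < p) for b in bits]

def decode(bits):
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

rng = random.Random(1)
msg = [rng.randint(0, 1) for _ in range(1000)]
received = decode(channel(encode(msg), p=0.05, rng=rng))
errors = sum(a != b for a, b in zip(msg, received))
print(errors / len(msg))  # well below the raw 5% flip rate
```

Real codes (Hamming, Reed-Solomon, LDPC) achieve the same effect with far less redundancy, but the principle is the same: trade rate for a controllably small error probability.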

This kind of thinking is also used in testing for prime numbers with the Miller-Rabin test: if enough independent 'checks' come out true, the probability that the number is actually composite decreases exponentially in the number of checks. In other words, make the probability of error small enough and you can stop checking. It's the same idea with error-correcting codes: make the code good enough that the noise is reduced to a level that can be dealt with.
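Miller-Rabin itself is short enough to sketch in full; each round, a composite number survives with probability at most 1/4, so the error bound shrinks like 4^-rounds:

```python
import random

def miller_rabin(n, rounds=20, rng=random.Random(0)):
    """Probabilistic primality test. A composite n passes any single
    round with probability at most 1/4, so the chance of a wrong 'prime'
    verdict shrinks exponentially in `rounds`."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: definitely composite
    return True                   # probably prime

print(miller_rabin(2**61 - 1))    # a known Mersenne prime -> True
```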

With learning, the relevant area is known as machine learning. There are supervised, semi-supervised, and unsupervised learning algorithms, including things like neural networks and self-organizing maps, among others.
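A minimal supervised-learning sketch is the perceptron, the ancestor of neural networks; training it on the AND function (my choice of toy problem) shows the whole loop of predict, compare to a label, and adjust:

```python
# Sketch of supervised learning: a perceptron trained on the AND function.
# Weights move only when a prediction disagrees with the label.
def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = int(w[0] * x[0] + w[1] * x[1] + b > 0)
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([int(w[0]*x[0] + w[1]*x[1] + b > 0) for x, _ in data])  # [0, 0, 0, 1]
```

Unsupervised methods like self-organizing maps drop the labels entirely and look for structure in the inputs alone.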

Monte Carlo methods are basically the simulation of random variables on a computer, typically to estimate quantities that are hard to compute directly.
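The classic first example is estimating pi by sampling (the sample size and seed are arbitrary):

```python
import random

# Monte Carlo sketch: sample points uniformly in the unit square and count
# how many land inside the quarter circle; that fraction estimates pi/4.
rng = random.Random(42)
n = 100_000
inside = sum(rng.random()**2 + rng.random()**2 <= 1.0 for _ in range(n))
print(4 * inside / n)  # close to 3.14159
```

The error shrinks like 1/sqrt(n), which is slow but dimension-independent; that trade-off is why Monte Carlo shows up so often in statistics and physics.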

If you wish, pick up any introductory book on data mining to find out, at a basic level, what all the material in that book means.
 
