Intro to computer/information science?

  Jul 3, 2012 #1
    I plan to study for a math degree, but I'm somewhat intrigued by computer/information science and its links to things like neuroscience and machine learning. E.g. just flicking through a few pages of this:

    http://www.amazon.co.uk/Information...-Algorithms/dp/0521642981/ref=ntt_at_ep_dpt_2

    really fascinates me, though I don't pretend to comprehend much of it (you could ask how I can be fascinated by something I can't comprehend; I don't know, is the answer!).

    Can anyone suggest any basic introductory materials to this field, to see if it's something I may have a real interest in? With regards to technicality, I'm reasonably comfortable mathematically but I obviously haven't studied it to college/university level.
     
  Jul 4, 2012 #2

    chiro

    Science Advisor

    Hey 11thHeaven and welcome to the forums.

    The above book is what you would call an 'interdisciplinary' one in that it looks at the application of a few different areas.

    To understand computation, the common areas include the Turing machine model and automata theory. These are theoretical models of computation; the reason for studying them is to abstract things out enough that general algorithms and computations can be analyzed in the same way pure mathematics analyzes its own objects.
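
    As a rough, illustrative Python sketch (the states and transition table here are made up, just to show what a formal model of computation looks like), here is a deterministic finite automaton that accepts binary strings with an even number of 1s:

    Code:
    # Toy deterministic finite automaton (DFA): accepts binary strings
    # containing an even number of 1s.
    TRANSITIONS = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd",  "0"): "odd",  ("odd",  "1"): "even",
    }
    ACCEPTING = {"even"}

    def accepts(word):
        state = "even"                      # start state
        for symbol in word:
            state = TRANSITIONS[(state, symbol)]
        return state in ACCEPTING

    print(accepts("1001"))   # True  (two 1s)
    print(accepts("1101"))   # False (three 1s)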

    There are non-discrete approaches to computation, but they are not considered in the same vein as the discrete models. There are fields like analog computing, and things like using differential equations to do sorting (check out the Harvard robotics lab), but again most of the focus is on Turing-machine-style models of computation.

    In terms of inference in general, you should be acquainted with logic, probability and statistics at least at a foundational level (and probably even further).

    If you study mathematical logic you will be able to see how inference is generalized with statements and quantifiers. It's probably easiest to think of an inference as a chain of statements that takes you from start to finish.
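
    To make "a chain of statements" concrete, here is a toy forward-chaining sketch in Python (the rules are invented for illustration; this is nothing like a real theorem prover):

    Code:
    # Toy forward chaining: keep applying implications (premise -> conclusion)
    # to a set of known facts until nothing new can be derived.
    RULES = [("socrates_is_a_man", "socrates_is_mortal"),
             ("socrates_is_mortal", "socrates_will_die")]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                if premise in facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"socrates_is_a_man"}, RULES))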

    Probability, though, generalizes all of mathematics to factor in randomness, or at the very least the complexity that comes with formulating variables probabilistically (i.e. as random variables).

    After some basic probability, you learn about statistics, and one area of statistics is called inference. One way to think about inference is that you are trying to make statements, under uncertainty, about something that is outside the scope of your data. If you had all the data, you wouldn't need probability or statistics at all.
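
    For a small example of making a statement beyond your data, here is a Python sketch that estimates a population mean from a sample and attaches a rough 95% interval (the population and sample size are simulated, not real data):

    Code:
    # Estimate an unknown population mean from a sample, with a rough
    # 95% confidence interval (normal approximation).
    import math, random, statistics

    random.seed(0)
    true_mean = 10.0                                   # unknown in practice
    sample = [random.gauss(true_mean, 2.0) for _ in range(50)]

    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)
    half_width = 1.96 * s / math.sqrt(len(sample))

    print(f"estimate: {xbar:.2f} +/- {half_width:.2f}")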

    Information theory is a subject in itself. You start off with entropy and explore it in a variety of ways, as well as using it for applications like communications, data compression, thermodynamics and so on. Entropy is the bread and butter of information theory, and it can be thought of as a kind of density measure of the information itself, one that depends only on the probabilities and not on what the outcomes represent.
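
    As a minimal Python sketch of that starting point (Shannon entropy of a discrete distribution, in bits; the distributions below are just examples):

    Code:
    # Shannon entropy of a discrete probability distribution, in bits.
    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([0.5, 0.5]))     # 1.0 bit   (fair coin)
    print(entropy([0.9, 0.1]))     # ~0.47 bit (biased coin carries less information)
    print(entropy([0.25] * 4))     # 2.0 bits  (uniform over 4 outcomes)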

    With probability you can then move on to the generalized approach known as Bayesian probability, and from that, Bayesian statistics. At the foundational level, this allows the parameters of distributions themselves to have distributions. In non-Bayesian probability a parameter is typically a constant, but in the Bayesian setting it is a random variable.

    Then you can take the results of ordinary statistics and add the Bayesian generalizations to them, which gives you a lot of complex results that are very useful.
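
    A minimal sketch of the "parameters have distributions" idea, using the standard Beta-Binomial conjugate pair (the prior and the observed counts are made-up numbers):

    Code:
    # Bayesian coin flipping: the bias theta gets a Beta prior, and after
    # observing h heads and t tails the posterior is Beta(a + h, b + t).
    a, b = 1.0, 1.0                    # uniform prior over theta
    heads, tails = 7, 3                # made-up observations

    a_post, b_post = a + heads, b + tails
    posterior_mean = a_post / (a_post + b_post)

    print(f"posterior: Beta({a_post}, {b_post}), mean = {posterior_mean:.3f}")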

    With error-correcting codes, the main setting is that you have a noisy channel, but you know the properties of the noise. What you want is to construct a code such that, even with the noise, you can always reconstruct the message: the code must, with respect to the properties of the channel, reduce the probability of an unrecoverable error enough that the original message can be recovered.
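
    Here is a deliberately simple illustration in Python: a 5-fold repetition code sent over a binary symmetric channel (the noise level p is an assumption for the example, and real codes are far more efficient than repetition):

    Code:
    # Repetition code over a binary symmetric channel: each bit is sent 5
    # times and decoded by majority vote, which fails only if 3+ copies flip.
    import random

    random.seed(1)
    p = 0.1                                   # assumed bit-flip probability

    def encode(bit, n=5):
        return [bit] * n

    def channel(bits, p):
        return [b ^ (random.random() < p) for b in bits]

    def decode(bits):
        return int(sum(bits) > len(bits) // 2)   # majority vote

    message = [1, 0, 1, 1, 0]
    received = [decode(channel(encode(b), p)) for b in message]
    print(message, received)                  # usually identical for small p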

    This kind of thinking is also used in testing for prime numbers with the Miller-Rabin test: if enough random 'checks' pass, the probability that the number is actually composite shrinks exponentially in the number of checks. In other words, make that probability small enough and you can stop checking. It's the same kind of idea with error-correcting codes: make the code good enough that the noise is reduced to a point where it can be dealt with.
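
    For concreteness, a standard Miller-Rabin sketch in Python (k passing rounds bound the chance of a composite slipping through by 4**-k; the test numbers below are just examples):

    Code:
    # Miller-Rabin primality test: each passing random witness cuts the chance
    # of a composite slipping through by a factor of at least 4.
    import random

    def is_probable_prime(n, k=20):
        if n < 2:
            return False
        for small in (2, 3, 5, 7):
            if n % small == 0:
                return n == small
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(k):                    # k independent witnesses
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                  # definitely composite
        return True                           # prime with high probability

    print(is_probable_prime(2**61 - 1))       # True  (a Mersenne prime)
    print(is_probable_prime(2**61 + 1))       # False (divisible by 3)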

    With learning, the area is known as machine learning. You have supervised, semi-supervised, and unsupervised learning algorithms. These include things like neural networks and self-organizing maps, amongst other things.
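
    A tiny supervised example in Python, just to show the flavor (a perceptron learning the AND function; the learning rate and epoch count are arbitrary choices):

    Code:
    # Minimal perceptron trained on the AND function (supervised learning).
    DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, bias, lr = [0.0, 0.0], 0.0, 0.1

    def predict(x):
        return int(w[0] * x[0] + w[1] * x[1] + bias > 0)

    for _ in range(20):                       # a few passes over the data
        for x, target in DATA:
            error = target - predict(x)
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            bias += lr * error

    print([predict(x) for x, _ in DATA])      # [0, 0, 0, 1] after convergence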

    Monte-Carlo is basically simulating random variables on a computer.
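
    For example, the classic estimate of pi by throwing random points at the unit square (a minimal sketch; the sample size is arbitrary):

    Code:
    # Monte Carlo estimate of pi: fraction of random points in the unit square
    # that land inside the quarter circle, times 4.
    import random

    random.seed(0)
    n = 100_000
    inside = sum(random.random()**2 + random.random()**2 <= 1.0 for _ in range(n))
    print(4 * inside / n)                     # close to 3.14159 for large n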

    Pick up any book on introductory data mining to find out what all the stuff in this book means at a basic level if you wish.
     