Dimitri Terryn said:
Not much research in Computer Science? You do realize that ACTUAL computer science (as opposed to just learning to program) is basically applied mathematics?
I never thought of it that way. At most colleges, a bachelor's or master's degree in computer science is mostly about programming, with some electronics to cover the basics of how computers are made.
The applied mathematics seems to me to be just that: applied mathematics. The fact that computers are used doesn't change the fact that what you're referring to is mostly math and math-oriented algorithms.
Then again, I got my degree back in 1990, so maybe things have changed.
Most of what I think of as subfields of computer science: how computers work, basic programming, various languages, distributed processing (multiple computers / multiple threads), networking, databases, Windows programming, device driver programming, other operating systems, ...
Then there are some niche areas:
Artificial intelligence - most of this work seems to involve control systems, like computer-controlled subway cars and robotics, but I suppose there is still more general work on trying to duplicate how living things respond to inputs.
Scheduling (of computer tasks) is a classic mainframe exercise that few current students will ever get involved in, although I do remember a couple of questions about it when I took the GRE graduate subject test in computer science (back in 1990). I don't even know if it's included in the current tests.
Sort algorithms are another exercise that affects only a small percentage of students, and by now most would know that merge/radix sorts (the classic methods for sorting with multiple tape drives) are among the fastest. Again, the 1990 GRE test had only one or two questions about this.
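For anyone who never ran into it, the merge idea is simple enough to sketch in a few lines; this is a rough in-memory Python version (the tape-drive variant just merges sorted runs read from drives instead of lists):

    def merge_sort(items):
        # Split the list in half, sort each half, then merge the sorted halves.
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])

        # Merge: repeatedly take the smaller of the two front elements.
        merged = []
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]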
Analog computing - is there any activity in this area anymore, or has it all been replaced with numerical integration? It was kind of cool to simply hook up x double dot (the 2nd derivative) to negative x on the front panel breadboard, set up an initial condition, let it rip, and see a sine wave on the screen (I'm not that old, but I got a chance to spend some time with an analog computer back in the early 1970's).
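The numerical integration equivalent of that setup is trivial to sketch: feed x'' = -x into a step-by-step integrator and out comes the same sine wave. A rough Python sketch (the step size, integration scheme, and initial conditions are arbitrary choices here, purely for illustration):

    import math

    # Integrate x'' = -x numerically: the digital stand-in for wiring
    # x double dot to negative x on an analog computer.
    dt = 0.001          # time step (arbitrary choice)
    x, v = 0.0, 1.0     # initial conditions: x(0) = 0, x'(0) = 1

    for step in range(int(2 * math.pi / dt)):   # roughly one full period
        v += -x * dt    # x'' = -x feeds the velocity (semi-implicit Euler)
        x += v * dt     # velocity feeds the position

    # After one period x should be back near 0 and v near 1, i.e. a sine wave.
    print(x, v)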
Extended precision math is getting very advanced: using finite field math concepts to limit the range of values in arrays of floating point numbers, optimizing algorithms to speed up the math, choosing the size of the sub-numbers before a calculation so that carries / borrows only have to be done once after a series of math operations, ... but how often do you need thousands or millions of digits of accuracy in a calculation?
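A toy illustration of the deferred-carry idea, assuming the limbs are kept small enough that the running sums can't overflow (Python ints are arbitrary precision anyway, so this is purely to show the structure):

    BASE = 10**4   # each "limb" holds 4 decimal digits (chosen small for illustration)

    def to_limbs(n, width):
        # Split an integer into fixed-size limbs, least significant first.
        limbs = []
        for _ in range(width):
            limbs.append(n % BASE)
            n //= BASE
        return limbs

    def add_deferred(numbers, width):
        # Add limb by limb with NO carry propagation inside the loop...
        sums = [0] * width
        for limbs in numbers:
            for i in range(width):
                sums[i] += limbs[i]
        # ...then resolve all the carries once, after the whole series of additions.
        carry = 0
        for i in range(width):
            total = sums[i] + carry
            sums[i] = total % BASE
            carry = total // BASE
        return sums

    nums = [to_limbs(n, 4) for n in (123456789, 987654321, 555555555)]
    print(add_deferred(nums, 4))   # limbs of 1666666665, least significant first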
Data compression. I've done some work with this, mostly LZ1 and LZ2 (LZ77 / LZ78) type algorithms.
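A deliberately naive sketch of the LZ1/LZ77 idea - emit back-reference triples against a sliding window; real implementations use hash chains and bit-packed output, and the window size here is an arbitrary choice:

    def lz77_compress(data, window=255):
        # Emit (offset, length, next_char) triples: "copy `length` symbols from
        # `offset` back, then append `next_char`".
        out, i = [], 0
        while i < len(data):
            best_off, best_len = 0, 0
            for j in range(max(0, i - window), i):
                length = 0
                while (i + length < len(data) - 1 and
                       data[j + length] == data[i + length]):
                    length += 1
                if length > best_len:
                    best_off, best_len = i - j, length
            out.append((best_off, best_len, data[i + best_len]))
            i += best_len + 1
        return out

    def lz77_decompress(triples):
        out = []
        for off, length, ch in triples:
            for _ in range(length):
                out.append(out[-off])   # copy from `off` symbols back
            out.append(ch)
        return "".join(out)

    tokens = lz77_compress("abracadabra abracadabra")
    print(tokens)
    print(lz77_decompress(tokens))   # round-trips back to the original string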
Error correction codes / algorithms are a grey area. I've worked a lot with this. Is it math or computer science? I've never seen this or finite field math as a requirement for a computer science degree.
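As a small illustration of what that kind of work looks like, here is a Hamming(7,4) code, about the simplest single-error-correcting code, sketched in Python (nothing resembling production ECC):

    def hamming74_encode(d):
        # d: 4 data bits -> 7-bit codeword with parity bits at positions 1, 2, 4.
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

    def hamming74_correct(c):
        # Recompute the parity checks; the syndrome is the position of a flipped bit.
        p1, p2, d1, p3, d2, d3, d4 = c
        s1 = p1 ^ d1 ^ d2 ^ d4
        s2 = p2 ^ d1 ^ d3 ^ d4
        s3 = p3 ^ d2 ^ d3 ^ d4
        syndrome = s1 * 1 + s2 * 2 + s3 * 4
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1   # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]   # extract the data bits

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                      # flip one bit "in transit"
    print(hamming74_correct(word))    # recovers [1, 0, 1, 1]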
There are other grey areas as well. Signal processing, like transmission / reception of data via various methods: sound (remember the original ultrasonic remote controls for televisions?), radio waves, light. Recording and playback of data on various media types (my company and other companies sponsor magnetic recording research at UC San Diego).