In terms of visualization of data, I am assuming you mean things like two and three dimensional plots of some sort and the ability to transform these in terms of viewing angle, scaling, and so on. I will also add physical modelling of geometric data just to be safe.
In terms of understanding this at a deep level, it's best to start with the basics of 3D modelling: linear transforms and how you use them to build a framework of local (model) space, world space, and camera space. You'll find that nowadays most of this is done with a few API calls to OpenGL or DirectX, whereas in the past you had to do everything (and I mean literally everything) with super fast assembler routines or bare-bones C.
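To make that concrete, here is a minimal sketch of the local -> world -> camera -> clip pipeline using the GLM math library (just one convenient choice; the specific positions and angles are made up):

```cpp
// Minimal sketch of the local -> world -> camera -> clip pipeline using GLM.
// GLM is only one option; OpenGL/DirectX just want the resulting matrices.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec4 toClipSpace(const glm::vec3& localPoint)
{
    // Model matrix: places the object in the world (translate/rotate/scale).
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, -5.0f));

    // View matrix: the camera, sitting at (0, 3, 10) and looking at the origin.
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 3.0f, 10.0f),
                                 glm::vec3(0.0f),              // target
                                 glm::vec3(0.0f, 1.0f, 0.0f)); // up direction

    // Projection matrix: 45-degree field of view, 16:9 aspect, near/far planes.
    glm::mat4 projection = glm::perspective(glm::radians(45.0f),
                                            16.0f / 9.0f, 0.1f, 100.0f);

    // One matrix per stage: local -> world -> camera -> clip space.
    return projection * view * model * glm::vec4(localPoint, 1.0f);
}
```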
If you are dealing with a relatively small amount of data, it's probably fine to just use your graphics card's z-buffer. If the geometry is really large, then you need to study spatial partitioning algorithms and how they are used in hidden surface removal.
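Using the z-buffer really is just a couple of calls; a sketch in OpenGL, assuming your window/context was created with a depth buffer (via GLUT, GLFW, SDL, whatever):

```cpp
// Letting the graphics card's z-buffer handle hidden surface removal in OpenGL.
#include <GL/gl.h>

void initDepthTesting()
{
    glEnable(GL_DEPTH_TEST);   // turn on z-buffering
    glDepthFunc(GL_LEQUAL);    // a fragment passes if it is at least as close as the stored depth
}

void beginFrame()
{
    // Clear both the colour buffer and the depth buffer every frame,
    // otherwise last frame's depths will occlude the new geometry.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
}
```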
For standard modelling where there are no fancy textures or lighting, you will be able to set up the video card (again with OpenGL or DirectX) pretty quickly.
With regard to data analysis and data streaming, this will obviously depend on the type and the source of the data, which will dictate the tools that you use. For example, if it's some kind of ODBC data source, then you will get an API and libraries for it, so you can import the data and then convert it into your own data structures for processing.
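As a rough sketch, here is what pulling rows out of an ODBC source into your own structures looks like with the plain ODBC C API; the DSN name "MyData", the table, and the column are placeholders for whatever you actually have, and all error checking is left out:

```cpp
// Rough sketch: fetch rows from an ODBC data source into a simple structure.
#include <sql.h>
#include <sqlext.h>
#include <vector>

struct Sample { double value; };

std::vector<Sample> loadSamples()
{
    SQLHENV env;  SQLHDBC dbc;  SQLHSTMT stmt;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    SQLConnect(dbc, (SQLCHAR*)"MyData", SQL_NTS, NULL, 0, NULL, 0);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR*)"SELECT value FROM measurements", SQL_NTS);

    std::vector<Sample> samples;
    SQLDOUBLE value;
    SQLLEN indicator;
    SQLBindCol(stmt, 1, SQL_C_DOUBLE, &value, 0, &indicator);
    while (SQL_SUCCEEDED(SQLFetch(stmt)))       // convert each row into our structure
        samples.push_back({ value });

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return samples;
}
```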
In terms of streaming between components of your program: if the data is shared within a single process space, it will be a lot easier. If your data is being shared between processes, though, then you will have to learn about kernel objects like pipes, through which processes can pass data to each other.
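Here is about the simplest possible example of the idea, using a POSIX pipe between a parent and a child process (on Windows the equivalent would be CreatePipe):

```cpp
// Minimal sketch of sharing data between two processes with a POSIX pipe.
// Within one process you would just pass pointers or queues around; across
// processes you need a kernel object like this.
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    int fds[2];
    pipe(fds);                          // fds[0] = read end, fds[1] = write end

    if (fork() == 0) {                  // child process: the consumer
        char buffer[64] = {0};
        close(fds[1]);
        read(fds[0], buffer, sizeof(buffer) - 1);
        std::printf("child received: %s\n", buffer);
        return 0;
    }

    // parent process: the producer
    const char* message = "sample block 42";
    close(fds[0]);
    write(fds[1], message, std::strlen(message) + 1);
    close(fds[1]);
    return 0;
}
```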
In terms of networking, it's a good idea to see what application protocols are already out there, because chances are a protocol was developed with the same goal in mind as yours. It may not have been written specifically with your domain in mind, but chances are a lot of the ideas are directly related. As an example, think about peer-to-peer software. When most people think of P2P they think of file sharing, but P2P is also used in video broadcast servers (I'm using one for a distance course on Bayesian Inference). So in this example you have the framework for P2P, and then the specific customization for the application you have in mind.
With regard to that, I would look at the basics of a good P2P protocol, think about the data structures you need in your application, and use both of those, along with whatever specific protocols you need, as a basis for designing a network platform for your application.
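As an illustration of the kind of thing I mean, here is a sketch of a length-prefixed message header; the field layout is purely made up and not any standard P2P format, but almost every custom application protocol starts with something like this so peers can frame and route the payload:

```cpp
// Illustrative message framing for a custom network protocol.
#include <cstdint>
#include <cstring>
#include <vector>
#include <arpa/inet.h>   // htonl/htons for network byte order

struct MessageHeader {
    uint32_t magic;        // identifies your protocol on the wire
    uint16_t version;      // lets old and new peers negotiate
    uint16_t type;         // e.g. 1 = peer list request, 2 = data chunk, ...
    uint32_t payloadSize;  // number of bytes that follow the header
};

std::vector<uint8_t> packMessage(uint16_t type, const std::vector<uint8_t>& payload)
{
    MessageHeader header;
    header.magic       = htonl(0x53494D56);     // arbitrary protocol tag
    header.version     = htons(1);
    header.type        = htons(type);
    header.payloadSize = htonl(static_cast<uint32_t>(payload.size()));

    std::vector<uint8_t> packet(sizeof(header) + payload.size());
    std::memcpy(packet.data(), &header, sizeof(header));
    if (!payload.empty())
        std::memcpy(packet.data() + sizeof(header), payload.data(), payload.size());
    return packet;   // ready to hand to send() on a socket
}
```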
More specifically about the science: I'm interested in complex systems and nonlinear dynamics, which means I also have to be very aware of floating point errors (since they are amplified in chaotic systems).
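To see why, try iterating the logistic map in single and double precision from the same starting point; after a few dozen iterations the two trajectories have essentially nothing to do with each other:

```cpp
// Floating point error in a chaotic system: iterate the logistic map
// x -> r*x*(1-x) in float and double and watch the trajectories separate.
#include <cstdio>

int main()
{
    const double r = 3.99;          // well inside the chaotic regime
    float  xf = 0.4f;
    double xd = 0.4;

    for (int n = 0; n <= 60; ++n) {
        if (n % 10 == 0)
            std::printf("n=%2d  float=%.7f  double=%.7f  diff=%.2e\n",
                        n, xf, xd, static_cast<double>(xf) - xd);
        xf = 3.99f * xf * (1.0f - xf);
        xd = r * xd * (1.0 - xd);
    }
    return 0;
}
```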
As for intuitive interfaces, that is a real ***** of a subject. For anything involving navigation through visualizations (in other words, through a 3D virtual world or representation of some sort), you will need a visualization system that gives you very specific control over the rendering as well as a way to detect objects in the world. Typically we use the term "hit test" for the ability to cast a ray and return the first object that it hits.
Coding the basics for that requires collision detection. If you don't have many objects, then you could get away with something simpler, but if the complexity of your visualization grows, then you will need something more versatile.
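Here is the simple version of a hit test: treat each object as a bounding sphere and test a ray against all of them, keeping the closest hit. This is the brute-force approach that is fine for small scenes; once the object count grows you would put the objects into a spatial structure instead of testing every one.

```cpp
// Brute-force hit test: cast a ray (e.g. from the camera through the mouse
// position) against every object's bounding sphere, return the nearest hit.
#include <glm/glm.hpp>
#include <vector>
#include <cmath>

struct SceneObject { glm::vec3 centre; float radius; int id; };

// Returns the id of the nearest object hit, or -1 if the ray misses everything.
int hitTest(const glm::vec3& origin, const glm::vec3& direction,
            const std::vector<SceneObject>& objects)
{
    int   bestId = -1;
    float bestT  = 1e30f;
    glm::vec3 d = glm::normalize(direction);

    for (const SceneObject& obj : objects) {
        glm::vec3 oc = origin - obj.centre;
        float b = glm::dot(oc, d);
        float c = glm::dot(oc, oc) - obj.radius * obj.radius;
        float discriminant = b * b - c;
        if (discriminant < 0.0f) continue;        // ray misses this sphere
        float t = -b - std::sqrt(discriminant);   // distance to nearest intersection
        if (t > 0.0f && t < bestT) { bestT = t; bestId = obj.id; }
    }
    return bestId;
}
```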
One integrated approach to interactive visualization is based on what games like Doom 3 do: they dynamically create a texture (think bitmap) and then render it onto some object. In other words, you create a flat quadrilateral out of two triangles, you compute your output texture from the current representation and the events generated by the user (user moves the mouse, clicks on a button, etc.), and then you use that output texture as the texture for your quadrilateral.
In the above scenario, the whole system is unified under a general 3D visualization engine. With this you could have many 2D interfaces within a 3D virtual world, and you could give those interfaces all the fancy stuff you see in state-of-the-art PC games, because everything goes through the same pipeline, so you get everything from transparency to texture effects for free.
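To give you an idea of the mechanics, here is a stripped-down sketch of the texture-on-a-quad part in (legacy) OpenGL; the actual UI drawing that fills the pixel buffer in response to mouse and keyboard events is your own code and is left out:

```cpp
// Doom 3 style in-world interface, minimal sketch: upload a CPU-drawn RGBA
// bitmap as a texture and map it onto a quadrilateral built from two triangles.
#include <GL/gl.h>
#include <vector>
#include <cstdint>

const int UI_W = 256, UI_H = 256;
std::vector<uint8_t> uiPixels(UI_W * UI_H * 4);   // your UI code draws into this

void uploadUiTexture(GLuint texture)              // texture from glGenTextures
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Re-upload the CPU-drawn bitmap; glTexSubImage2D would be cheaper per frame.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, UI_W, UI_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, uiPixels.data());
}

void drawUiQuad()
{
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_TRIANGLES);                        // two triangles = one quadrilateral
    glTexCoord2f(0, 0); glVertex3f(-1, -1, 0);
    glTexCoord2f(1, 0); glVertex3f( 1, -1, 0);
    glTexCoord2f(1, 1); glVertex3f( 1,  1, 0);

    glTexCoord2f(0, 0); glVertex3f(-1, -1, 0);
    glTexCoord2f(1, 1); glVertex3f( 1,  1, 0);
    glTexCoord2f(0, 1); glVertex3f(-1,  1, 0);
    glEnd();
}
```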
But doing this takes a lot of work (and I know from experience).
If you ever want to get into video games, the key thing is finding the balance between flexibility and performance, and nowadays you need high amounts of both. The performance requirement means that optimization is very important, and you need to understand it in a variety of contexts, from computation time to general playability. Hardware is getting better and better, but as a consequence people want more and more detail and fireworks, which means finding solutions that scale is critical.
So with a game engine, the important point is the ability to create content quickly and use it quickly. As a result, game engines have their own scripting languages, geometry exporters, and numerous tools for creating worlds, network protocols, user interfaces, and game-specific code, all designed to be flexible and to make it easy to get something up and running.
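As a toy example of that engine/script split, here is Lua (one common choice) embedded in C++: the engine exposes a native function, and game content calls it from a script that can be edited without recompiling the engine. The function names are made up for illustration.

```cpp
// Tiny engine + scripting language example using the standard Lua C API.
extern "C" {
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>
}
#include <cstdio>

// Native engine function exposed to scripts.
static int spawnEnemy(lua_State* L)
{
    double x = luaL_checknumber(L, 1);
    double y = luaL_checknumber(L, 2);
    std::printf("engine: spawning enemy at (%f, %f)\n", x, y);
    return 0;   // number of values returned to Lua
}

int main()
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "spawn_enemy", spawnEnemy);

    // In a real engine this string would be a script file shipped with the game.
    luaL_dostring(L, "for i = 1, 3 do spawn_enemy(i * 10, 5) end");

    lua_close(L);
    return 0;
}
```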
It can be a nightmare in a lot of ways, but having a finished game is pretty cool.