There are a couple of courses that deal with this, but it depends on the perspective you want. I'll list a few perspectives that you could use on this subject:
1) Physics
The first layer is the physical layer: the physics of computers, i.e. how they do what they do at the lowest level.
Typically, if you're an engineer you will be taught that you can build devices that make discrete decisions (e.g. a gate that outputs a low or a high voltage depending on its inputs) by using transistors in certain configurations.
If you want to know about the properties of the transistors themselves you'll be getting into the physics of semiconductors, but if we accept these devices (logic gates) as building blocks, then we can see how logical operations are formed.
Once you can build one type of gate (say, a NAND gate), you can use Boolean algebra to construct all the other gates. From there, more Boolean algebra gives you the arithmetic operations on binary numbers, and you build a whole heap of hardware that does specific things (add two numbers, multiply numbers, XOR numbers, hold a number in memory, etc.).
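To make the "NAND is enough" point concrete, here is a small sketch in Python (standing in for wired-up gates): every other gate is expressed purely in terms of NAND, and a one-bit half adder then falls out of XOR and AND. This is an illustration of the construction, not how any particular chip is laid out.

```python
# Sketch: deriving the other logic gates from NAND alone, then a
# one-bit half adder -- the first step toward binary arithmetic.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):        # NOT a  ==  a NAND a
    return nand(a, a)

def and_(a, b):     # a AND b  ==  NOT (a NAND b)
    return not_(nand(a, b))

def or_(a, b):      # a OR b  ==  (NOT a) NAND (NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):      # a XOR b built from four NANDs
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def half_adder(a, b):
    """Add two bits: returns (sum_bit, carry_bit)."""
    return xor(a, b), and_(a, b)
```

Chaining half adders (plus an OR for the carries) gives a full adder, and a row of full adders gives the multi-bit ADD hardware mentioned above.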
2) Architecture
So once you know how the gates work you look at how computers are structured and for this you'll look at the architecture, the instruction set, the memory model, the execution model and so on.
Typically each computer platform has its own operation codes and instruction set for executing programs. I'll briefly talk about the x86 instruction set found on common PCs, in a 32-bit environment (this extends easily to 64-bit or any other register size).
So basically in the x86 environment we have: a) registers; b) a flat memory space; c) arithmetic operations (ADD, SUB, MUL, DIV); d) logical operations (AND, OR, XOR) as well as shifts and rotates; e) stack operations; f) flag operations; g) CALL and RETURN operations; h) unconditional and conditional JUMP operations; i) interrupt-specific operations; j) hardware port operations; k) memory operations; and l) register operations. These are the main ones.
I'm going to simplify this as to focus on the main details.
Basically what happens is that you have a few main kinds of memory (I'm going to ignore protected-memory models and OS-specific memory, and assume the system and its applications have access to everything):
a) Stack memory
b) Heap memory
c) Register memory
What happens is that the registers contain the information each instruction deals with. So basically we put the values we require (from memory or from another register) into the appropriate registers and then invoke an instruction that uses those registers and their values. That's the execution cycle in a nutshell.
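The cycle above can be sketched as a toy interpreter: named registers, and a loop that executes one instruction at a time. The mnemonics loosely echo x86 but this is a deliberately simplified model, not how a real CPU decodes instructions.

```python
# Toy fetch-decode-execute loop over named registers -- a loose sketch
# of the cycle described above, not real x86 decoding.

def run(program):
    regs = {"eax": 0, "ebx": 0, "ecx": 0}
    for op, dst, src in program:
        # Operand is either another register or an immediate value.
        val = regs[src] if src in regs else src
        if op == "mov":
            regs[dst] = val
        elif op == "add":
            regs[dst] += val
        elif op == "xor":
            regs[dst] ^= val
    return regs

# "Put the values you need into registers, then invoke the instruction
# that uses them":
state = run([
    ("mov", "eax", 2),
    ("mov", "ebx", 3),
    ("add", "eax", "ebx"),   # eax = eax + ebx
])
```

A real machine adds an instruction pointer, flags, and memory addressing, but the shape of the loop is the same.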
With regards to how hardware and software interact, there are two main sorts of instructions that allow them to do so:
1) IN and OUT instructions
2) Interrupts
For IN and OUT you address a port (x86 has a 16-bit I/O address space, so 65,536 ports) and you either read from it (IN) or write to it (OUT).
Back in the QBASIC days I wrote a few drivers (keyboard driver and SVGA driver) for old applications. One example is setting the palette for a 256 color VGA screen mode.
So what you would do to tell the hardware "I'm changing a palette value" is OUT 0x3C8, x, where x is the index of the colour (0x3C8 is the DAC's write index port; 0x3C7 is the read index), and then OUT 0x3C9, a; OUT 0x3C9, b; OUT 0x3C9, c, where (a, b, c) is the RGB colour for that palette entry.
If you want to communicate directly with hardware, one way is this direct mechanism provided by IN and OUT.
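Here is the port-I/O idea as a runnable sketch: each OUT writes a byte to a port number, and the "device" behind the port reacts. The port numbers match the VGA palette example above, but the device model itself (a simulated DAC in Python) is a simplification for illustration.

```python
# Sketch of port-mapped I/O: OUT sends a byte to a numbered port and the
# device behind that port reacts. Here the device is a simulated VGA DAC;
# port numbers match the palette example, the behaviour is simplified.

class VgaDac:
    def __init__(self):
        self.palette = [(0, 0, 0)] * 256
        self.index = 0
        self.rgb = []

    def out(self, port, value):
        if port == 0x3C8:          # DAC write index: select a palette slot
            self.index, self.rgb = value, []
        elif port == 0x3C9:        # DAC data: R, G, B on successive writes
            self.rgb.append(value)
            if len(self.rgb) == 3:
                self.palette[self.index] = tuple(self.rgb)
                self.index = (self.index + 1) % 256  # index auto-increments
                self.rgb = []

dac = VgaDac()
dac.out(0x3C8, 7)                 # OUT 0x3C8, 7 -- select palette entry 7
for component in (63, 31, 0):     # VGA DAC components are 6-bit (0..63)
    dac.out(0x3C9, component)     # OUT 0x3C9, a / b / c
```

On real hardware the same three OUTs reach the card's DAC instead of a Python object, but the protocol (index port, then three data writes) is the same.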
The second method is an interrupt.
An interrupt can be either a software interrupt or a hardware interrupt. Software interrupts are generated by software and hardware interrupts are generated by hardware.
So let's say you want to know when the keyboard is communicating with the computer.
The first thing you do is find out, from an architecture reference and the relevant hardware standard, which interrupts are generated by which pieces of hardware.
Once you do this you write your bit of code to handle that device based on the parameters that are in memory.
So yeah, in a nutshell, if what I think you're looking for is correct, then you can check out these kinds of things. Also look into device-driver SDKs for common platforms (Windows, Linux, UNIX, etc.) and you'll get a first-hand look at what these things look like.
Once you've written the code to handle the device (a keyboard interrupt handler might simply update an array recording which key is currently pressed, or might be more complex, tracking which keys have been pressed, released, double-pressed, etc.), you then modify the system's interrupt vector table, pointing the hardware interrupt at your code.
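That hooking step can be sketched like this: the vector table maps an interrupt number to a handler routine, and "installing a driver" means replacing an entry with your own function. The vector number and scancodes below are the real PC conventions (keyboard IRQ1 arrives on vector 9; set-1 scancode 0x1E is 'A', with the high bit marking a release), but the handler itself is a minimal illustration.

```python
# Sketch of hooking an interrupt vector, DOS-style: the vector table maps
# an interrupt number to a handler; installing a driver means pointing
# that entry at your own routine. The handler here is deliberately minimal.

KEYBOARD_VECTOR = 9        # on the PC, keyboard IRQ1 arrives on vector 09h
vectors = {KEYBOARD_VECTOR: lambda scancode: None}   # default: ignore

keys_down = set()

def keyboard_handler(scancode):
    """Minimal handler: track which keys are currently pressed."""
    if scancode & 0x80:                 # high bit set => key release
        keys_down.discard(scancode & 0x7F)
    else:                               # otherwise it's a key press
        keys_down.add(scancode)

# "Point the hardware interrupt at your code":
vectors[KEYBOARD_VECTOR] = keyboard_handler

def raise_irq(vector, data):            # stands in for hardware firing the IRQ
    vectors[vector](data)

raise_irq(KEYBOARD_VECTOR, 0x1E)        # press 'A' (set-1 scancode 0x1E)
raise_irq(KEYBOARD_VECTOR, 0x30)        # press 'B'
raise_irq(KEYBOARD_VECTOR, 0x9E)        # release 'A' (0x1E | 0x80)
```

On real hardware the old vector is usually saved and chained to, and the handler must acknowledge the interrupt controller; both are omitted here.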
This is how things happened in the DOS days. Essentially, DOS provided a library of software interrupts that gave you disk access, a clock, video access, MSCDEX (CD extensions on the multiplex interrupt) and so on.
Nowadays you don't get access to these things the way you did in the DOS days: applications don't get access to the protected memory and hardware that the operating system manages.