To me, the theory of programming is very easy - it's applying your knowledge to useful applications that's the difficult part.
Remember - every programming language is very similar, you just need to learn the keywords and know how to apply logic. For example, to increment an integer:
ASM (lowest level you'll likely ever experience):
inc ax ; adds 1 to the ax register (or a memory location, if you specify one)
in BASIC:
ax% = ax% + 1 ' variable ax%, note difference from a register
in C/C++ (as well as JavaScript, PHP, and higher-level BASIC dialects like VB.NET):
ax++;
(the semicolon would cause an error in a BASIC compiler)
I know it's pretty rudimentary to show how to add 1 to a variable or register, but this should give you an example of how simple syntax is used in various languages.
Also, if you don't use a commercial compiler, or you use one that doesn't follow the standard notation strictly, there will be differences, quite possibly in custom compilers that are based on another common (or more industry-ready) compiler.
One of the few languages I use on the Motorola 68000 processor is a BASIC dialect. Of all the BASIC compilers I know, it is one of the only ones (aside from VB.NET) to use C/C++ syntax for math operations (a%++, for example, to increment by 1).
As with anything else, the more practice you get with programming, the better and more efficient you will become overall. All logic and theory apply to every language, and a feel for how critical timing is, or how fast each operation needs to be, will come with time.
Also, learn low-level theory. By that, I mean how to work with bits, bit shifting, etc. Learn what signed and unsigned integers are, floats, precision, etc., and any compiler-provided functions that are faster than "traditional" methods.
An example would be multiplication or division (using the simplest case to understand):
a = a * 2 ' using just a simple multiplication routine
Now, instead of doing a "simple" multiplication operation, you could do a bit shift:
a = a<<1
Bit shifting takes the current value in binary format, and shifts it over to the left or right (in the above example, by 1):
Consider this:
1100 in binary is 12. Want to double that? a = 12, a = a << 1: this makes the binary value 1100 become 11000 (most integers are 16 bits or more these days, so don't focus on the 4 bits).
This would become 24: 11000 in binary = (16+8+0+0+0) = 24
And for division: 0110 = 6 in binary
a = a>>1 = (0110->0011) = 3
(bit shifting is native to ASM, for what it's worth)
I think I went a bit far in depth with the simplicity of things and may have overcomplicated a good portion (if not all) of this post, but I hope it gives some idea of the theory and the relations between languages. If not, I apologize!
Also, if you need any help understanding anything in programming, I'd be more than happy to help (either with learning, or just understanding).
I'm very comfortable, and efficient, in Motorola 68000 ASM (and some higher-level languages), Visual Basic/ASP, VB.NET/ASP.NET, C++, Perl, PHP, JavaScript, and SQL (MySQL, Microsoft SQL Server, Oracle SQL, or T-SQL in general).
(I'm also Microsoft Certified: MCITP SQL DBA 2008 and MCITP Visual Studio 2012)
My suggestion - find an application you use often, and try to create your own version of it. Get used to programming and learning the features and functions of the compiler. After exposure to 2+ languages, you should be able to figure out how to use any language :)