
How does one become proficient in programming?

  1. Jul 10, 2011 #1
    Hello all,

    I'm currently taking a programming course using C, and I find programming to be pretty interesting. After I finish the course, I want to learn a couple other popular programming languages (C++, java etc) on my own.

    So, what should I do to REALLY become good at programming, besides just reading the book and writing some basic programs?

    Please be specific, don't just say "keep practicing", we all know that.

    Thanks for any advice!
     
  3. Jul 10, 2011 #2

    gb7nash


    Take an algorithm theory class and study it to death. You'll learn many tricks of the trade for writing good programs. There's an endless number of ways to write a program that accomplishes a goal, but knowing how to write good algorithms will help you vastly in any programming language and help you write efficient programs. Once you're exposed to enough theory and application, you'll be able to write pseudocode for what you want to do and translate it into whatever programming language you want.
     
  4. Jul 10, 2011 #3
    Well, one important thing (particularly if you're self-taught) is to study the subject. For example, I recall a talented young programmer I worked with who didn't know how signed integer math worked with two's complement and such. It took very little time to explain it to him, but that was quite a gap in his knowledge.
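    To make that concrete, here is a minimal C sketch (assuming the usual two's complement hardware) of the sort of thing he was missing: the same bit pattern reads as a large unsigned value or a small negative one, and negation is "flip the bits, add one".

    Code (C):
    /* Minimal sketch: two's complement on typical hardware. */
    #include <stdio.h>

    int main(void)
    {
        signed char s = -1;
        unsigned char u = (unsigned char)s;   /* same 8 bits, different interpretation */

        printf("as signed:   %d\n", s);       /* -1 */
        printf("as unsigned: %d\n", u);       /* 255: all eight bits set */
        printf("~5 + 1 = %d\n", ~5 + 1);      /* -5: flipping the bits and adding one negates */
        return 0;
    }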

    If you're going to teach yourself, you will need to seek out such knowledge and learn it.

    Also, you would need to study data structures and algorithms. I would suggest at least a bit of discrete mathematics as well. I have a Discrete Math book that goes over tons of useful subjects, like combinatorics, format of floating point numbers, converting between different number bases, and so on.

    As for the practice of programming itself, that really is a matter of practice. Read a lot about the language you are currently learning. Read books about good coding practices. And you should try and build some non-trivial software. Come up with an idea for a program you would like to write (something that interests you will help), and write it. After a bit, perhaps get involved with an open source project. Work on getting that "10,000 hours" in if you want to be truly proficient. Make it quality practice. Like in music, if you practice poorly, you will learn to do it poorly. So don't think "Oh, it's just practice, I can cut corners here". Practice doing it right. As I like to say: "Practice doesn't make perfect. Practice makes permanent."

    Learning a few programming languages is, I find, very beneficial. But do get to know at least one very well. It's great if someone knows, like, 40 languages. But if they're terrible in all of them, it's just not very useful.
     
  5. Jul 10, 2011 #4
    Well, there is little helpful advice I can add beyond what's already been stated.

    There are however some qualities which are extremely helpful in that regard.
    1. Critical thinking - questioning everything, always looking for a better solution. When writing something, always looking for ways to break it, crash it, make it spontaneously combust or whatnot (you know - so the guy using it doesn't get that chance...).

    2. Good grasp of abstraction - that one I feel is one of the most important, especially for OOP languages. Breaking down the task and structuring your code properly can make your life soooo much easier as a programmer. In a sense that's the basic idea behind all high-level languages.
     
  6. Jul 10, 2011 #5
    Good stuff guys, thanks!

    I'm an EE major, so in order for me to take a class on data structures and algorithms I would have to declare a minor in CS. A minor in CS at my school requires 16 credit hours' worth of work. At the moment, I have no plan to do that.

    Would you say the subjects mentioned above can be self-studied, and what kind of books would you recommend?
     
    Last edited: Jul 10, 2011
  7. Jul 10, 2011 #6
    Very much so! Much of what I learned, I learned myself. I've come to the realization, however, that most people are just very bad at that. Auto-didacts are somewhat rare. Rare enough that the spell checker complains about the word "didacts". :biggrin:

    But yes, it's certainly possible. I will say, though, that it is very useful to have an expert around to ask questions of when you don't quite get something, or otherwise need help. A mentor, if you like. Most of the time, I have no problems, but when you get stuck, it's very useful to know such a person. I suppose this forum does serve as such, at least a bit.
     
  8. Jul 10, 2011 #7

    phinds


    LEARN THE BASICS (at the hardware level where they really ARE basic)

    I'm always amazed at how much young programmers DON'T know about the most fundamental concepts ... things like the first thing grep mentioned. Learn how data is stored in the computer. What are big endian and little endian? Learn how data types are actually stored and manipulated AT THE HARDWARE LEVEL. Learn how radix number systems work. You can do a huge amount of programming without knowing any of this, but the first time you get a funky result, you will most likely not have a clue what is going on.
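    For instance, here is a minimal C sketch (assuming a 4-byte int; the output depends on your hardware) that shows the byte order of an integer as it sits in memory:

    Code (C):
    /* Little-endian machines (e.g. x86) print 04 03 02 01; big-endian machines print 01 02 03 04. */
    #include <stdio.h>

    int main(void)
    {
        unsigned int value = 0x01020304;
        unsigned char *bytes = (unsigned char *)&value;

        for (size_t i = 0; i < sizeof value; i++)
            printf("%02X ", bytes[i]);
        printf("\n");
        return 0;
    }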

    For example, I'm always astounded by how many programmers do not instantly understand why the following is false

    1.4 + 2.6 = 4.0

    [by the way, if anyone sees this and doubts my sanity and wants to argue about it, I am not interested. I have explained this for over 40 years and am tired of it. see here: http://www.vbforums.com/showthread.php?s=&threadid=211054 ]
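    For anyone who wants to look at the representation itself rather than argue about it, here is a minimal C sketch (assuming typical IEEE 754 doubles). It shows that values like 1.4 and 2.6 are not stored exactly; how any particular comparison then comes out depends on the types, precision, and rounding involved, which is exactly why you need to understand the representation.

    Code (C):
    /* Printing 17 significant digits shows what is actually stored. */
    #include <stdio.h>

    int main(void)
    {
        printf("1.4 is stored as       %.17g\n", 1.4);        /* 1.3999999999999999 */
        printf("2.6 is stored as       %.17g\n", 2.6);        /* 2.6000000000000001 */
        printf("0.1 + 0.2 evaluates to %.17g\n", 0.1 + 0.2);  /* 0.30000000000000004, not 0.3 */
        return 0;
    }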
     
    Last edited: Jul 10, 2011
  9. Jul 10, 2011 #8
    Very well said, phinds. My point exactly.
     
  10. Jul 10, 2011 #9
    Indeed. Never test equality on floats :)
    (has to do with how some numbers cannot be represented precisely in binary)
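    A minimal C sketch of the usual alternative: compare against a tolerance instead of using ==. The tolerance value here is an arbitrary choice for illustration; pick one that suits the scale of your problem.

    Code (C):
    #include <stdio.h>
    #include <math.h>

    /* Treat two doubles as equal if they differ by no more than a tolerance. */
    static int nearly_equal(double a, double b, double tol)
    {
        return fabs(a - b) <= tol;
    }

    int main(void)
    {
        double sum = 0.1 + 0.2;

        printf("== says:        %s\n", sum == 0.3 ? "equal" : "not equal");                    /* not equal */
        printf("tolerance says: %s\n", nearly_equal(sum, 0.3, 1e-9) ? "equal" : "not equal");  /* equal */
        return 0;
    }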
     
  11. Jul 10, 2011 #10

    phinds


    Which is EXACTLY what I am talking about. Simply knowing that "it has something to do with ..." is not enough, Kingkong; you need to understand exactly WHY it is that way, and this is just one of many such fundamentals that good programmers understand and bad ones don't. So knowing, as matrix does, THAT there are certain "rules" you can follow doesn't help nearly as much as actually understanding WHY the rules exist.
     
  12. Jul 10, 2011 #11
    I agree with the comment that learning things at their most fundamental level is the way to start. However, the book I'm using for C doesn't have much information about how computers actually store data (other than the fact that it is stored in binary).

    If you know any good readings about programming or CS in general, please let me know.
     
  13. Jul 10, 2011 #12

    phinds


    Wish I did but I can't help you with that. Been away from formal studies for much too long. I learned C from the White Book but I don't even remember where I learned hardware-oriented fundamentals.
     
  14. Jul 11, 2011 #13

    SixNein


    What do you mean by programming exactly?

    Programming is just a language, so practice is the real key to becoming proficient.

    On the other hand, if you are asking about the design of software, here are a few useful tips...

    1. Develop mathematical maturity.
    2. Don't design a program by simply programming it; instead, start with the inputs and outputs, and develop a strategy for achieving the goals.
    3. Comment in complete sentences.
    4. Comment often.
    5. Learn to pull a design from existing code.
    6. Prove a design works before programming it.
    7. Test as many paths as possible for errors.
    8. Learn algorithms and their strengths and weaknesses. For example, what is the most efficient way to sort 1 million integers? (See the sketch after this list.)
    9. Learn enough about software patents to become pissed off.
    10. Learn why open source is not necessarily better than closed source. Hint: how a project is designed is more important than closed/open ideologies.
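    On point 8, here is a minimal C sketch using the standard library's qsort to sort integers. For a million machine integers an O(n log n) comparison sort like this is usually fine; if the key range is small, a counting or radix sort can do it in O(n), and knowing when each applies is exactly the kind of strength/weakness trade-off worth studying.

    Code (C):
    #include <stdio.h>
    #include <stdlib.h>

    /* Comparison callback for qsort, written to avoid the overflow a naive "return x - y;" can cause. */
    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a;
        int y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int data[] = { 42, 7, 19, 3, 88, 7 };
        size_t n = sizeof data / sizeof data[0];

        qsort(data, n, sizeof data[0], cmp_int);

        for (size_t i = 0; i < n; i++)
            printf("%d ", data[i]);          /* prints: 3 7 7 19 42 88 */
        printf("\n");
        return 0;
    }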
     
  15. Jul 11, 2011 #14

    SixNein


    If a number library is used, the result could be true. So it's a matter of implementation.
     
  16. Jul 11, 2011 #15

    phinds


    Yes, there in fact used to be machines that were decimal at the hardware level, and for them it would be true. Also, some languages on some machines will do some automatic rounding which makes it LOOK true. Also, there are data constructs (e.g. CURRENCY) that will also make it true.

    None of that is my point. My point is that the way most machines store floating point numbers causes this result and it is important to understand why.

    In fact, your point as stated makes MY point, which is WHY does this occur? Why should some implementations show it as true and some not? Understanding the hardware and data constructs is what I'm driving at.
     
  17. Jul 11, 2011 #16

    rcgldr



    There still are machines that support decimal math. The IBM System/390 and most business-oriented mainframes fully support BCD (Binary Coded Decimal). So do Intel CPUs in PCs, if you bother to use or create a library based on their BCD instructions. Most business and almost all (if not all) bank accounting systems use decimal-based math to avoid issues with binary-versus-decimal floating point conversion.
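    To make the idea concrete, here is a minimal C sketch of packed BCD addition, one decimal digit per nibble. It is not tied to any particular instruction set and is limited to two-digit values purely for illustration; the point is that the arithmetic stays exactly decimal.

    Code (C):
    #include <stdio.h>

    /* Add two packed-BCD bytes, each holding 00-99 with one decimal digit per nibble. */
    static unsigned char bcd_add(unsigned char a, unsigned char b)
    {
        unsigned char lo = (a & 0x0F) + (b & 0x0F);
        unsigned char carry = 0;
        if (lo > 9) { lo -= 10; carry = 1; }
        unsigned char hi = (a >> 4) + (b >> 4) + carry;
        if (hi > 9) hi -= 10;                 /* toy example: silently wraps past 99 */
        return (unsigned char)((hi << 4) | lo);
    }

    int main(void)
    {
        unsigned char x = 0x38, y = 0x25;     /* packed BCD for 38 and 25 */
        printf("%02X\n", bcd_add(x, y));      /* prints 63: exact decimal arithmetic */
        return 0;
    }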

    Getting back on topic, part of becoming a good programmer involves some form of specialization, for example writing interrupt driven device drivers versus writing math intensive code. There are some common generic aspects to being a good programmer, but a lot of this is the result of experience with a variety of program types and algorithms.
     
  18. Jul 12, 2011 #17
    Learning about data structures and algorithms as suggested above is essential. Learning other languages, particularly object-oriented languages like Python, Ruby, or C#, makes these concepts much easier. Spend some time playing with some of these languages, many of which are available for free as open source, and find one you really like. Most professional programmers know more than one language, but frequently have a favorite (mine is Python).

    Explore new features and study other people's code. Take a program and study it until you fully understand how it works, then add new features or improve the algorithms to work the way YOU want them to. Take your old programs and rewrite them to take advantage of object-oriented features, exception handling, or other techniques you're learning.

    I've been programming professionally (i.e. being paid) for over 30 years now (plus 8 years before I got paid) and am still learning new things. While you said "don't just say 'keep practicing'", the fact remains that you need to write a lot of code to become proficient. Read as much as you can about how to program new features, but you also need to practice. Once you've finished your program, throw it away and do it better.
     
  19. Jul 17, 2011 #18
    I read that thread and I must say (perhaps for the benefit of the readers on this forum) you were a little hard on a bunch of Visual Basic programmers for not knowing IEEE 754. There's no reason to expect BASIC programmers to understand hardware implementation details...

    Incidentally, despite what you think, whether a number is irrational or not is independent of whatever number system you use to represent said number. It doesn't matter whether you represent the number in decimal, hexadecimal or binary, a rational number is rational and an irrational number is irrational...
     
  20. Jul 17, 2011 #19

    phinds


    That is incorrect. Do the math.
     
  21. Jul 17, 2011 #20

    rcgldr


    A bit off topic, maybe this should be split off into a separate thread.

    It's not false in languages like APL, which have a tolerance factor for comparisons.

    Am I missing something here? Every rational number can be represented as the ratio of two finite-sized integers, regardless of whether the number is stored as decimal or binary (my calculator has a fractional display option, but the size of the integers is limited). An irrational or transcendental number cannot be represented as the ratio of two finite-sized integers. I don't see how the way a number is stored in a computer changes this.
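    A concrete illustration (the stored value below assumes a standard IEEE 754 double):

    $$0.1 \;=\; \tfrac{1}{10} \;=\; 0.0\overline{0011}_2 \quad\text{(rational in any base)}, \qquad \operatorname{double}(0.1) \;=\; \tfrac{3602879701896397}{2^{55}} \;\approx\; 0.100000000000000006$$

    The machine never stores 1/10 itself; it stores the nearest terminating binary fraction, which is a different (but still rational) number.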
     
    Last edited: Jul 17, 2011



