
How do you handle algorithms (memorize vs. reference)?

  1. Mar 6, 2017 #1
    Hey everyone,

    I'm in my second semester of programming, which covers an introduction to C++ (the prior course was Python). We just covered classes and arrays, and are moving on to an introduction to algorithms and pointers (we're currently mid-semester).

    In the algorithms chapter, they covered four "basic" algorithms: linear search, binary search, bubble sort, and selection sort. My question is, how do you approach these? Not just from a professional standpoint, but an academic one as well.

    Is it useful to memorize the pseudocode to be able to implement it more quickly? Or do you tend to just reference it as you need it? Next semester I start an introductory data structures course, and the semester after that there's a mandatory data structures and algorithms course. So with that in mind, how should I treat these?

    I'm sure some of them become more obvious as you program more, and some are fairly obvious to code already (like linear search). But binary search and selection sort don't come easily to me, and I often resort to looking up the pseudocode.

    I can usually sit down with pen and paper and write pseudocode for it, but I'll normally be missing a variable or something relatively minor that only becomes obvious when coding it. That ends up being less efficient than just looking it up to begin with.
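
    For reference, the version I end up looking up is roughly this minimal iterative sketch in C++ (the function name and signature are just illustrative):

    Code (C++):

        #include <vector>

        // Iterative binary search over a sorted vector.
        // Returns the index of target, or -1 if it is not present.
        int binarySearch(const std::vector<int>& data, int target) {
            int low = 0;
            int high = static_cast<int>(data.size()) - 1;
            while (low <= high) {
                int mid = low + (high - low) / 2;  // avoids overflow of low + high
                if (data[mid] == target)
                    return mid;
                else if (data[mid] < target)
                    low = mid + 1;                 // target is in the upper half
                else
                    high = mid - 1;                // target is in the lower half
            }
            return -1;                             // not found
        }

    The details I keep forgetting are exactly the fiddly ones: the <= in the loop condition and the mid + 1 / mid - 1 updates.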

    How do you handle algorithms yourself?
     
  3. Mar 6, 2017 #2

    phinds

    Gold Member
    2016 Award

    I'm very lazy. I learned early on that the mantra of a good (that is, effective) programmer is STEAL EVERYTHING, and that very much includes your own notes from the first time you implement an algorithm. I use a LOT of quicksort routines but I haven't actually looked seriously at one in decades. I just copy over the code and adapt it to the structure I'm sorting.
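
    In modern C++ the "adapt it to the structure" step often reduces to handing std::sort (typically an introsort under the hood) a comparator for whatever record type is being sorted; a minimal sketch, with a made-up Employee struct just for illustration:

    Code (C++):

        #include <algorithm>
        #include <string>
        #include <vector>

        struct Employee {          // hypothetical record type
            std::string name;
            double      salary;
        };

        void sortBySalary(std::vector<Employee>& staff) {
            // Only the comparator changes from one structure to the next;
            // the sort routine itself is reused untouched.
            std::sort(staff.begin(), staff.end(),
                      [](const Employee& a, const Employee& b) {
                          return a.salary < b.salary;
                      });
        }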
     
  4. Mar 6, 2017 #3

    jedishrfu

    Staff: Mentor

    Learn how to implement the simplest one, and understand the others well enough that you can borrow the code when the need arises. A good plan is to modularize your program so that your cheap algorithm can easily be replaced by a more robust one if that need ever comes.
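
    A minimal sketch of that modular idea in C++ (names are only illustrative): hide the sort behind one function of your own, so that only this function changes when the cheap algorithm gets swapped for a better one.

    Code (C++):

        #include <algorithm>
        #include <vector>

        // The rest of the program only ever calls sortScores().
        // Today it delegates to std::sort; if a different algorithm is
        // ever needed, only this one function has to change.
        void sortScores(std::vector<int>& scores) {
            std::sort(scores.begin(), scores.end());
        }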

    Remember, Google is your friend. Be aware that when developing commercial software you can't always borrow code from somewhere without proper attribution and verification that it can legally be used in the way you intend.

    On one project I was on, we had to rewrite an API implementation because we couldn't just take the existing code: the code was under copyright, even though the API itself was open source.
     
  5. Mar 6, 2017 #4

    jim mcnamara


    Staff: Mentor

    C programmers - in self defense - have a large library of basic good algorithms - exactly what @phinds describes. Two points --
    1. Do not rewrite a functioning library routine. If it works, leave it alone. Unless misguided managers tell you otherwise.

    2. Good algorithms contribute at least 80% of processing speed; minor tweaks do very little. Cool tweaks, you know, cool code like an integer xor swap: don't do it except for playtime. Somebody else will eventually have to work on your production code, and that person will build voodoo dolls in your image and abuse them if you do pointless stuff like that. Compilers are VERY smart. Unnecessarily intricate code defeats them.
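
    For concreteness, the xor swap next to the boring version; the boring one is essentially what std::swap does, and the compiler handles it at least as well (a small illustrative sketch):

    Code (C++):

        void xorSwap(int& a, int& b) {   // the "cool" version: obscure, and wrong when &a == &b
            a ^= b;
            b ^= a;
            a ^= b;
        }

        void plainSwap(int& a, int& b) { // the boring version: obvious, and easy for the compiler
            int tmp = a;
            a = b;
            b = tmp;
        }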

    If you want standard bit twiddling diddles that most people kind of accept (in C anyway) go to:

    https://graphics.stanford.edu/~seander/bithacks.html
     
  6. Mar 7, 2017 #5

    rcgldr

    Homework Helper

    I doubt you'd have trouble implementing a linear search. There isn't much practical use for bubble sort or selection sort; most "real world" sorts are variations of quicksort or merge sort. For some reason unknown to me, top-down merge sort is taught much more often than bottom-up merge sort, although most "real world" merge sorts are variations of bottom-up merge sort.
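
    Since bottom-up merge sort rarely shows up in course material, here is a compact C++ sketch of the idea (it leans on std::inplace_merge to stay short; a real implementation would manage its own temporary buffer):

    Code (C++):

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Bottom-up merge sort: no recursion, just passes of doubling run width.
        void bottomUpMergeSort(std::vector<int>& a) {
            const std::size_t n = a.size();
            for (std::size_t width = 1; width < n; width *= 2) {
                for (std::size_t lo = 0; lo + width < n; lo += 2 * width) {
                    std::size_t mid = lo + width;
                    std::size_t hi  = std::min(lo + 2 * width, n);
                    // merge the two adjacent sorted runs [lo, mid) and [mid, hi)
                    std::inplace_merge(a.begin() + lo, a.begin() + mid, a.begin() + hi);
                }
            }
        }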

    A lot of algorithms are complex enough that most programmers work from documentation, pseudocode, or actual code instead of trying to code them from memory. In some cases programmers will work with specific algorithms long enough to memorize them to the point where they can recognize errors just by reading the code, but that's not the norm. Even if a programmer knows an algorithm well enough to write the code from scratch, unless it's for the learning experience it's somewhat a waste of time not to take advantage of existing code or examples.

    Sometimes a company will be working on something that none of its employees know well, so it hires outside "experts" to assist with the algorithms and/or implementation, and often to teach some of the in-house programmers about those algorithms along the way. Some companies also fund research to develop new algorithms or improve current ones. For example, the University of California, San Diego has a Center for Magnetic Recording Research (CMRR) that is funded by many companies, which occasionally benefit from the results of that research.

    Probably more like 90+% (referring to the 80% figure above), but there are cases where intricate code is involved or a complex algorithm is used to save gates in a hardware design. As an example of intricate code, an extremely optimized implementation of CRC generation using some special x86 instructions takes around 500 lines of assembly code. AES encryption involves finding 1/x in finite-field math; in software this would be done with a 256-byte lookup table, but with 10 or more AES encoders on a hardware device, a complicated algorithm (the result of accumulated research by quite a few experts) that "maps" an 8-bit field onto four 2-bit fields is used instead, in order to reduce the gate count.
     
    Last edited: Mar 8, 2017
  7. Mar 7, 2017 #6
    I'm also in the STEAL EVERYTHING camp.

    Usually I implement something once, then forget how it works beyond a basic overview. For example, I could explain a bubble sort, but the class I actually call "bubble sort" has gone through all sorts of evolutions.

    Usually you just start to know how these things work. If you use strings a lot, you'll become an expert in the low-level algorithms because you'll probably end up writing faster, more specialized ones.

    There is an old joke that only one makefile has ever been written. Everyone else copies and erases.
     
  8. Mar 7, 2017 #7
    Memorizing the pseudocode will not work; you will forget it in no time. Try to understand the algorithm instead and then recreate the code yourself. Do this with each algorithm and you will become very good at programming.
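
    For example, selection sort reduces to one sentence, "repeatedly move the smallest remaining element to the front," and once you hold onto that sentence the code more or less writes itself (an illustrative C++ sketch):

    Code (C++):

        #include <cstddef>
        #include <utility>
        #include <vector>

        // Selection sort: on each pass, find the smallest remaining element
        // and swap it into position i.
        void selectionSort(std::vector<int>& a) {
            for (std::size_t i = 0; i + 1 < a.size(); ++i) {
                std::size_t smallest = i;
                for (std::size_t j = i + 1; j < a.size(); ++j) {
                    if (a[j] < a[smallest])
                        smallest = j;
                }
                std::swap(a[i], a[smallest]);
            }
        }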

    I am the type who likes to reinvent the wheel in programming (which is sometimes a disadvantage because it slows me down, but it has made coding much easier for me).
     
  9. Mar 7, 2017 #8

    QuantumQuest

    Gold Member

    As has already been mentioned, the linear search, bubble sort, and selection sort algorithms are used mainly for educational purposes, but that is good for a proper CS education, as they are introduced early and give some context; there is no important professional angle to them. Binary search is a different story, though: learning it serves many purposes, and you will encounter it, or something based on it, many times if you follow a career in programming.

    Now, about whether it is useful to memorize the pseudocode or not: it reminds me of similar concerns I had many years ago. As you progress into more advanced programming concepts and develop complex programs, you'll see that everything you tried to memorize simply fades away. So there is no point in memorizing pseudocode or code. Beginning with the simpler algorithms, like the ones you mention, try to figure out how they work and try to spot the properties and differences, from a mathematical standpoint, that are distinct to each. One thing I personally did was follow the pseudocode exhaustively with pen and paper and see what it gives.
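
    For instance, searching for 23 in {4, 8, 15, 16, 23, 42} on paper goes: low = 0, high = 5, mid = 2, and 15 < 23, so low becomes 3; then low = 3, high = 5, mid = 4, and 23 is found at index 4. Two probes instead of the five a linear search would need, and writing out a few traces like this is what makes the low/high updates stick.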

    In order to see algorithms in action in a more realistic context, you definitely need to learn advanced data structures, so be patient and persistent until you reach that point.

    A final point: try to implement as many algorithms as you can along the way, using more than one programming language. This way you'll learn programming better, and you'll also get a very useful view of the strengths and weaknesses of different languages.
     
    Last edited: Mar 7, 2017
  10. Mar 7, 2017 #9

    jedishrfu

    Staff: Mentor

    Rosettacode.org is a good resource for comparative programming of common algorithms and tasks written in a variety of languages.

    Note, they aren't necessarily the best solutions, but they can give you insight into how something can be done in another language.
     
  11. Mar 8, 2017 #10
    Something I often do is compile groups of similar functions into a big header-only, zero-dependency library file.

    At first, all of those sorts would be modular, each in its own header and probably a class file too. The thing is, after creating a dozen projects, I no longer want to have to link all of those source files, or go through making them standalone libraries that have to be imported. Eventually I'd take a day and combine all of that into sort.h and just #include that one file from then on. It would probably have an outer "sort" namespace and then some nested namespaces inside.
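
    Something like this hypothetical sort.h, just for illustration:

    Code (C++):

        // sort.h -- header-only, zero-dependency sorting utilities (illustrative sketch)
        #pragma once
        #include <cstddef>

        namespace sort {
        namespace basic {

        // Templates have to live in the header anyway, which is part of what
        // makes the single-header approach convenient.
        template <typename T>
        void bubbleSort(T* data, std::size_t n) {
            for (std::size_t pass = 0; pass + 1 < n; ++pass) {
                for (std::size_t i = 0; i + 1 < n - pass; ++i) {
                    if (data[i + 1] < data[i]) {
                        T tmp       = data[i];
                        data[i]     = data[i + 1];
                        data[i + 1] = tmp;
                    }
                }
            }
        }

        } // namespace basic
        } // namespace sort

    After that it's just #include "sort.h" and a call like sort::basic::bubbleSort(data, n) from any project.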

    IDEs are smart enough to use precompiled headers and compilers are smart enough to strip out unused code, so it shouldn't degrade development or bloat the executable size.
     
  12. Mar 9, 2017 #11
    Hi there,

    Nice to hear that people are still interested in programming.

    When coding, the "what" is something you can copy, take from a book, or write from memory if you feel confident enough. However, in my opinion it is not the "what" but the need/purpose that should be the driving force behind algorithms. That is what you should remember and apply. A good programmer is one who uses the right tools for the job and understands the consequences of what an algorithm will do to the whole.

    What I find lacking in more modern approaches to programming (assembling rather than real coding) is the habit of taking blocks of code and gluing them together while the programmer has no idea what the blocks do or what their limitations are. The end result works, but when things go awry it becomes quite a challenge to figure out what is going wrong.

    What I want to say is that using code has consequences, and by understanding code like sorting algorithms you open your mind in such a way that you start to see both the usefulness and the limitations. This is a much-appreciated skill and should be honed carefully. Understanding is the key.

    Having been a programmer for more than 35 years, I still remember my first steps, which were on a VIC-20 and a C64. I got to know multiplication and division up close and personal while improving 3D projections and matrix calculations, analyzing the code step by step to see what could be improved or done in fewer steps. That also made it easier to remember algorithms later on. Remembering the exact implementation of a bubble sort comes with time; just like playing the guitar or piano, it is practice and more practice. Some would say remembering algorithms is a useless skill. I say it is the first step to understanding.

    Hope this helps.
     
  13. Mar 9, 2017 #12

    phinds

    Gold Member
    2016 Award

    ??? "Still" interested in programming? That just strikes me as a very odd comment since computer programming is one of the largest occupation groups in the country today AND is done by a great many people who have other job titles (scientists for example).
     
  14. Mar 10, 2017 #13
    I was being coy :)
     
  15. Mar 10, 2017 #14
    In practice, the only important thing is knowing which ones are the most useful in different use cases, and the fundamental ideas they are based on. You should pay close attention when reading your book, or in your lecture, but for practical purposes not worry about having them all memorized.

    Having said that, there is an overriding reason to memorize them. When you interview for a job you will be grilled and tested on these issues, potentially intensely, and in many cases only the people who have had the foresight to memorize some potentially obscure algorithm/data structure will be competitive. Also you should begin practicing the types of coding challenges that are best solved through creative/clever use of data structures and algorithms (this is actually good for you in general).
     
    Last edited: Mar 10, 2017