How do you handle algorithms (memorize vs. reference)?

In summary: the thread starts from a second-semester C++ course that has covered classes and arrays and is moving on to algorithms and pointers. The four basic algorithms from the algorithms chapter (linear search, binary search, bubble sort, and selection sort) prompt the question of how to approach them: memorize the pseudocode for quicker implementation, or reference code as needed? The replies suggest learning the simplest algorithm well and understanding the others for future use, relying on documentation and existing code rather than memorization, and bringing in outside experts for complex algorithms.
  • #1
MinusTheBear
Hey everyone,

I'm in my second semester of programming, which covers an introduction to C++ (the prior course was Python). We just covered classes and arrays, and we're moving on to an introduction to algorithms and pointers (we're currently mid-semester).

In the algorithms chapter, they covered four "basic" algorithms: linear search, binary search, bubble sort, and selection sort. My question is, how do you approach these? Not just from a professional standpoint, but an academic one as well.

Is it useful to memorize the pseudocode to be able to implement it more quickly? Or do you tend to just reference it as you need it? Next semester I start an introductory data structures course, and the semester after that there's a mandatory data structures and algorithms course. With that in mind, how should I treat these?

I'm sure some of them become more obvious as you program more, and some are fairly obvious to code (like linear search). But binary search and selection sort don't come easily to me, and I often resort to looking up the pseudocode.

I can usually sit down with pen and paper and write pseudocode for them, but I'll normally be missing a variable or something relatively minor that only becomes obvious when coding it. To me, though, this is less efficient than just looking it up to begin with.
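
For concreteness, here's the binary search I end up looking up every time, written as a minimal iterative sketch (assuming a sorted std::vector<int>; the lo/hi bookkeeping is exactly the kind of detail I keep missing):

Code:
#include <vector>

// Iterative binary search: returns the index of key in the sorted
// vector a, or -1 if it is absent.
int binary_search(const std::vector<int>& a, int key) {
    int lo = 0;
    int hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {                 // <=, not <: a one-element range is valid
        int mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi)
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;              // key is in the upper half
        else
            hi = mid - 1;              // key is in the lower half
    }
    return -1;                         // not found
}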

How do you handle algorithms yourself?
 
  • #2
I'm very lazy. I learned early on that the mantra of a good (that is, effective) programmer is STEAL EVERYTHING, and that very much includes your own notes from the first time you implement an algorithm. I use a LOT of quicksort routines, but I haven't actually looked seriously at one in decades. I just copy over the code and adapt it to the structure I'm sorting.
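
Something along these lines is what gets copied and adapted each time (a textbook Lomuto-partition sketch, not my actual routine):

Code:
#include <utility>
#include <vector>

// Textbook quicksort with Lomuto partitioning (pivot = last element).
// Call as: quicksort(v, 0, static_cast<int>(v.size()) - 1);
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;            // ranges of size 0 or 1 are sorted
    int pivot = a[hi];
    int i = lo;                      // boundary of the "< pivot" region
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot)
            std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);          // move the pivot into its final slot
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}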
 
  • #3
Learn how to implement the simplest one, and understand the others well enough that you can borrow their code when the need arises. A good plan is to modularize your program so your cheap algorithm can easily be replaced by a more robust one if that need ever comes.
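
For example, a minimal sketch of that kind of modularization in C++ (hypothetical names; the point is that callers never know or care which algorithm is behind the function):

Code:
#include <algorithm>
#include <vector>

namespace mylib {

// Every caller goes through this one entry point. Today it delegates
// to std::sort; if a more robust or specialized algorithm is needed
// later, only this function changes, not the callers.
inline void sort_values(std::vector<int>& v) {
    std::sort(v.begin(), v.end());
}

}  // namespace mylib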

Remember, Google is your friend. Be aware that when developing commercial software you can't always borrow code from somewhere without proper attribution and verification that it can legally be used in the way intended.

On one project I was on, we had to rewrite an API implementation because we couldn't just take the code: it was under a copyright notice, even though the API itself was open source.
 
  • #4
C programmers - in self-defense - maintain a large library of good basic algorithms, exactly what @phinds describes. Two points:
1. Do not rewrite a functioning library routine. If it works, leave it alone (unless misguided managers tell you otherwise).

2. Good algorithms contribute at least 80% of processing speed; minor tweaks do very little. Cool tweaks - you know, cool code, like the integer xor swap sketched below - don't do it except for playtime. Somebody else will eventually have to work on your production code, and that person will build voodoo dolls in your image and abuse them if you do pointless stuff like that. Compilers are VERY smart; unnecessarily intricate code defeats them.
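
For the curious, here is that xor swap, purely for playtime (note it silently zeroes the value if both references alias the same variable, which is why the guard is there; std::swap compiles to code at least as good):

Code:
#include <utility>

// The "cool" integer xor swap. Playtime only.
void xor_swap(int& a, int& b) {
    if (&a == &b) return;  // xor-swapping a variable with itself zeroes it
    a ^= b;
    b ^= a;
    a ^= b;
}

// What you should actually write. The compiler knows what to do.
void plain_swap(int& a, int& b) { std::swap(a, b); }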

If you want the standard bit-twiddling diddles that most people more or less accept (in C, anyway), go to:

https://graphics.stanford.edu/~seander/bithacks.html
 
  • #5
MinusTheBear said:
In the algorithms chapter, they covered four "basic" algorithms: linear search, binary search, bubble sort, and selection sort. My question is, how do you approach these?
I doubt you'd have trouble implementing a linear search. There isn't much practical use for bubble sort or selection sort; most "real world" sorts are variations of quicksort or merge sort. For some reason unknown to me, top-down merge sort is taught much more often than bottom-up merge sort, although most "real world" merge sorts are variations of bottom-up merge sort.
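
Since bottom-up merge sort is seen less often in class, a minimal sketch (iterative passes merging runs of width 1, 2, 4, ...; real-world versions add optimizations on top of this skeleton, so this is illustration only):

Code:
#include <algorithm>
#include <vector>

// Bottom-up merge sort: no recursion, just passes that merge adjacent
// runs of doubling width until one run covers the whole vector.
void merge_sort(std::vector<int>& a) {
    const std::size_t n = a.size();
    std::vector<int> buf(n);
    for (std::size_t width = 1; width < n; width *= 2) {
        for (std::size_t lo = 0; lo < n; lo += 2 * width) {
            std::size_t mid = std::min(lo + width, n);
            std::size_t hi  = std::min(lo + 2 * width, n);
            // Merge the sorted runs a[lo, mid) and a[mid, hi) into buf.
            std::merge(a.begin() + lo, a.begin() + mid,
                       a.begin() + mid, a.begin() + hi,
                       buf.begin() + lo);
        }
        a.swap(buf);  // this pass wrote all of buf; make it the input
    }
}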

A lot of algorithms are complex enough that most programmers use documentation, pseudocode, or actual code instead of trying to code them from memory. In some cases, programmers work with specific algorithms long enough that they memorize them to the point of recognizing errors just by reading the code, but that's not the norm. Even if a programmer knows an algorithm well enough to write it from scratch, unless it's for the learning experience, it's somewhat a waste of time not to take advantage of existing code or examples. Sometimes a company will be working on something none of its employees know well, so it hires outside "experts" to assist with the algorithms and/or implementation, who often also teach some of the in-house programmers those algorithms. Some companies also fund research to develop new algorithms or improve current ones. For example, the University of California, San Diego has a Center for Magnetic Recording Research (CMRR) that is funded by many companies, which occasionally benefit from the results of that research.

jim mcnamara said:
Good algorithms contribute at least 80% of processing speed
Probably more like 90+%, but there are cases where intricate code is involved or a complex algorithm is used to save gates in a hardware design. As an example of intricate code, an extremely optimized implementation of CRC generation using some special X86 instructions takes around 500 lines of assembly code. AES encryption involves finding 1/x in finite-field math. In software this would be done with a 256-byte lookup table, but with 10 or more AES encoders on a hardware device, a complicated algorithm (the result of accumulated research by quite a few experts) that "maps" an 8-bit field onto four 2-bit fields is used instead to reduce the gate count.
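
For reference, a minimal sketch of that software lookup-table approach: building the 256-byte 1/x table for the AES field GF(2^8). The table is built here by brute force just for illustration; real code typically generates it from powers of a generator. The hardware mapping trick exists precisely to avoid storing a table like this per encoder.

Code:
#include <array>
#include <cstdint>

// Multiply in GF(2^8) using the AES reduction polynomial
// x^8 + x^4 + x^3 + x + 1 (0x11B).
std::uint8_t gf_mul(std::uint8_t a, std::uint8_t b) {
    std::uint8_t p = 0;
    for (int i = 0; i < 8; ++i) {
        if (b & 1) p ^= a;     // add a if the low bit of b is set
        bool carry = a & 0x80;
        a <<= 1;
        if (carry) a ^= 0x1B;  // reduce modulo the AES polynomial
        b >>= 1;
    }
    return p;
}

// Brute-force the 1/x table: inv[x] is the y with gf_mul(x, y) == 1.
std::array<std::uint8_t, 256> make_inverse_table() {
    std::array<std::uint8_t, 256> inv{};  // inv[0] stays 0 by AES convention
    for (int a = 1; a < 256; ++a)
        for (int b = 1; b < 256; ++b)
            if (gf_mul(static_cast<std::uint8_t>(a),
                       static_cast<std::uint8_t>(b)) == 1) {
                inv[a] = static_cast<std::uint8_t>(b);
                break;
            }
    return inv;
}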
 
  • #6
I'm also in the STEAL EVERYTHING camp.

Usually, I implement something once, then forget how it works beyond a basic overview. For example, I could explain a bubble sort, but the class I actually call "bubble sort" has gone through all sorts of evolutions.
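
For instance, the first "evolution" over the textbook version is usually an early-exit flag (a minimal sketch, nothing like the class I actually use):

Code:
#include <utility>
#include <vector>

// Bubble sort with the usual first refinement: stop as soon as a full
// pass makes no swaps, because that means the vector is already sorted.
void bubble_sort(std::vector<int>& a) {
    for (std::size_t n = a.size(); n > 1; --n) {
        bool swapped = false;
        for (std::size_t i = 1; i < n; ++i) {
            if (a[i - 1] > a[i]) {
                std::swap(a[i - 1], a[i]);
                swapped = true;
            }
        }
        if (!swapped) break;  // no swaps in this pass: done early
    }
}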

Usually, you just start to know how these things work. If you use strings a lot, you'll become an expert at the low-level algorithms, because you'll probably end up writing faster, more specialized ones.

There is an old joke that only one makefile has ever been written. Everyone else copies and erases.
 
  • #7
Memorizing the pseudocode will not work; you will forget it in no time. Try to understand the algorithm instead, and then recreate the code yourself. Do this with each algorithm and you will become very good at programming.

I am the type who likes to reinvent the wheel in programming (which is a disadvantage sometimes because it slows me down, but it made coding much easier for me).
 
  • #8
MinusTheBear said:
Is it useful to memorize the pseudocode to be able to implement it more quickly? Or do you tend to just reference it as you need it? ... How do you handle algorithms yourself?

As has already been mentioned, linear search, bubble sort, and selection sort are used mainly for educational purposes, but that's good for a proper CS education: they are introduced early enough and give some context. There is no important professional standpoint regarding these. Binary search is a different story, though: learning it serves many purposes, and you will encounter it, or something based on it, many times if you follow a career in programming.

Now, about whether it is useful to memorize the pseudocode or not: it reminds me of similar concerns I had many years ago. As you progress to more advanced programming concepts and develop complex programs, you'll see that everything you tried to memorize simply fades away. So there's no point in memorizing pseudocode or code. Beginning with the simpler algorithms, like the ones you mention, try to figure out how they work and try to spot the properties and differences among them, from a mathematical standpoint, that are distinct to each. One thing I personally did was to follow the pseudocode in an exhaustive manner with pen and paper and see what it gives.
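
As a concrete example of that pen-and-paper approach, here is selection sort with the exhaustive trace written out as comments (a minimal sketch):

Code:
#include <utility>
#include <vector>

// Selection sort: each pass finds the minimum of the unsorted tail and
// swaps it into place.
//
// Pen-and-paper trace on {5, 2, 4, 1}:
//   i=0: min of {5,2,4,1} is 1 (index 3) -> swap  -> {1, 2, 4, 5}
//   i=1: min of {2,4,5}   is 2 (index 1) -> stays -> {1, 2, 4, 5}
//   i=2: min of {4,5}     is 4 (index 2) -> stays -> {1, 2, 4, 5}
void selection_sort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i) {
        std::size_t min_idx = i;
        for (std::size_t j = i + 1; j < a.size(); ++j)
            if (a[j] < a[min_idx])
                min_idx = j;
        std::swap(a[i], a[min_idx]);
    }
}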

In order to see algorithms in action in a more realistic context, you definitely need to learn advanced data structures. So be patient and persistent until you eventually reach that point.

A final point: try to implement as many algorithms as you can along the way, in more than one programming language. This way you'll learn programming better, and you'll also get a very useful view of the strengths and weaknesses of different programming languages.
 
  • #9
Rosettacode.org is a good resource for comparative programming of common algorithms and tasks written in a variety of languages.

Note that they aren't necessarily the best solutions, but they can give you insight into how something can be done in another language.
 
  • #10
Something I often do is compile groups of similar functions into a big header-only, zero-dependency library file.

At first, all of those sorts would be modular, each in its own header and probably a class file too. The thing is, after creating a dozen projects, I no longer want to link to all of those source files or go through making them standalone libraries that have to be imported. Eventually, I'd take a day and combine all of that into sort.h and just #include that one file from then on. It would probably have an outer "sort" namespace and then some nested namespaces inside.

IDEs are smart enough to use precompiled headers, and compilers are smart enough to strip out unused code, so this shouldn't slow development or bloat the executable.
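
A skeleton of what that sort.h ends up looking like (hypothetical names; header-only, with no dependencies beyond the standard library):

Code:
// sort.h -- every sort routine I keep, collected into one header.
#pragma once

#include <utility>
#include <vector>

namespace sort {
namespace basic {

// One routine shown; the others follow the same pattern in their own
// nested namespaces.
inline void insertion(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i)
        for (std::size_t j = i; j > 0 && a[j - 1] > a[j]; --j)
            std::swap(a[j - 1], a[j]);
}

}  // namespace basic
}  // namespace sort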
 
  • #11
MinusTheBear said:
But binary search and selection sort don't come easily to me, and I often resort to looking up the pseudocode. ... How do you handle algorithms yourself?
Hi there,

Nice to hear that people are still interested in programming.

When coding, the "what" is something you can copy from a book, or from memory if you feel confident enough. In my opinion, though, it is not the "what" but the need or purpose that should be the driving force behind algorithms. That is what you should remember and apply. A good programmer is one who uses the right tools for the job and understands the consequences of what an algorithm will do to the whole.

What I find lacking in more modern approaches to programming (assembling more than real coding) is that people take blocks of code and glue them together; usually the programmer has no idea what the blocks do or what their limitations are. The end result works, but when things go awry it becomes quite a challenge to figure out what is going wrong.

What I want to say is that using code has consequences, and by understanding code like sorting algorithms you open your mind in such a way that you start to see both its usefulness and its limitations. This is a much-appreciated skill and should be honed carefully. Understanding is the key.

Having been a programmer for more than 35 years, I remember my first steps, which were on a VIC-20 and a C64. I got to understand multiplications and divisions up close and personal to improve 3D projections and matrix calculations. I analyzed the code step by step to see what could be improved or done in fewer steps. This also made it easier to remember algorithms later on. Remembering the exact implementation of a bubble sort comes with time; just like playing the guitar or piano, it is practice and more practice. Some would say remembering algorithms is a useless skill; I say it is the first step to understanding.

Hope this helps.
 
  • #12
Michael27 said:
Nice to hear that people are still interested in programming
? "Still" interested in programming? That just strikes me as a very odd comment since computer programming is one of the largest occupation groups in the country today AND is done by a great many people who have other job titles (scientists for example).
 
  • #13
phinds said:
? "Still" interested in programming? That just strikes me as a very odd comment since computer programming is one of the largest occupation groups in the country today AND is done by a great many people who have other job titles (scientists for example).

I was being coy :)
 
  • #14
In practice, the only important things are knowing which algorithms are most useful in different use cases and the fundamental ideas they are based on. Pay close attention when reading your book or in your lecture, but for practical purposes don't worry about having them all memorized.

Having said that, there is one overriding reason to memorize them: when you interview for a job you will be grilled and tested on these topics, potentially intensely, and in many cases only the people who have had the foresight to memorize some potentially obscure algorithm or data structure will be competitive. You should also begin practicing the types of coding challenges that are best solved through creative or clever use of data structures and algorithms (this is good for you in general, too).
 

What is the difference between memorizing and referencing algorithms?

Memorizing algorithms means committing the steps and logic of an algorithm to memory, while referencing algorithms involves consulting a written or digital reference for the steps and logic.

Which method is more effective for handling algorithms?

The most effective method for handling algorithms will vary from person to person. Some individuals may find it easier to memorize algorithms, while others may prefer to reference them. It is important to experiment and find the method that works best for you.

Are there any downsides to memorizing algorithms?

Memorizing algorithms can be time-consuming and may lead to errors if the steps are not remembered correctly. Additionally, it may limit the ability to think creatively and find alternative solutions to a problem.

When is it appropriate to reference an algorithm instead of memorizing it?

Referencing an algorithm may be more appropriate when working on complex or unfamiliar problems, as it allows for easy access to the steps and logic. It may also be useful for algorithms that are rarely used or have many variations.

How can I determine which method is best for me?

The best way to determine which method works best for you is to practice both memorizing and referencing algorithms and see which one helps you understand and apply them more effectively. You may also seek advice from other experts in the field or consult with a mentor.
