# Intro to Algorithms for Programming

Many threads here at PF include questions about how to learn to program. They come from physics students who want to learn programming in order to help their studies, work on a project, or create a simulation, among other things, but also from students in other fields of science and from other members, including self-learners. Obviously, no exhaustive coverage of this subject can exist. There are a great many textbooks and online resources covering its various aspects, at various levels, from various points of view, from theoretical and practical standpoints and everything in between; and many practical aspects are learned only through experience. This insight about algorithms is the first part of a series that will try to present a number of fundamental notions, both theoretical and practical, along with explanations and tips, in order to address the needs of a beginning programmer to a (hopefully) good extent. Last but not least, the mathematics needed by anyone wishing to become a decent programmer is the mathematics that is prerequisite in a formal Computer Science curriculum, i.e. mainly Calculus, Linear Algebra, and Discrete Mathematics.


**Introduction**

In the last fifty years or so, *programming* has been recognized as a field providing fundamental and critical knowledge that contributes to the success of many jobs, endeavors, and tasks, and one that can be treated and stated in a scientific manner. The field of programming has evolved from craftsmanship into an academic activity. The first important contributions to this end were made by *E.W. Dijkstra* and *C.A.R. Hoare*. Dijkstra's paper "Notes on Structured Programming"##^{[1]}## opened a new way of viewing programming as a scientific subject and an intellectual challenge, and Hoare's paper "An Axiomatic Basis for Computer Programming"##^{[2]}## showed that programs are amenable to precise analysis based on mathematical reasoning. These two papers focused on the development and analysis of programs, and more specifically on the structure of the *algorithms* that live in these programs. It also became evident that the scientific approach to the development of programs matters most for big, complicated programs with complicated data sets. This way, a new methodology for programming, capable of solving the issues of data structuring, was delineated. A very important contribution that put the confusing terminology and notions of data structures in order was Hoare's "Notes on Data Structuring"##^{[3]}##. Through this paper, it became evident that no decision can be made about data structuring without knowing the algorithms that will work on the data and, conversely, that the choice and structure of algorithms depend heavily on the choice of data structures. Algorithms and data structures are thus two intimately related notions.

After this brief history, we will first deal with these two fundamental and extremely important notions: algorithms and data structures.

**Algorithms**

The concept of an algorithm is very old. Its use can be ascribed to Greek mathematicians, but the term *algorithm* itself derives from the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī##^{[4]}##. But what is an algorithm?

**Problem**

We will begin this discussion by first addressing what a *problem* is. In general, a problem is a difficult situation that we wish to resolve in some way. In the context of the present discussion, a problem represents a situation in which we need to perform some computation in order to get a result or to aid in the solution of another, bigger problem.

Problems are divided according to their solvability into three categories:

- Solvable, for which a solution is already known
- Unsolvable, for which it has been proved that no solution exists
- Open, for which a solution has not been found yet, and it is not known whether one can be found

Solvable problems can be further categorized according to their structure. The structure of a problem refers to the individual subproblems of which the original problem consists:

- Unstructured: they admit many potential solutions, but no specific one
- Half-structured: they have a limited number of potential solutions
- Structured: their solution can be automated. The problems in this category can be:
  - Decision problems: problems answered with a "Yes" or "No"
  - Computational problems: problems that require computations
  - Optimization problems: problems of finding the optimal (most efficient) solution

But what are the reasons for using a computer to solve a problem? There are mainly four:

- Repetitive operations
- Complexity of computations
- Speed of execution
- Large data volumes

Next, we come to our main question of what is an algorithm.

**Definition of an algorithm**

Informally speaking, an algorithm is a sequence of steps, a kind of "recipe" we can follow, in order to solve some problem. This is not a particularly useful definition for a programmer: it is very general, while the computer has finite resources and our lives are also finite. This brings us to the formal definition of an algorithm: an algorithm is a finite sequence of strictly defined actions (steps) that can be executed in a finite timespan for the purpose of solving a problem.

**Criteria for an algorithm**

Algorithms must satisfy five criteria:

- Input: an algorithm must be able to accept input (if needed).
- Output: an algorithm must always provide output.
- Definiteness: an algorithm must be decisive, i.e. its behavior must be explicitly described under any circumstances.
- Finiteness: an algorithm must finish its job after a finite number of steps.
- Effectiveness: each and every step of the algorithm must be clear, i.e. straightforward to carry out.

**Representations of algorithms**

But how can we represent an algorithm? There are various ways to do this. The four most widely used are

- Using natural language
- Using natural language in a step-wise fashion
- Drawing a flowchart
- Writing pseudocode

I’ll give an example of an algorithm represented by i) Natural language in steps, ii) Flowchart.

*Example 1*

*Problem:* A number and its order of magnitude are given via the keyboard of a computer. We are asked to write an algorithm that computes and displays the most significant digit (MSB) and the least significant digit (LSB) of the number.

*Solution:* i)

- Read an integer
- Read the order of magnitude of the number
- Divide the number by the order of magnitude and put the quotient to MSB
- Divide the number by 10 and put the remainder to LSB
- Display MSB and LSB

ii) *(flowchart not reproduced here)*

From the above example, we see that representing an algorithm in natural language with steps has the advantage of being very close to how we humans think, but it is not so close to an implementation in a programming language, as it remains too abstract for that. The situation is even worse in this regard if we use plain natural language. Flowcharts, as in ii) above, use various shapes to state what each step represents, i.e. an oval shape for start, a rectangle for a process, a parallelogram for input/output of data, a diamond (rhombus) for a decision, etc., so they are more descriptive than natural language, but they are not used so much anymore. For more on flowcharts see Wikipedia##^{[5]}##.
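Since the whole point is getting from steps to a real implementation, here is a sketch of the steps of Example 1 in Python (I use Python throughout purely for illustration; the function and variable names are mine, not part of the original problem statement):

```python
# Sketch of Example 1: find the most and least significant digits.
# Assumes the order of magnitude is given correctly, e.g. 1000 for 5432.

def msb_lsb(number, magnitude):
    msb = number // magnitude  # quotient of number / order of magnitude
    lsb = number % 10          # remainder of number / 10
    return msb, lsb

print(msb_lsb(5432, 1000))  # (5, 2)
```

Note how each line corresponds to one step of the natural-language version, with integer division and the remainder operator doing the work.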

In order to stay as close as we can to the human way of thinking, but at the same time close to an implementation in a programming language, we use *pseudocode*. Pseudocode is an informal high-level description of the operating principle of a computer program: it uses the structural conventions of a programming language in its representation of an algorithm but is intended for human reading, i.e. it cannot be executed directly on a computer. Because it is close to a programming-language implementation, it takes various forms, depending on which programming language we intend to use. In earlier times, Pascal-like pseudocode, i.e. pseudocode with the structural conventions of the Pascal language, was widespread. Now, C-like pseudocode is much more common for educational purposes in the development of procedural programs. Below, I give a list of the elements used in Pascal-like pseudocode, not just for historical reasons but because, although it differs in various respects from other pseudocode styles, it still describes an algorithm in a way that can easily be transformed into a real program in any procedural programming language.

| Element | Pseudocode notation |
| --- | --- |
| First line | Algorithm <name> |
| Last line | end <name> |
| Comments | ! |
| Assignment | variable ##\leftarrow## expression |
| Simple conditional | if <condition> then <series of actions to be executed> end_if |
| Composite (multi-part) conditional | if <condition> then <series of actions_1 to be executed> else <series of actions_2 to be executed> end_if |
| Multi-case conditional | case <expression> of <case 1> <series of actions_1 to be executed> … <case n> <series of actions_n to be executed> else <series of actions_else to be executed> end_case |
| Repetition | while <condition> do <series of actions to be executed> end_while, or for <variable> from <start> to <end> [step <step>] <series of actions to be executed> end_for (as used in the examples below) |
| Input | read <variable(s)_name(s)> |
| Output | write <elements list> |
| Constants | defined through assignment to a variable |
| Variables (Input-Output) | Data //…//, Results //…// |
| Functions | no specific functions are defined; we use whichever we need (e.g. mathematical functions) |

Now, a few words about the above elements. *Comments* are explanatory text intended for people reading the pseudocode. An *assignment* gives a value to a variable. A *conditional structure* is used to execute a series of actions if a certain condition is met. A *case structure* offers better readability and helps avoid deep nesting of simpler conditional structures, provided it is appropriate for the case at hand. *Repetition structures* are used to execute a specific series of actions while (i.e. as long as) a certain condition is fulfilled, or for a specific number of times. A *constant* takes a specific value that does not change during the execution of the algorithm (i.e. of the program that implements it). In contrast, a *variable* can take different values within a specific range, e.g. the integers. As *operators* for the math we use ##+##, ##-##, ##*##, ##/##, ##=##, ##>##, ##<##, ##\geq##, ##\leq## as known from math, *mod* for the remainder operator, and *<>* for "not equal".

All the above elements give a pretty good idea of how to describe an algorithm in pseudocode. I have omitted more advanced constructs, like procedures and functions, for two reasons. First, Pascal is hardly used anymore for anything other than educational purposes (I mean the old *Turbo Pascal*; there are, of course, modern descendants like *Object Pascal*), and even that use is very limited now. So anyone wanting to get into the programming realm will likely pick some other language at some point, but all of the above will still be useful. Second, all this material is formally taught along with a real programming language, so a reader who does not have such a requirement in his or her formal courses, and/or wants to learn to program alone, is advised to follow some tutorial, textbook, or other relevant resource in order to learn the programming basics. I was taught these things via pseudocode in a pure algorithms and data structures course, along with a course on Turbo Pascal. Later on, I picked up *C*, which is by far the most influential, concise, and "lowest" high-level procedural programming language I have ever met, as is its descendant *C++*, albeit the latter is not only procedural and is far richer in many regards. I also want to point out that you may come across Pascal-like pseudocode that is somewhat different, as there is no absolute worldwide standardization, but in any case the basic elements and notions are the same. In the third part of this insights series, I'll try to give some facts, thoughts, and tips drawn from my own experience as a software developer about programming languages, programming, and the tools of the trade.

Now that the reasons for emphasizing the pseudocode representation so much have become clear, I'll give two examples of somewhat more involved (for the present context) algorithms, in order to see some of the elements of pseudocode in action.

*Example 2*

*Problem:* Write an algorithm that reads (i.e. takes as input) ten integers and finds their minimum (min) and maximum (max).

*Solution:*

Algorithm MAXMIN

Data //max, min, i, n:INT// ! variable definition and declaration

max##\leftarrow##-30000 ! give arbitrary initial values for max, min, using the trick

min##\leftarrow##30000 ! of starting max low and min high

i##\leftarrow##1 ! initialize counter for repetitions

while i##\leq##10 do ! while…do repetition for ten numbers

##\space####\space####\space####\space####\space##read n

if n##>##max then ! if…then conditional structures for comparing numbers

##\space####\space####\space####\space####\space##max##\leftarrow##n

end_if

if n##<##min then

##\space####\space####\space####\space####\space##min##\leftarrow##n

end_if

i##\leftarrow##i + 1 ! increase counter by 1 to proceed to the next iteration

end_while

write max,min ! display the max and min numbers

end MAXMIN

In the above example, the arbitrary values we give for min and max are meant to convey the *idea* of the trick. The specific values we pick for them depend on the range of integers we want to cover: evidently, if the input may contain values outside the interval set by the initial max and min, the trick will not work. A more efficient way is to write our algorithm as below (I'll write only the interesting part; the reader is encouraged to fill in the missing parts by comparing with the version above):

Algorithm MAXMIN

………………

i##\leftarrow##2

read n

max ##\leftarrow## n

min ##\leftarrow## n

while i##\leq##10 do

##\space####\space####\space####\space####\space##read n

……………..
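For comparison, the second, more efficient version of MAXMIN might look like this in Python (as a simplification of mine, it takes the ten numbers from a prepared list instead of the keyboard):

```python
def max_min(numbers):
    """Find the maximum and minimum of a non-empty list, initializing
    both with the first element (the more efficient version above)."""
    max_n = numbers[0]
    min_n = numbers[0]
    for n in numbers[1:]:   # the remaining nine reads of the pseudocode
        if n > max_n:
            max_n = n
        if n < min_n:
            min_n = n
    return max_n, min_n

print(max_min([7, -3, 12, 0, 5, 9, -8, 2, 11, 4]))  # (12, -8)
```

Initializing from the first element avoids the arbitrary-sentinel trick entirely, so the function works for any range of integers.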

*Example 3*

*Problem:* Write an algorithm which computes the sum ##S = 1 – 2 + 3 – 4 + 5 – … + 99 – 100##

*Solution:*

In order to make this more interesting, let's do it in two different ways (here I give only the interesting part of the pseudocode; the remaining elements can easily be filled in).

*1st way*

Split the sum into two sums (positive – negative)

sum1##\leftarrow##0

for i from 1 to 99 step 2

##\space####\space####\space####\space####\space##sum1##\leftarrow##sum1 + i

end_for

sum2##\leftarrow##0

for i from -2 to -100 step -2

##\space####\space####\space####\space####\space##sum2##\leftarrow##sum2 + i

end_for

sum##\leftarrow##sum1 + sum2

*2nd way*

sum##\leftarrow##0

sign##\leftarrow####\space##-1

for i from 1 to 100

##\space####\space####\space####\space####\space##sign##\leftarrow##sign##\cdot##(-1)

##\space####\space####\space####\space####\space##sum##\leftarrow##sum + (sign##\cdot##i)

end_for
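Both pseudocode versions translate directly into code. A quick Python check (again, just an illustrative sketch; the variable names mirror the pseudocode) confirms that the two ways agree:

```python
# 1st way: split the sum into positive and negative partial sums
sum1 = 0
for i in range(1, 100, 2):      # 1, 3, ..., 99
    sum1 += i
sum2 = 0
for i in range(-2, -101, -2):   # -2, -4, ..., -100
    sum2 += i
total_1 = sum1 + sum2

# 2nd way: flip a sign factor on every iteration
total_2 = 0
sign = -1
for i in range(1, 101):         # 1, 2, ..., 100
    sign = sign * (-1)          # +1 for odd i, -1 for even i
    total_2 += sign * i

print(total_1, total_2)  # -50 -50
```

The result is ##-50##, as expected: the 100 terms pair up into 50 pairs of the form ##(2k-1) - 2k = -1##.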

A couple of last things about representations. First, by writing pseudocode in the design phase, we can test the results with pen and paper and see whether our construct does what it is intended to do; otherwise, we'll end up with *logical errors* that may be extremely difficult to spot inside a real program. In this regard, it is very useful to create a table that holds the values of the variables and the counter(s) of the loop(s), execute the algorithm in our mind, and write down the results. Are they correct? As programmers become better, they will at some point start to use premade code for algorithms, in the form of libraries or in various other forms, but the above recommendations still pay off in the long run: there are many cases where we want to develop a version of an algorithm fitting specific needs that is not readily available, or to modify some premade code according to our needs. Second, the representations presented so far are all very simple because we have not used any data structures. In the second part of this series, I'll give some more involved examples.

**Correctness of algorithms**

Algorithms are mathematical procedures, and as such their *correctness* is something that can be (and is) checked and verified. We use math in order to prove that an algorithm always gives the correct result(s). Testing the algorithm with any number of different data sets is not sufficient for this, because it gives no guarantee that the algorithm always gives correct results, or even that it finishes its job at all. In order to prove the correctness of an algorithm in a strict way, two conditions must be satisfied: we must prove that the algorithm gives correct results in every case of execution, and also that its execution indeed terminates in every case. For the first, a very often used method is *mathematical induction*. Let's see an example of how this method works.

*Example 4*

*Euclid's algorithm* for the ##GCD## (Greatest Common Divisor) of two positive integers, a.k.a. ##HCF## (Highest Common Factor), is a very old one (dating from about 300 B.C.). It is based on the relations

##GCD(x,y) = GCD(y##, remainder of the division ##\frac{x}{y})## if ##y > 0##,

##GCD(x,y) = x## if ##y = 0##

where ##x##,##y## are any non-negative integers.

We'll use natural language in steps, because we are just interested in the proof of correctness:

while y##<>0## do

##\space####\space####\space####\space####\space##calculate the remainder of x/y

##\space####\space####\space####\space####\space##put the value of y to x

##\space####\space####\space####\space####\space##put the value of the remainder to y

##\space####\space####\space####\space####\space##**

write x

*Inductive hypothesis:* The ##n##-th time we reach the line **, ##GCD(x,y) = k##, where ##k## is the ##GCD## of the initial values of ##x## and ##y##.

*Induction basis:* Initially, ##x = ka## and ##y = kb## for two numbers ##a## and ##b## which don’t have any common factors. Let ##r## be the remainder of the division ##\frac{x}{y}##. Then ##x = my + r##. So, we have ##ka = mkb + r## (1) so ##r = k(a – mb)##. Consequently, ##r## must have ##k## as a factor. From (1) it also follows that ##\frac{r}{k}## has no common factor with ##b## otherwise ##a## would have this same factor but this is a contradiction as we have assumed that ##a## and ##b## don’t have any common factors. So, it follows that ##y## and ##r## have ##k## as highest common factor. So, the first time we reach the line ** we have ##GCD(x,y) = GCD(y##, remainder of ##\frac{x}{y}) = GCD(y,r) = k##.

*Inductive step:* From the above we also have that the ##(n + 1)##-th time we reach the line **, ##GCD(x,y)## is equal to the ##GCD## of the values of ##x##, ##y## at the ##n##-th time we reached this same line, which is the value ##k##. So we have this same value the ##(n + 1)##-th time, and we have proved that the inductive hypothesis holds true for ##n + 1##. Q.E.D.

So, we proved that every time we reach the line ** the result is correct. This guarantees that the algorithm is *partially* correct. We say “partially” because there is also the second condition I mentioned above: that we must also prove that the execution of the algorithm indeed terminates in any case. Up to here, we have no guarantee for this. In order to prove it, we notice that ##y## decreases at least by ##1## for each loop execution. For each subsequent execution, ##y## becomes equal to a number between ##0## and ##y – 1## so, finally, it reaches ##0## and the loop terminates. So, we have proved the second condition as well.
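The while-loop above maps directly onto code. Here is a minimal Python version mirroring the natural-language steps (written out by hand rather than using the built-in `math.gcd`, since the point is the algorithm itself):

```python
def gcd(x, y):
    """Euclid's algorithm: GCD(x, y) = GCD(y, x mod y) until y = 0."""
    while y != 0:      # the y <> 0 condition of the pseudocode
        r = x % y      # calculate the remainder of x / y
        x = y          # put the value of y into x
        y = r          # put the value of the remainder into y
    return x           # write x

print(gcd(48, 18))  # 6
```

The termination argument proved above is visible here: `y` strictly decreases toward `0` on every pass, so the loop always ends.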

**Classification of algorithms**

Algorithms can't be classified in any universal way. In a broad sense, they can be classified according to their *design pattern*, according to the *kind of problem they solve*, or according to their *complexity*. According to the first, they can be classified into *divide and conquer*, *greedy*, *dynamic programming*, etc., algorithms, so the way they solve a problem gives a hint about which kinds of problems each is best at. According to their complexity, they can be classified as *constant time*, *linear time*, *logarithmic time*, *quadratic time*, *exponential time*, etc. Algorithmic complexity is concerned with how fast or slow a particular algorithm performs.

Of particular interest for a beginning programmer are *sorting* algorithms. Sorting is the process of putting a list of objects (e.g. numbers, strings, characters) in ascending or descending order, according to some criteria. Sorting, besides being useful in itself, is also useful in giving us a sorted list of objects as an input to some other process.

The first algorithm of this kind often taught in an introductory CS context is *Bubble Sort*##^{[6]}##. This algorithm is easy to understand and implement but not particularly efficient for large data sets. Two very efficient sorting algorithms are *Merge Sort*##^{[7]}## and *Quicksort*##^{[8]}##, along with variations of them like 3-way Merge Sort and Quicksort with 3-way partitioning.
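As a taste of the simplest of these, here is a sketch of Bubble Sort in Python (the early-exit flag is a common textbook refinement, not part of the bare algorithm): repeatedly swap adjacent out-of-order elements until a full pass makes no swaps.

```python
def bubble_sort(items):
    """Sort a list in ascending order by repeatedly swapping
    adjacent elements that are out of order."""
    a = list(items)                  # work on a copy
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):   # the last i elements are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:              # a full pass with no swaps: already sorted
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

The nested loops make the quadratic behavior easy to see, which is exactly why Bubble Sort struggles with large data sets.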

In the second part of this series, I’ll refer to the *dynamic programming* strategy and some interesting cases of algorithms that follow this strategy.

**Analysis of algorithms**

In order to check the efficiency of an algorithm in a theoretical, scientific way, we perform the task of *analysis*; we can also use it to compare two algorithms regarding their efficiency. The term was coined by *Donald Knuth*. Using analysis we determine the *computational complexity* of an algorithm, i.e. the amount of time, space (memory/storage), and/or other resources necessary for its execution. For the complexity of algorithms, we use *Asymptotic Analysis*.

Can’t we just run our program on our computer and see its performance? Or implement two algorithms that we want to compare regarding their performance and just run the programs?

The answer is that this way we fail to take the input size into account. It may be that one (implemented) algorithm performs well for some input sizes but not beyond a certain size; or, when comparing the implementations of two different algorithms, the first performs better for some input sizes and the second for others. Also, if we run them on two different machines, the results may not be what we expect. So, in order to get rid of such specific details and evaluate the performance of algorithms according to their input size, we use Asymptotic Analysis. In this type of analysis, we analyze the running time or space usage of a program by estimating the time or space as a function of the input size. The asymptotic behavior of such a function ##f(n)## refers to the growth of this function as ##n## gets large. This gives us a good measure of the efficiency implied by the complexity of an algorithm, but asymptotic analysis is not perfect. If, for instance, we have two algorithms with the same order of growth whose running times differ by some constant factor, we can't tell which one is better, as we ignore constants in Asymptotic Analysis. Also, this analysis speaks about input sizes larger than some constant, but it may be that such an input size is never really given to our algorithm.

Asymptotic Analysis uses *asymptotic notation*##^{[9]}##, a mathematical representation of an algorithm's complexity:

Big – Oh ##(O)## notation is used to define the upper bound of an algorithm in terms of time complexity.

Big – Omega ##(\Omega)## notation is used to define the lower bound of an algorithm in terms of time complexity.

Big – Theta ##(\Theta)## notation is used to define a tight bound (both an upper and a lower bound) of an algorithm in terms of time complexity.
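A hands-on way to feel what "order of growth" means is to count the basic operations an algorithm performs as the input size grows. The following toy sketch (entirely my own illustration) compares a linear single pass with a quadratic all-pairs loop:

```python
# Rough illustration of growth rates: count the basic operations
# performed by a linear scan versus a quadratic pairwise comparison.

def linear_steps(n):
    steps = 0
    for _ in range(n):          # one pass over n items: O(n)
        steps += 1
    return steps

def quadratic_steps(n):
    steps = 0
    for _ in range(n):          # all pairs of n items: O(n^2)
        for _ in range(n):
            steps += 1
    return steps

for n in (10, 100, 1000):
    print(n, linear_steps(n), quadratic_steps(n))
```

Multiplying the input size by 10 multiplies the linear count by 10 but the quadratic count by 100, which is exactly the distinction the ##O(n)## and ##O(n^2)## notations capture.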

In the next part of this series, I’ll talk about *data structures*.

### Bibliography

Niklaus Wirth “Algorithms and Data Structures” Longman Higher Education 1986

Les Goldschlager and Andrew Lister "Computer Science: A Modern Introduction" Second Edition, Prentice-Hall 1988

### Resources

##^{[1]}## https://pdfs.semanticscholar.org/013b/f90f472e49c05263b90d9e36f8d2705e7fc7.pdf

##^{[2]}## https://www.cs.cmu.edu/~crary/819-f09/Hoare69.pdf

##^{[3]}## https://pdfs.semanticscholar.org/baf7/f3bd10c4e84d61e3b7eafbdbfb66bab367a9.pdf

##^{[4]}##https://en.wikipedia.org/wiki/Algorithm

##^{[5]}## https://en.wikipedia.org/wiki/Flowchart

##^{[6]}##https://en.wikipedia.org/wiki/Bubble_sort

##^{[7]}##https://en.wikipedia.org/wiki/Merge_sort

##^{[8]}##https://en.wikipedia.org/wiki/Quicksort

##^{[9]}##https://en.wikipedia.org/wiki/Big_O_notation

### Recommended Bibliography

Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein "Introduction to Algorithms" Third Edition, MIT Press 2009

Steven S. Skiena “The Algorithm Design Manual” Second Edition, Springer 2008

Jon Kleinberg, Éva Tardos "Algorithm Design" First Edition, Pearson 2005

Ronald L. Graham, Donald E. Knuth, Oren Patashnik “Concrete Mathematics: A Foundation for Computer Science” Second Edition, Addison-Wesley Professional 1994

### Recommended Resources

“Algorithms Design and Analysis” Part 1, Stanford Lagunita




I started reading the book *Concrete Mathematics: A Foundation for Computer Science*. The book has various quotes (many from students) in the margin. I like this algorithm for self-study on page 173. https://drive.google.com/open?id=0BxUZlTyD-AbqVTFITXdocExtdXZ1NnRjZzJKbGhpS0V6bWJz

How many Homework forum posters decide on a value for N of 1 or less? :-)

". . . I have put *Concrete Mathematics: A Foundation for Computer Science* in the Recommended Bibliography (co-authored by Donald Knuth) in order to point out the great importance of the relevant CS math for the beginner programmer."

After reading this article, I decided to buy *Concrete Mathematics: A Foundation for Computer Science*. I found a used one on Amazon for less than $40 (in great condition BTW). It arrived Friday, but I just started reading it. It's pretty impressive so far. I think it'll be a good resource to hang onto.

I'll also give my opinion about considering "go to" statements harmful in the third part of this insights series, which will be about programming.

General comments here.

I was never a fan of flowcharts, and encountered similar opinions at a few companies. Those companies were using a program that converted a simple type of English-like pseudo-code into flowcharts, but it turned out that it was easier to follow the pseudo-code used to generate the flowcharts than the generated flowcharts themselves, especially for larger ones.

Developing new algorithms is relatively rare, even for large projects. Typically, existing algorithms are used in a project, and much of the design phase for larger projects is to split up the work and define interfaces between components. The components will typically involve algorithms, but the entire team does not need to know every algorithm used in a project, as each team member mostly focuses on the component that team member is working on, while a few team members focus on the entire project.

In some cases, the programming environment takes care of most of the complicated part of a large project. For example, environments like Azure take care of issues like multi-processing, load balancing, garbage collection, … , allowing the team to write simpler code.

10 x=1

25 y=0

20 if x < 30000 then x = y + 1

30 y = y + x

40 print x

35 if x = 30000 break

45 goto 20

I guess this would be considered bad form.

"In the second part of this insights series … I'll try to give some important things about data structures and some more advanced things about algorithms."

Waiting for the second part :-)

Good Insight Article. Congratulations.

Everything I know about this subject came from *The Art of Computer Programming, Volume 3: Sorting and Searching*. Even though its scope was limited to sorting and searching, his whole approach to algorithms was IMO the canonical example of the art. I read and re-read it so many times that I could cite page numbers from memory like Bible students cite passages. It would be nice to see it included in your bibliography.

Knuth used pseudocode extensively, and many times I wrote code in different programming languages based on his pseudocode. I think that is the superior way to document any algorithm.

Thank you very much. For me, as for most people involved with CS and programming, Donald E. Knuth is more than a giant in the fields of Mathematics and Computer Science, and I really admire his great contributions to these fields. I have the complete series of *The Art of Computer Programming* and they're really *classic*; I can't find another way to describe their great value and their beauty. In this first part of my insights, I have put *Concrete Mathematics: A Foundation for Computer Science* in the Recommended Bibliography (co-authored by Donald Knuth) in order to point out the great importance of the relevant CS math for the beginner programmer. In the second part of this insights series, where I'll try to give some important things about data structures and some more advanced things about algorithms, I'll definitely include *The Art of Computer Programming* in the bibliography. In my opinion, Knuth's books are best for a mid-level programmer and beyond, but in any case the great influence of his work and books is present, both explicitly and implicitly, in other books, such as the ones I've included in the bibliography in this first part.

So I learned flowcharts, way back when I first took a programming course (Pascal, actually in college). I taught myself BASIC on an Apple II+ from a book, and with some friends. I think maybe that book also used flowcharts to show how the programs worked.

I've heard of pseudocode, but not actually used it. It looks like a good tool to use when developing the algorithm to be programmed.

As far as that min/max algorithm is concerned: This is a special case where there are exactly 10 values and they are being processed as they are being received serially. But to handle the min/max more fluently:…Yes, that's the addition I did to the insight as an alternative more efficient way presenting only the interesting part and leaving the rest to be filled in by the reader by comparing with the initial algorithm of

Example 2.However, for a beginning programmer it is useful to see both algorithms.It depends on what you are going to be doing in future years. If it's going to be basically the same algorithms over and over again, then you're all set.I dont know what I ll do in future years. I am a physics student so coding is kind of a language that we should know. But I want to learn more about it so I can be better. Other then the school homeworks I manage to solve 30 project euler questions. But as I said I know just the basics of the coding, it would be better to go deeper not deep as computer scientist or a great programmer but learning the basics of the coding or at least mastering most of the stuff…Like lets say in numerical analysis or in some physics problems.

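The "more fluent" min/max handling quoted above originally ended with a pseudocode block that did not survive in this copy of the thread. Presumably it initializes min and max from the first value received rather than from fixed sentinel values, which avoids the wrong-result cases discussed later in this thread. A hypothetical reconstruction in the article's Pascal-like notation (the exact assignment and read/write keywords are assumptions):

```
read n
max ← n
min ← n
for i from 2 to 10
    read n
    if n > max then
        max ← n
    end_if
    if n < min then
        min ← n
    end_if
end_for
write max, min
```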

> In the future years or etc is it really important or its just a tool that we may use or may not use because I dont feel like to use it.

It depends on what you are going to be doing in future years. If it's going to be basically the same algorithms over and over again, then you're all set.

But if you never write a program that's bigger than what you can hold in your head in one moment, you are not programming at the professional level – or anything close. Things like pseudo-code and flowcharts are the tip of the iceberg. For large systems, you will be dealing with pages of requirements and systems with several components – and a team of developers. The entire project needs to be done very systematically. Perhaps the most important documents will be interface documents – where you describe what the rules are when one system component uses or communicates with another.

When you do flow charts or pseudo code "in the real world", it is generally for one of these reasons:

1) You are writing a design document and you need to explain what the system needs to do. Normally, design documentation does not include the depth that is reached in the coding process – but in some cases it does.

2) You are writing the code for the first time – and you need to work out precisely what needs to happen in a particular module. In this case, you may be drawing things out on a white board or that famous "back of a paper napkin".

3) You are documenting code you have just written. In this case, the pseudo-code will address the issues that need to be communicated – not necessarily every detail.

4) You are changing code or hunting down a bug. Once again, this is "back of the napkin" type stuff. You are writing things out because you want to condense what is in the code to the essential elements of the task at hand.

5) You are "programming" a person. That is, you are providing operator instructions.

I'll totally agree with what @Mark44 said in post #11. The reason that I began this series of Insights for programming with algorithms is that they are extremely important for anyone who wants to be a programmer, or at the very least wants to learn the fundamentals of programming. Pseudocode is a very useful and helpful (for me, indispensable) tool to this end. Every beginning programmer is tempted to sit at the keyboard and write the program directly, without first thinking about the algorithms, and a little later about the basic data structures needed. Well, for simple small programs there is really not much to think about. But when things start to get tougher, if you try to write your program directly, you'll have to think about the most efficient strategy you can follow, about the mathematical methods you need to utilize, and about the syntax of the language you use, all at the same time. Now, beyond the syntax errors that the compiler will complain about, various logical errors will inevitably escape your attention, all because you won't have a design of the program in place. Conversely, if you start by designing first, i.e. pick the algorithm(s) you need, work out how to do the various computations in the most efficient way, and finally write some pseudocode tailored to the specific programming language you'll use, you'll have almost entirely solved the problem for which you want to write code, and you'll just have (maybe) to make some minor corrections and implement it in some programming language. The difference between these two ways of writing your code is not so evident until you have to deal with some complicated problem. I highly recommend taking a look at the recommended resource I've put in the article.

I see your point, and yes, that's exactly me. I enrolled in the course; I am not sure I can finish it, but I'll try my best to do so. I also need to improve my coding skills, but I guess I should open another thread for that.

> I was given 4 projects to do. First we were supposed to write an algorithm and then start writing the code. But I started with the code and then wrote the algorithm, which was kind of useless. My question is: for programming, is it really important to write an algorithm first? Well, even in the coding part I am thinking of an "algorithm" to write the code, but I am just not putting it into steps. In future years is it really important, or is it just a tool that we may or may not use?

In the article, the end of an "if-else" statement is written as "end_if", while the end of a loop is written as "end_for". What is the system for nested loops and nested conditional statements?

The purpose of using

end_if, end_while, and end_for is to denote where the end of a conditional or repetition structure is, while in real Pascal, when there is more than one statement, we use "BEGIN" and "END" to denote the beginning and the end of the series of statements. The only exception is the "REPEAT…UNTIL" repetition structure, where we don't use "BEGIN" and "END". Now, if you have nested conditional or repetition structures in pseudocode, you use the same "end_if", "end_while", "end_for" for the end of each such construct, and you align them appropriately in order to show explicitly which construct each "end_…" refers to.

> In any OOP based language, we have functions. You didn't write too much about functions. So, how are functions mentioned? Is it written like "start function <function_name>()" or something like that? How do I call a function in a later step of the algorithm?

In an OOP language we prefer to call them methods, because they refer to specific objects, and we say object x has methods a, b, c, d… In contrast, the functions of, e.g., a procedural-type language are constructs which take arguments in a more agnostic, i.e. abstract, way. In any case, I didn't write about more advanced constructs like procedures and functions (I mention these because I've presented Pascal-like pseudocode) for the two reasons I give in the article:

> I have omitted more advanced constructs – like procedures and functions, for example – for two reasons. First, Pascal is almost not used anymore (I mean the old Turbo Pascal; of course there are modern descendants like Object Pascal) for anything other than educational purposes, and even that is very limited now. So, anyone wanting to get into the programming realm will likely pick some other language at some point, but all the above will still be useful. Second, all this stuff is formally taught along with a real programming language, so it is advisable that a reader who does not have such a requirement in his / her formal courses and / or wants to learn programming on his / her own should follow some tutorial or go through some textbook or other relevant resource, in order to learn the programming basics.

But if you want to write pseudocode for, e.g., a procedure, you can do it like this:
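The pseudocode block that followed here did not survive in this copy of the thread. Based on the "<real_parameters_list>" remark in the EDIT note at the end of this post, a plausible reconstruction of the procedure declaration in the article's Pascal-like notation (keyword names are assumptions):

```
procedure <procedure_name> (<formal_parameters_list>)
    <statements>
end_procedure
```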

and for calling it
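The call example is likewise missing from this copy; following the same assumed notation, it would plausibly be written as:

```
call <procedure_name> (<real_parameters_list>)
```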

The representation of a function in pseudocode is done in a similar way – taking into account the differences between procedures and functions in Pascal.

Now, a note for the above: As I also write in the article, there is no absolute standardization for pseudocode worldwide, as this is not a real programming language. I wrote it the way I was taught it many years before but again the basic ideas are still the same in any type / style of Pascal-like pseudocode.

EDIT: Also a note about "<real_parameters_list>" above: here, with "real", I mean the specific arguments used for calling the procedure.

> I had two questions:

For #1, keep in mind that @QuantumQuest is writing about algorithms and pseudocode, and that there are no etched-in-stone guidelines or rules for this. You could do something like this, though:
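Mark44's example block is missing from this copy of the thread. A hypothetical sketch of what "something like this" might look like for nested structures in the article's notation, with each end_… aligned under its opening keyword (the variables and the initialization of max are assumptions for illustration):

```
max ← 0
for i from 1 to 10
    read n
    if n >= 0 then
        if n > max then
            max ← n
        end_if
    else
        write "negative value read"
    end_if
end_for
```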

For #2, the intent of the article was an introduction to algorithms, aimed at beginning programmers, who likely don't know much about functions just yet.

In an OOP language, you also have objects, which have their own data and their own functions. All procedural languages have functions, at least by some name, so this isn't limited to OOP languages. If you wanted to write an algorithm that used functions, you could say "call Fun1()", and then have a separate part of the algorithm that described what Fun1() was supposed to do, including parameters passed to it, and any value it returned. I suspect that was well beyond the scope of what QuantumQuest was trying to do.

> First we should have to write an algorithm and then we should start to write the codes. But I started with the codes and then I wrote the algorithm which it was kind of useless.

In several of your posts about a month ago, it was apparent that you started by writing code without a clear idea of what your algorithm should be. You could have floundered around a lot less if you had started with a clear idea of what the steps needed to be — IOW, an algorithm. Several people commented that you clearly weren't starting with an algorithm.

In a programming class I had many years ago, the instructor said, "The sooner you sit down at the keyboard, the longer it will take to write your program."

> My question is for programming is it really important to write an algorithm first ?

Yes, especially for any non-trivial program.



In order to be clear about Example 2, I've added a note below the example about the values of min and max, and also an alternative way.

> I mean in these two cases one of the outputs of the algorithm at the end (the max or the min) will be wrong. For example, suppose you apply your algorithm to find the maximum and minimum of the ten numbers 38000, 42000, 35000, 43000, 61000, 32000, 47000, 33000, 54000, 73000. These numbers are all greater than the initial value 30000 of min, so the test n < min in your loop never passes and the variable min is never updated. Your algorithm would incorrectly return a minimum of 30000 in that case. Conversely, if the ten input numbers are all less than -30000, then the minimum will be found correctly but the maximum will be incorrectly reported as -30000.

Yes, I see what you mean, but what you refer to is within the purpose of the example – that's the way I was taught it too, many years ago. If you give numbers that are all greater than ##30000##, then you obviously see that ##30000## is less than all of them from the outset, and the same holds true for ##-30000##, i.e. if you give numbers which are all less than it, then ##-30000## is bigger than all of them from the outset. Then the reader will see what went wrong and will have to stretch the range of numbers for ##max##, ##min##, as this is just for educational purposes and not a real-world situation. Also, we're talking about beginning programmers. In any case, I don't present the values of ##max##, ##min## as "hardwired" values.

> Well it's not a big deal but my point here was: I think the reduction to computing ##(-1) \times 50## is something that a beginner could not only understand, but also quite likely notice for themselves when reading your problem statement. Then they might wonder why you don't say anything about it and proceed to solve it using loops.

I appreciate your opinion, but I don't agree.

> I mean in these two cases one of the outputs of the algorithm at the end (the max or the min) will be wrong. […]

These are good points, but a comment in the algorithm description could mention that it is assumed that all input values are between the two extreme values.

> Well it's not a big deal but my point here was: I think the reduction to computing ##(-1) \times 50## is something that a beginner could not only understand, but also quite likely notice for themselves when reading your problem statement. […]

The algorithm as presented is fine, IMO, since the focus is on adding the numbers, not on noticing that this particular finite sum could be added by grouping adjacent terms. A comment that, for the example in question, another method could be used that would simplify things wouldn't hurt, though. In my experience of teaching college calculus for nearly 20 years, I can assure you that some beginners would not think of the simplification that you used to get the sum.

Thanks for the comment, but I cannot really see your point about giving all numbers greater than ##30000## or less than ##-30000##. For each iteration there is a current maximum and minimum. How is it that one of the results will be wrong? You can give the numbers any way you wish.

> I mean in these two cases one of the outputs of the algorithm at the end (the max or the min) will be wrong. […]

Yes, you can do it this way, but don't forget the purpose of the tutorial. I present two methods for a beginner programmer, where he / she can see how to (pseudo)code the calculation of a sum. The first is more obvious, but the second way is smarter in algorithmic terms, i.e. shorter and more concise, so more efficient.

> Well it's not a big deal but my point here was: I think the reduction to computing ##(-1) \times 50## is something that a beginner could not only understand, but also quite likely notice for themselves when reading your problem statement. Then they might wonder why you don't say anything about it and proceed to solve it using loops.

Great first Insight QQ!
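As a side note, the two approaches to the alternating sum discussed in this thread (splitting it into positive and negative parts versus folding the sign into the expression) are easy to sanity-check in a real language. A small Python sketch of my own, not taken from the article:

```python
# Method 1: compute the positive (odd) and negative (even) parts
# of 1 - 2 + 3 - 4 + ... + 99 - 100 separately, then combine them.
pos = sum(range(1, 101, 2))      # 1 + 3 + ... + 99
neg = sum(range(2, 101, 2))      # 2 + 4 + ... + 100
method1 = pos - neg

# Method 2: fold the alternating sign into the summand itself,
# so a single loop handles both kinds of terms.
method2 = 0
sign = 1
for i in range(1, 101):
    method2 += sign * i
    sign = -sign

print(method1, method2)  # both print -50
```

Both agree with the pairing argument given later in the thread, where the 50 groups of (odd - even) each contribute -1.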

> If you do this and all the inputs are more than 30000 or if they are all less than -30000 then one of the results will be wrong.

Thanks for the comment, but I cannot really see your point about giving all numbers greater than ##30000## or less than ##-30000##. For each iteration there is a current maximum and minimum. How is it that one of the results will be wrong? You can give the numbers any way you wish.

> Also, maybe this was left out because it is supposed to be obvious, but I think the way any good programmer/algorithm designer/physicist/mathematician/engineer etc. would do…

Yes, you can do it this way, but don't forget the purpose of the tutorial. I present two methods for a beginner programmer, where he / she can see how to (pseudo)code the calculation of a sum. The first is more obvious, but the second is smarter in algorithmic terms, i.e. shorter and more concise, so more efficient, where the reader sees how to include the sign as an expression and use it, and so eliminate the need for splitting the sum.

Er,

If you do this and all the inputs are more than 30000, or if they are all less than -30000, then one of the results will be wrong.

Also, maybe this was left out because it is supposed to be obvious, but I think the way any good programmer/algorithm designer/physicist/mathematician/engineer etc. would do ##1 - 2 + 3 - 4 + \dotsb + 99 - 100## is $$(1 - 2) + (3 - 4) + \dotsb + (99 - 100) = \underbrace{(-1) + (-1) + \dotsb + (-1)}_{50 \text{ terms}} = -50\,.$$