[Fortran] Naming conventions for integers

  • Context: Fortran
  • Thread starter: anorlunda
  • Tags: Fortran, Integers

Discussion Overview

The discussion revolves around the naming conventions for integers in programming, specifically in FORTRAN, and their origins. Participants explore the implications of these conventions in both programming and mathematical contexts.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants note that in FORTRAN, variable names starting with I, J, K, L, M, N are treated as integers by default, while other letters represent real numbers.
  • One participant suggests that this convention may have been established to facilitate programming for scientists and engineers, aligning with common mathematical practices.
  • Another participant mentions that the choice of I-N for integers could be linked to the first letters of the word "integer."
  • Some express uncertainty about the historical origins of the convention, with references to its use in mathematics prior to FORTRAN.
  • A participant recalls that their background in pure mathematics led to initial confusion with the I-N convention, as they were accustomed to different uses of these letters.
  • There is mention of the historical context of variable naming conventions in mathematics, including references to René Descartes, though its relation to the I-N convention remains unclear.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the origin of the I-N naming convention, with multiple competing views and uncertainties expressed throughout the discussion.

Contextual Notes

Some participants highlight that the convention may vary in different contexts, and there is a lack of explicit historical documentation regarding its origins.

anorlunda (Staff Emeritus, Science Advisor, Homework Helper, Insights Author)
Long ago I learned programming in FORTRAN. I got used to the convention that names starting with I, J, K, L, M, N were INTEGER while all other letters were REAL. I thought it was a convention of FORTRAN only. Since then, I have come to realize that the same convention is widely used in science and math independent of computer programming. But I do not recall ever being explicitly taught any such convention.

My question: what is the origin of this convention?

I'm posting it here under math as a guess as to the right forum.

P.S. Wikipedia mentions this convention under "naming conventions (programming)", but it does not mention the origin.
 
In Fortran you don't necessarily have to declare variables. Just use them and voila! they exist. By default, a variable whose name starts with I through N is an INTEGER; a variable whose name starts with A through H or O through Z is REAL.

Note: IMPLICIT NONE turns off this default convention and forces a programmer to declare all variables. That's the recommended practice for new Fortran code. There's a lot of very old Fortran code still in use. Getting rid of the default convention would mean that a lot of this old code would have to be rewritten.
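The default rule is easy to see in a minimal sketch (the program and variable names here are made up for illustration, in classic fixed-form style):

Code:
C     Minimal sketch of implicit typing (fixed-form Fortran).
C     KOUNT starts with K, so it defaults to INTEGER;
C     TOTAL starts with T, so it defaults to REAL.
      PROGRAM IMPDEM
      KOUNT = 7
      TOTAL = KOUNT / 2.0
      PRINT *, KOUNT, TOTAL
      END

With IMPLICIT NONE added, both assignments would be rejected at compile time until KOUNT and TOTAL are declared explicitly.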
 
Why do you think such a convention was established for FORTRAN in the first place?

The letters I-N have commonly been used in matrix notation and elsewhere for index variables, dummy variables, exponents, etc. Since FORTRAN was designed as a FORmula TRANslator, making the letters I-N default to integer was, I think, a convenience meant to attract scientists and engineers to FORTRAN: less fussing with variable types when transcribing common mathematical formulas and procedures.
 
I am not a mathematician or a historian, and, yes, the first time I ever heard that variables representing integers start with the letters I through N was when I learned Fortran. Whether I might have used those letters for integers before learning Fortran is possible; then again, I started school quite a few years AFTER Fortran had been invented.

Needless to say, the I-N convention was just a convenience for implicitly typed variables.

At the top of many old Fortran programs there is an IMPLICIT statement declaring which initial letters map to which types: I-N for integers, C for complex, everything else for reals, or something like that.
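The kind of statement described above looks like this (a sketch of the classic form, not taken from any specific program; note the letter ranges must cover A-Z without overlap):

Code:
C     Spell out the typing by initial letter:
C     I-N integer, C complex, remaining letters real.
      IMPLICIT INTEGER (I-N), COMPLEX (C), REAL (A-B, D-H, O-Z)

Writing the mapping out explicitly like this documents the convention even where it merely restates the defaults.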

By the way, nobody has explicitly mentioned it: what I learned in Fortran is that the (inclusive) range I to N was chosen to represent integers because it can be written using exactly the first two letters of the word INteger itself.
 
anorlunda said:
My question: what is the origin of this convention?

Today, the convention is to give variables meaningful names. The only exception to the rule is loop indices.

For example...
Code:
area = height * width;

instead of

Code:
a = h * w;
 
Sure, but it does not say anything about the I-N convention, which is what the OP was wondering about.
 
I think the convention started with FORTRAN. Coming from a pure math background, I vaguely remember having a hard time with that convention at first. Before that, i and j denoted the imaginary unit of complex numbers to me. m and n were more commonly used for integers; n was a natural number.
 
FactChecker said:
I think the convention started with FORTRAN.
It predates Fortran. For example, sums and products often use i, j, k for the iterating index and n (plus m and sometimes l) for the limiting values. i, j, k, ... are also used as indices / suffixes in generic notation for the terms of a polynomial, matrix, set, ... .
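As a concrete illustration of that pre-Fortran usage (a generic sketch, with a_i, b_k, and A standing for arbitrary terms and an arbitrary matrix):

Code:
% i, k index the terms; n, m bound the ranges
\sum_{i=1}^{n} a_i , \qquad \prod_{k=1}^{m} b_k , \qquad
A = (a_{ij}), \quad 1 \le i \le m,\ 1 \le j \le n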
 
