Trying to decide which programming language I want to learn

Summary: Choosing a programming language to learn can be challenging, especially for someone with a background in older languages like assembly, Fortran, and Pascal. C# and C++ are considered for firmware design, while Python is favored for its ease of use and its relevance to writing games with the grandchildren. C++ is noted for its speed and compactness, making it suitable for game programming, but it may require more effort to learn than C#. Resources like Visual Studio for C# and various online tutorials can help beginners get started, while microcontroller programming can be explored through platforms like Arduino and Raspberry Pi. Ultimately, the choice should align with personal interests in scientific or gaming applications.
  • #61
Jenab2 said:
If you have already learned a programming language, then learning another should be easy. I also learned BASIC, FORTRAN, and PASCAL in my youth. When I learned BASIC, I also had to learn coding logic. After BASIC, I knew coding logic, so learning the other languages was only a matter of becoming familiar with new syntax.

These are all procedural programming languages that are very similar to each other though.

I think there are broadly two reasons to learn a new programming language. The first is that you want to get into or learn more about some specific kind of programming, e.g. systems programming, video game programming, scientific computing, web development, etc. There it's fairly straightforward: you want to look up what programming languages and libraries people in these domains use and start using them yourself.

The second is to learn more about programming and different kinds of programming concepts and approaches in general. What you want to do there is learn some programming languages that are very different from each other. On that:

1) There are a lot of good "single issue" programming languages that concentrate on doing one thing or a few related things well that you can learn a particular programming style from. Some potentially interesting languages I'm aware of:
  • ML, Haskell, and similar languages for their pure functional programming approach and the strict static-type inferencing system that this makes possible.
  • Smalltalk, for object-oriented programming.
  • Erlang, for concurrency.
  • Prolog, for its relation/query-driven approach to programming.
  • Forth, for its stack-driven model of programming.
  • Scheme, for its minimalism and extensibility.
  • I'd also include C here, since its model of computing is basically just the von Neumann architecture. (C's features are all meant to map straightforwardly to the basic resources and operations supported by a typical computer consisting of a processor and memory. All of C's basic data types are meant to fit in one or at most a few processor registers.)

2) Most "mainstream" programming languages roughly fall somewhere on a spectrum between C and Lisp in terms of their features and programming styles they support. Java, C#, and C++ are closer to the C end while the more popular dynamic languages (Python, Ruby, Javascript, etc.) are closer to the Lisp end.

Concentrating on Python (it's the language of its "kind" that I happen to be most familiar with) then if you look past its syntax and look at its features and major decisions about how it does things -- supports interactive programming, strong but dynamic typing, all variables are references, automatic memory management, built-in easy-to-use aggregate data types, exception handling, first class functions, supports multiple programming styles, etc. -- then it looks rather like a simplified Lisp dialect but without Lisp's metaprogramming capabilities. AI researcher Peter Norvig wrote a fairly detailed comparison of Python and Lisp here; his reply to this Quora question also succinctly explains what different Lisp programmers might like or dislike about Python.
 
  • #63
yungman said:
I have a question. This is the program:

#include <iostream>
int main()
{
    std::cout << "Hello Buggy World" << std::endl;
    std::cout << "this is a test" << std::endl;
    std::cout << "test again" << std::endl;
    return 0;
}

Is the function main() defined in <iostream>? Because if I change it to some other name, the build fails.

I am learning to define a function inside the program. I thought main() was just defined by my program, but I guess that's not true.
Every C and C++ program has to have a function named main(). This serves as the entry point for the program. It is not defined in any header -- your code is providing the definition of this function.

The main function can appear in another form with two arguments, which makes it possible to pass arguments to main. If you want to learn more about this feature, do a web search for "C arguments to main".
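For illustration, here is a minimal sketch of that two-argument form (argc holds the number of command-line arguments, argv their text; argv[0] is typically the program name):
C++:
#include <iostream>

int main(int argc, char *argv[])
{
    // argv[0] is usually the name the program was invoked with
    std::cout << "Program name: " << argv[0] << std::endl;

    // any further command-line arguments follow in argv[1] .. argv[argc-1]
    for (int i = 1; i < argc; ++i)
        std::cout << "Argument " << i << ": " << argv[i] << std::endl;

    return 0;
}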

If you post code, it's a good idea to use code tags. They look like this:
C:
<< Your code >>
The above tag can be used for C or C++.
Also, you can eliminate some typing by a using statement, like so.
C:
#include <iostream>
using std::cout;
using std::endl;

int main()
{
    cout << "Hello Buggy World"<<endl;
    cout << "this is a test" << endl;
    cout << "test again" << endl;
    return 0;
}
 
  • #64
yungman said:
This is my first one. I am stoked.
Congratulations!

I can sense your struggles, because I started by programming in FORTRAN in college and graduate school (mid 1970s through mid 1980s), although not at the microprocessor level. Then I learned Pascal so I could teach intro programming courses in it, then switched those courses to C++ in the mid 1990s, and stopped teaching them in the mid 2000s.

For introductory teaching and learning, I've always preferred a simple command-line interface. In my C++ days, we used the Unix command line, and the basic commands for compiling, linking and running simple programs. In more advanced classes (taught by other people), students were introduced to industrial-strength IDEs: Eclipse in our case, Visual Studio in your case. The only one I've dabbled with is Xcode for MacOS.

IDEs are valuable and even essential for managing complex projects, but IMHO they get in the way when you're starting to learn programming. You spend more time trying to figure out where everything is, than on the details of the programming language.

For my simple hobby-level programming, I still prefer to use the Unix command line in a MacOS Terminal window, or an xterm window in Linux.
 
  • #65
This needs a large amount of emphasis!
jtbell said:
IDEs are valuable and even essential for managing complex projects, but IMHO they get in the way when you're starting to learn programming. You spend more time trying to figure out where everything is, than on the details of the programming language.
 
  • #66
jtbell said:
IDEs are valuable and even essential for managing complex projects, but IMHO they get in the way when you're starting to learn programming. You spend more time trying to figure out where everything is, than on the details of the programming language.

For my simple hobby-level programming, I still prefer to use the Unix command line in a MacOS Terminal window, or an xterm window in Linux.
That's all very well, but the OP is using Windows and doesn't have any experience at the command line. Python (or more accurately Python dependency management using conda and especially pip) can be a real struggle on the Windows command line, due mainly to admin/non-admin user issues and PATH problems, even if you know what you are doing. For a Windows user, Anaconda or PyCharm takes away this problem and means you can concentrate on your code.

Edit: mixed up threads, this one is about C++ not Python - the same applies though, compiling C++ in Visual Studio is much easier than the Windows command line.
 
  • #67
Mark44 said:
Every C and C++ program has to have a function named main(). This serves as the entry point for the program. It is not defined in any header -- your code is providing the definition of this function.

The main function can appear in another form with two arguments, which makes it possible to pass arguments to main. If you want to learn more about this feature, do a web search for "C arguments to main".

If you post code, it's a good idea to use code tags. They look like this:
C:
<< Your code >>
The above tag can be used for C or C++.
Also, you can eliminate some typing by a using statement, like so.
C:
#include <iostream>
using std::cout;
using std::endl;

int main()
{
    cout << "Hello Buggy World"<<endl;
    cout << "this is a test" << endl;
    cout << "test again" << endl;
    return 0;
}
What are tags? I searched Google and still don't quite get it.
[Attached screenshot: C++ program 1.jpg]

This is the program I am working with. It looks like what you show, but when I do Ctrl-C and copy it into the post, I lose all the color and indentation. This is a screen capture instead of a copy, so you can see it looks like what you have.

I still don't know what tags are.
 
  • #68
jtbell said:
Congratulations!

I can sense your struggles, because I started by programming in FORTRAN in college and graduate school (mid 1970s through mid 1980s), although not at the microprocessor level. Then I learned Pascal so I could teach intro programming courses in it, then switched those courses to C++ in the mid 1990s, and stopped teaching them in the mid 2000s.

For introductory teaching and learning, I've always preferred a simple command-line interface. In my C++ days, we used the Unix command line, and the basic commands for compiling, linking and running simple programs. In more advanced classes (taught by other people), students were introduced to industrial-strength IDEs: Eclipse in our case, Visual Studio in your case. The only one I've dabbled with is Xcode for MacOS.

IDEs are valuable and even essential for managing complex projects, but IMHO they get in the way when you're starting to learn programming. You spend more time trying to figure out where everything is, than on the details of the programming language.

For my simple hobby-level programming, I still prefer to use the Unix command line in a MacOS Terminal window, or an xterm window in Linux.
So far, VS is behaving for me. I am on chapter 3 of the book and I have played with quite a few programs by creating new projects in VS and typing in the lines, experimenting by changing and adding my own lines. So far (knock on wood) it's... I dare to say, smooth! (I hate saying this!)

So far I have dealt with console cout and cin to input variables, display strings of words, perform simple arithmetic and display in cmd.exe, and with creating a function like "int customfunction()" and calling it from main(). So far so good, no questions yet... knock on wood.
 
  • #69
I spoke too soon, I do have a question. I am going through all the examples in the book, and the book tends to build each example on the last one, meaning the next example uses something like 90% of the code of the previous one. It would be very convenient to save the program from the last example and rename it as a new program so I don't have to retype the whole thing.

I look at "file", there is no "Save as" option that I can save the program in another name. Is there any way to do this?

Thanks
 
  • #70
yungman said:
I still don't know what tags are.
@Mark44 was referring to bbcode tags (the BBcode Guide is linked at the bottom left of PF pages, next to the LaTeX Guide link), which are a kind of markup used on the forums to assign attributes to parts of a post; the suggestion was to wrap your code between [code] and [/code] tags like this:
Mark44 said:
If you post code, it's a good idea to use code tags. They look like this:
[code=c]
<< Your code >>
[/code]
The result would be:
C:
<< Your code >>
 
  • #71
yungman said:
I spoke too soon, I do have a question. I am going through all the examples in the book, and the book tends to build each example on the last one, meaning the next example uses something like 90% of the code of the previous one. It would be very convenient to save the program from the last example and rename it as a new program so I don't have to retype the whole thing.

I look at "file", there is no "Save as" option that I can save the program in another name. Is there any way to do this?

Thanks
It's called "Save File As" in MS Visual Studio.

If you use Notepad++, you can choose "Save As" or "Save a Copy As"; the latter lets you make multiple versions while keeping the file open under the prior name.
 
  • #72
sysprog said:
It's called "Save File As" in MS Visual Studio.

If you use Notepad++, you can choose "Save As" or "Save a Copy As"; the latter lets you make multiple versions while keeping the file open under the prior name.
Actually I was referring to saving the whole project. Yes, there is a way to save the .cpp, but that only saves one file; I have to go in and create a new folder and put it in before working with the .cpp. Is there a way to save all the source files, like when you create a new project?

Thanks
 
  • #73
yungman said:
Is there a way to save all the source files, like when you create a new project?
File -> New -> Project from existing code

But I wouldn't bother for this purpose; do you really need to save your "Hello World" code for posterity? Just work through the early exercises, modifying as you go.
 
  • #74
pbuk said:
File -> New -> Project from existing code

But I wouldn't bother for this purpose; do you really need to save your "Hello World" code for posterity? Just work through the early exercises, modifying as you go.
No no, I'm way past that, from two days ago; this is about writing a local function and how to declare global variables. It's like the program shown in post #67. I'll try "Project from existing code".

thanks
 
  • #75
This is just a comment, but I feel very strongly about this. In my day, we did everything in hex; it is so much more intuitive. Studying ASCII and reading the book reminds me that now they use decimal numbers, not hex or binary anymore. In my day, we were trained to work in hex or binary because that's how the computer thinks; you can actually see the bits and what the signals are. For hardware, it is so important to think in hex. Decimals don't mean a thing. I don't know why it's so hard for people that they have to change to decimal, particularly when doing firmware (which is a big, big field). What is 65535? If it is 0FFFFH, you know right away it's sixteen 1s! I just hate it when I design the controller hardware and the programmer talks to me in decimal. Tell me the hex and I immediately know what signal is what.

I used to program in machine language for the Z80 and just typed away in hex to test hardware. People really need to toughen up and stick with hex; if you can't think and work in hex, there might be another career that suits you better.
 
  • #76
yungman said:
Studying ASCII and reading the book reminds me that now they use decimal numbers

Most programming languages have a syntax for expressing numbers in hex instead of decimal. For example, in Python you can say 0xFFFF instead of 65535.
 
  • #77
PeterDonis said:
Most programming languages have a syntax for expressing numbers in hex instead of decimal. For example, in Python you can say 0xFFFF instead of 65535.
I know the programmers I worked with all these years use C (I don't know about C++ or whatever), and they talk decimal. I sure hope people who use these languages for firmware work in hex.

I am reading the book on short, long, and long long, and it gives all the decimal ranges. I had to search and finally found they are just 2 bytes, 4 bytes, and 8 bytes respectively! Just say it! It's not that hard to learn, suck it up!
 
  • #79
jtbell said:
Same in C++. There are also similar notations for octal and binary integer literals.

https://en.cppreference.com/w/cpp/language/integer_literal
The problem is, if you give people the option, they take the easy way and use decimal.

It's like writing to an 8-bit port on an 8-bit controller to control 8 different events. If I write 05AH, I know immediately it's 01011010, and I know right away what condition I am driving. If you tell me 90 in decimal, what is that? Hell, I had to use Google to translate 05AH to get 90!
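For what it's worth, C++ lets you write a literal in whichever base reads best; here is a minimal sketch (the 0b binary form requires C++14 or later):
C++:
#include <iostream>

int main()
{
    unsigned char port = 0x5A;        // hex literal, same value as decimal 90
    unsigned char same = 0b01011010;  // binary literal (C++14): the bit pattern is explicit

    std::cout << (port == same) << std::endl;  // prints 1: both are the same value
    return 0;
}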
 
  • #80
As has been mentioned, most programming languages support writing and printing integers in different bases (decimal, hex, and octal are the ones usually supported). You're meant to just use whichever one is most natural for what you're doing. So typically you'd use hexadecimal for low-level computer related things (e.g. memory addresses) where powers of 2 tend to have special significance or you see integers as blocks of bits, and decimal for more everyday things (e.g., number of employees on your payroll).

C++ can print in different bases, although the way iostream handles this is really ugly:
C++:
#include <iostream>

using namespace std; // bring the std names into scope so the std:: prefix isn't needed below

int main()
{
    int x = 0x2A;

    // Save state of cout stream.
    ios_base::fmtflags flags(cout.flags());

    cout << hex << uppercase << x << " in hex is "
         << dec << x << " in decimal." << endl;

    // Restore cout to its initial state.
    cout.flags(flags);
    
    return 0;
}
 
  • #81
What you should use is mostly dependent on what you want to do with it.

One caveat about C languages is that the syntax can get really cryptic, and the language is so flexible that there are a dozen ways to do every task, some far more appropriate than others, so I'd recommend commenting your code liberally to keep track of what you wrote. Also, for anything performance-related it pays to know something about the compiler, because the size of the code and the speed at which it executes depend heavily on how the source code is written. I'm sure compilers are much better with error-checking now, but it probably still pays to keep the code simple and concise to avoid loose pointers and non-terminating loops. The exception is when you are focusing on speed of execution; then you'll be writing a lot more inline code to give the compiler more explicit direction, or reverting to assembler for hardware-level interfaces and mathematically intense computation where error checking in the compiler slows down the object code. I'd recommend avoiding the temptation to get clever with tightly condensed C code that is impossible to read and compiles poorly, just to save ASCII characters in the source and prove your intelligence.

If you were writing first-person-shooter video games with fast action and reflex response times, speed would matter, but you probably won't be writing such games as a hobby because they are difficult to optimize; otherwise, all those worries about garbage collection slowing things down don't really apply.

If you are writing role-playing games, kids' games, or web-based games, you will probably end up using Python, Java, or some other modular stickum for speed of development and ease of debugging, because there's no point in re-inventing the wheel.

If you are doing embedded systems on microcontrollers, you might toss the whole idea of a Windows IDE in favor of a dedicated solution like a Raspberry Pi. Honestly, since I never did that personally, I don't know. I'm just trying to point out that writing games for your grandkids is not the same thing as writing microcode for an embedded controller. You will probably end up with two different development environments, working in entirely different programming languages, like the 'real' programmers do.

For GPIB just about anything works because it's so slow anyway and it makes sense to use somebody else's interface library if you can rather than build it yourself.

I'll second the shout-out to Linux. For decades Microsoft programmers used Linux to develop Windows. Linux is the industry standard for embedded development AFAIK, plus the OS and development tools are available in free and open source (FOSS) versions, so you won't have to invest a fortune just to get up and running or spend your time in a Microsoft sandbox environment (I mean the consumer type for kiddies, not the development type for security). Engineers I worked with used Java for web-compatible user documentation, and the embedded code was developed in C++/assembler. We used Python for quick development of test fixtures, with syntax simple enough that technicians could readily understand, modify, and operate it interactively via interpreted-mode execution on the manufacturing assembly line.

If you just want to build some simple games for your grandkids, I'd stick with a pre-made Windows solution and nix all the artsy stuff. It won't be fast or elegant, but it will be simple to learn, and you won't need to create your own integrated development environment from scratch on the command line or worry too much about the math or the hardware. The existing libraries will handle all of that for you, and you can find workarounds for speed of execution online if you need them.

If I'm off base, feel free to jump in. I didn't actually do much of this stuff personally, and I have strictly an overview perspective from the outside. I did most of my programming in Unix shell scripting, massaging the output files of one design tool into input for another tool, and most of that was simple syntax transformation or report generation. I did a little C and assembler as well, but it was minimal. I also did some HP BASIC for HPIB/GPIB interfacing, and that's an entirely different ballgame, dedicated to an instrumentation environment for automated testing and control. The most sophisticated C program I wrote drew a Mandelbrot set, and it was incredibly slow compared to the versions that were out there doing all those fractal graphics in the 1990s, so the concern about speed is real when it comes to mathematically intense graphics and not something to be dismissed lightly. The programming language may be less important, but the way you write your code has a tremendous impact on how fast it executes, and I didn't know anything about that, so I wrote turtle code.
 
  • #82
I studied 60 pages, 3 chapters. These are the notes I have in the attached file. 60 pages for this! The next chapter is managing arrays and strings! I am still waiting to get to logic functions and conditional statements to really write some interesting stuff. This is so boring.
 

Attachments

  • #83
Notes aren't bad, just a couple of things
yungman said:
* C++ is an Object Oriented Language.
* The compiler converts .cpp files into machine language called Object Files.
The word "Object" refers to two completely different concepts in those two statements.

yungman said:
*using namespace std to avoid repeating std:: on every line.
*using std::cout; using std::cin at the beginning so don’t have to repeat for the rest of the program.
That's an either/or, you don't do both. I prefer the latter.

yungman said:
-#define: #define pi 3.1416
A #define is different from a constant: the former is a textual substitution that saves you from typing 3.1416 every time you want to enter this literal value in your source code, whereas the latter stores the value 3.1416 in some location in memory to be used at run time.
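A minimal sketch of the difference (the names here are just examples):
C++:
#include <iostream>

#define PI_MACRO 3.1416          // preprocessor text substitution: no type, no storage of its own

const double pi = 3.1416;        // a typed, named constant that the compiler manages

int main()
{
    std::cout << PI_MACRO * 2 << std::endl;  // the preprocessor turns this into 3.1416 * 2
    std::cout << pi * 2 << std::endl;        // uses the named constant, which has type double
    return 0;
}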
 
  • #84
pbuk said:
Notes aren't bad, just a couple of things

The word "Object" refers to two completely different concepts in those two statements.That's an either/or, you don't do both. I prefer the latter.A define is different from a constant: the former basically saves you from typing 3.1416 every time you want to enter this literal value in your source code so the compiler can compile it whereas the latter saves the value 3.1416 in some location in memory to be used at run time.
Each of the statements after the "*" is a completely independent statement; they have no relation to each other. I use * to mark a separate bullet point.

I feel, so far, that it's like looking at cars, programs, and every other computer-related product: they all try so hard to be user friendly, with as much freedom of style as possible to express the same thing, making it easy for people who don't have the knack for computers to use computers. The result is that there are so many different ways of doing the same thing.

The most ridiculous one is typeof! If you don't like the standard name "long long" for the 8-byte data type, you can use:

typeof long long int bossLecturing;// so I can use bossLecturing to define long long int.

It is very common to have 3 or 4 programmers working on the same project, each writing a portion of the program. If anything goes wrong after integration and you need to debug the whole thing, someone needs to read all the sub-programs from the different programmers.

Think of all three programmers defining the same type, each choosing to use typeof: one uses bossLecturing, one uses wifeNagging, one uses motherinlawTalking, all to define long long. Can you imagine how confusing this can get?

What's wrong with using straight dedicated words, with no options? Then everyone understands exactly what it means instead of having to go read the declaration in the program. What's wrong with code that looks like computer code instead of looking like English?

Maybe I am old, but this whole 60 pages could be summed up in 5 pages if all this nonsense were eliminated. It could all be learned in one afternoon instead of reading 60 pages carefully just to make sure it's all nonsense! I still think people either have it or they don't. If anyone needs the code to read like English, this might not be for them... or to put it more bluntly, they don't have what it takes. These complications cause bugs. I still can't get over my 2018 car spending 5 weeks of its first 7 or 8 months in the shop, all computer problems. Still not fixed; I've just learned to live with it.
 
  • #85
yungman said:
* #include <iostream> to perform function of cout to output streams to display using std::cout << follow by
the “asdfongfasd” for words etc.
I get what you're saying, but technically, cout is not a function -- it's a stream object. The actual "function" is the << operator, which is also used to shift bits left. Another standard stream object is cin, and its related operation is >>, which is also used to shift bits right.
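A minimal sketch showing the two roles of <<, purely for illustration:
C++:
#include <iostream>

int main()
{
    int x = 1;
    std::cout << "Stream insertion: " << x << std::endl;  // << sends values into the cout stream object
    std::cout << (x << 4) << std::endl;                   // the same operator shifting bits left: prints 16
    return 0;
}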
yungman said:
*using namespace std to avoid repeating std:: on every line.
*using std::cout; using std::cin at the beginning so don’t have to repeat for the rest of the program.
I agree with what @pbuk said, especially about preferring the latter. A using namespace std statement brings in the entire namespace, which contains a very large number of classes, functions, and other names. A better idea is to have a separate using declaration for each thing you actually want to use.
yungman said:
* unsign short int: 2
* short int: 2
* unsign long int: 4
* long: 4
* int: 4
* unsign long long: 8
* long long: 8
* unsign int: 4
There is no keyword "unsign" -- the correct one is unsigned.
Note that there is some overlap with these types.
unsigned short int is the same as unsigned short
unsigned int is the same as unsigned
unsigned long int is the same as unsigned long
char is a distinct type, though on most platforms it represents the same values as signed char
signed short is the same as signed short int and short
signed int is the same as int
signed long int is the same as signed long and long
And so on.
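Since the notes list byte sizes, here is a minimal sketch that prints what your own compiler actually uses; the sizes are implementation-defined, so the output can differ between platforms:
C++:
#include <iostream>

int main()
{
    // sizeof reports the size of each type in bytes on this particular platform
    std::cout << "short:     " << sizeof(short) << " bytes" << std::endl;
    std::cout << "int:       " << sizeof(int) << " bytes" << std::endl;
    std::cout << "long:      " << sizeof(long) << " bytes" << std::endl;
    std::cout << "long long: " << sizeof(long long) << " bytes" << std::endl;
    return 0;
}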
yungman said:
*Declare constant: const double pi = 22/7 // declares “double pi” is constant = 22/7.
Your constant definition initializes pi to 3.0.
Because 22 and 7 are both int constants, 22/7 is calculated using integer division. Any of 22.0/7, 22/7.0, or 22.0/7.0 would result in floating point division, which is certainly what you intended.
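A minimal sketch of the difference:
C++:
#include <iostream>

int main()
{
    const double a = 22 / 7;      // integer division first: 3, then converted to 3.0
    const double b = 22.0 / 7;    // floating point division: about 3.14286

    std::cout << a << " vs " << b << std::endl;  // prints "3 vs 3.14286"
    return 0;
}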
 
  • #86
yungman said:
Each of the statements after the "*" is a completely independent statement; they have no relation to each other. I use * to mark a separate bullet point.

I feel, so far, that it's like looking at cars, programs, and every other computer-related product: they all try so hard to be user friendly, with as much freedom of style as possible to express the same thing, making it easy for people who don't have the knack for computers to use computers. The result is that there are so many different ways of doing the same thing.

The most ridiculous one is typeof! If you don't like the standard name "long long" for the 8-byte data type, you can use:

typeof long long int bossLecturing;// so I can use bossLecturing to define long long int.

It is very common to have 3 or 4 programmers working on the same project, each writing a portion of the program. If anything goes wrong after integration and you need to debug the whole thing, someone needs to read all the sub-programs from the different programmers.

Think of all three programmers defining the same type, each choosing to use typeof: one uses bossLecturing, one uses wifeNagging, one uses motherinlawTalking, all to define long long. Can you imagine how confusing this can get?

What's wrong with using straight dedicated words, with no options? Then everyone understands exactly what it means instead of having to go read the declaration in the program. What's wrong with code that looks like computer code instead of looking like English?

Maybe I am old, but this whole 60 pages could be summed up in 5 pages if all this nonsense were eliminated. It could all be learned in one afternoon instead of reading 60 pages carefully just to make sure it's all nonsense! I still think people either have it or they don't. If anyone needs the code to read like English, this might not be for them... or to put it more bluntly, they don't have what it takes. These complications cause bugs. I still can't get over my 2018 car spending 5 weeks of its first 7 or 8 months in the shop, all computer problems. Still not fixed; I've just learned to live with it.
You meant to say typedef, right? The point of it is that in C++, types, with their namespaces and their templates and everything else, can get extremely long and complicated. typedef is a convenience to save you time and mental energy.

As an example, if you use CGAL (a computational geometry library), you have to do something like this:

C:
typedef CGAL::Exact_predicates_inexact_constructions_kernel         K;
typedef CGAL::Triangulation_vertex_base_with_info_3<unsigned, K>    Vb;
typedef CGAL::Delaunay_triangulation_cell_base_3<K>                 Cb;
typedef CGAL::Triangulation_data_structure_3<Vb, Cb>                Tds;
typedef CGAL::Delaunay_triangulation_3<K, Tds>                      Delaunay;
typedef Delaunay::Point                                             Point;

int main()
{
    Point p1( 0, 0, 0 );
    Point p2( 1, 1, 1 );
    Point p3( 2, 2, 2 );
}

because without typedef, you'd need to do this:

C:
int main()
{
    // p1
    CGAL::Delaunay_triangulation_3<CGAL::Exact_predicates_inexact_constructions_kernel,
    CGAL::Triangulation_data_structure_3<CGAL::Triangulation_vertex_base_with_info_3<unsigned,
    CGAL::Exact_predicates_inexact_constructions_kernel>,
    CGAL::Delaunay_triangulation_cell_base_3<CGAL::Exact_predicates_inexact_constructions_kernel>>>::Point p1( 0, 0, 0 );

    // p2
    CGAL::Delaunay_triangulation_3<CGAL::Exact_predicates_inexact_constructions_kernel,
    CGAL::Triangulation_data_structure_3<CGAL::Triangulation_vertex_base_with_info_3<unsigned,
    CGAL::Exact_predicates_inexact_constructions_kernel>,
    CGAL::Delaunay_triangulation_cell_base_3<CGAL::Exact_predicates_inexact_constructions_kernel>>>::Point p2( 1, 1, 1 );

    // p3
    CGAL::Delaunay_triangulation_3<CGAL::Exact_predicates_inexact_constructions_kernel,
    CGAL::Triangulation_data_structure_3<CGAL::Triangulation_vertex_base_with_info_3<unsigned,
    CGAL::Exact_predicates_inexact_constructions_kernel>,
    CGAL::Delaunay_triangulation_cell_base_3<CGAL::Exact_predicates_inexact_constructions_kernel>>>::Point p3( 2, 2, 2 );
}

It's not very fun writing out a type that is 4 lines of code long every place you declare a variable.

Or maybe you want computer language looking names? The CGAL developers could have done this:

C:
int main()
{
    // p1
    F10010::C1001<F1010::R1001, F10010::N0000<F10010::T1111<unsigned, F10010::R1001>, F10010::C2002<F10010::R1001>>>::K0011 s001( 0, 0, 0 );

But that would be pretty easy to mess up. "What in the heck is this s001 thing supposed to be?", people would wonder in despair.

Why are CGAL's types so complicated? If you are on a quest to become a C++ expert, then you will know in about 10-15 years ;)
 
  • #87
yungman said:
It is very common to have 3 or 4 programmers working on the same project, each writing a portion of the program. If anything goes wrong after integration and you need to debug the whole thing, someone needs to read all the sub-programs from the different programmers.
This is why we have encapsulation as one of the three fundamentals of OOP, and why unit testing and integration testing are key components of the development cycle. You are now learning the fundamental principles that make these things possible; persevere and once you start to build a real application it will make more sense.

Perhaps it would be good to take a brief "time out" from the book to get a bit of perspective and inspiration by taking a look at the documentation from a real world library that you might want to #include. Here's a link to a WiFi library for the Arduino: https://www.arduino.cc/en/Reference/WiFi.
 
  • #88
I copied in the file as I typed it, following the example in the book. The exercise only wants to read back the values in both arrays; that's NOT the question I want to ask. My question is about the constexpr that sizes the moreNumbers array via Square(ARRAY_LENGTH). The book doesn't say anything about the syntax for using a constexpr to size an array, and I am just very interested in learning it. I tried Google, but all the examples have too many terms I have not learned yet, so they don't make sense to me. There is also a potential error if I write and read back, say, moreNumbers(20): if the array is defined wrong, this would be out of bounds, but the compiler would not flag it (according to the book). So even if I read back the correct value, that doesn't verify that moreNumbers is created correctly.

If you look at line 3 and line 10, something really looks odd to me. Line 3 defines:
line 3: constexpr int Square(int number) { return number * number; }
line 10: int moreNumbers[Square(ARRAY_LENGTH)];
How does line 3 set things up for line 10? Can anyone explain this to me?

Thanks
C++:
#include<iostream>
using namespace std;
constexpr int Square(int number) { return number * number; }

int main()
{
    const int ARRAY_LENGTH = 5;

    int myNumbers[ARRAY_LENGTH] = { 5, 10, 0, -101, 20 };
    int moreNumbers[Square(ARRAY_LENGTH)];

    cout << "Enter index of the element to be changed: ";
    int elementIndex = 0;
    cin >> elementIndex;

    cout << "Enter new value: ";
    int newValue = 0;
    cin >> newValue;

    myNumbers[elementIndex] = newValue;
    moreNumbers[elementIndex] = newValue;

    cout << "Element" << elementIndex << "in array mynuNumbers is: " << myNumbers[elementIndex] << endl;

    cout << "Element"<< elementIndex <<" in array moreNumbers is:" << moreNumbers[elementIndex] << endl;

    return 0;
}
 
  • #89
Sorry I have not responded to the suggestions about my notes; I am all consumed by the question I posted above. I'll come back later.

Thanks
 
  • #90
My advice is to keep it simple and start with C.
You can get started much more easily with C, but if you insist on starting with C++ then you should not complain when you run into a lot of unfamiliar terms and concepts. The entire basic C language is concisely defined in a short book by Kernighan and Ritchie. On the other hand, C++ books run to many hundreds of pages, with the concepts usually scattered throughout and incompletely explained in any one part.
 
