I'm becoming a much better programmer, but maybe not a faster one

  • Thread starter SlurrerOfSpeech
  • Start date
In summary, the conversation discusses the poster's experience with software design and their improvement over the years. They have become adept at techniques such as interfaces, dependency injection, and abstraction to create "good" code, but they question whether this future-proofing actually pays off in speed and efficiency. The discussion also covers the importance of simplicity and of not over-abstracting code, the observation that real changes often fall outside what was predicted, the value of testability, and the relationship between development overhead and debugging effort as programs grow.
  • #1
SlurrerOfSpeech
After ~5 years of real-world experience I've become so much better than I used to be at software design. Interfaces, dependency injection, small classes, composability, factory methods, generics, micro-optimal algorithms, etc. However, I'm not sure I have become faster at delivering work items. It used to be that someone would give me a business problem and I could whip out a solution that worked, although my code was "bad." Now it takes me the same amount of time, but my code has 100 layers of abstraction over the concrete business problem. This is all very "good" code as it can be justified with "We can easily swap out some dependency and it will still work," but I question whether this future-proofing pays off on average.
 
  • Like
Likes Jarvis323 and PeroK
  • #2
To really master it, you must now abstract only as necessary to support future features. Don’t overdo it.

I’ve seen horrible code abstracted to the nth degree that was too fragile to extend with new features. The interfaces locked down what you could do too tightly.

I worked on one super-abstracted system called Taligent. The basic app template was a GuiCompoundDocument that you would subclass from. The problem was that this class was subclassed at least 10 levels deep; startup was slow, GUI responses were slow, and it was very difficult to know which methods to call. It was an era before IDE tools.
 
  • Like
Likes .Scott, QuantumQuest and Ibix
  • #3
Abstraction is valuable when you are writing code that will have a great variety of applications which share some abstract properties, or when you are applying well-known abstract properties to a particular application. Otherwise, it obscures the details of an application that might be simple. In my experience, the vast majority of code is best as simple and direct rather than abstract. Apply the KISS method (Keep It Simple Stupid). I have seen code by people who think that they are developing an entirely new programming language (as though they are making the next Python language) for a simple problem. I personally find that annoying.
 
  • Like
Likes Jarvis323, nsaspook, sysprog and 3 others
  • #4
SlurrerOfSpeech said:
It used to be that someone would give me a business problem and I could whip out a solution that worked, although my code was "bad."
We work on moderately complex software projects here at my work -- they span from small embedded devices all the way up to Enterprise and Cloud-based systems. (Yes, think IoT) We have found that writing "good" code that does not require lots of detailed (schedule killing) debugging is much more important than whipping out something fast to get to market. So to the extent that abstraction helps to write complex applications in a moderate-size development group (spanning many timezones), and to the extent that it helps to maintain and extend the code over the lifespan of the product line, that is a good thing. :smile:
 
  • Like
Likes Wrichik Basu, QuantumQuest and FactChecker
  • #5
"The myth of future proofing"!

SlurrerOfSpeech said:
but I question whether this future-proofing pays off on average.

This is a good question.

I worked mostly in Business Systems. The problem I always saw was that changes, when they came, were often outside the scope of what had been future-proofed.

I sat in many client presentations where they would ask "is the system future proofed". And, of course, the answer had to be yes.

But future proofed against what scope of change?
And this was against a background of IT systems fundamentally changing architecturally every 5 years or so.

Then, when the client submitted an innocuous looking change request, the costs were enormous. For example, a requirement for full scale system, integration, performance and user acceptance testing would massively outweigh the raw development effort.

The code changes were only a minor part of the overall implementation costs.
 
  • Like
Likes Klystron, sysprog, QuantumQuest and 2 others
  • #6
PeroK said:
Then, when the client submitted an innocuous looking change request, the costs were enormous. For example, a requirement for full scale system, integration, performance and user acceptance testing would massively outweigh the raw development effort.

The code changes were only a minor part of the overall implementation costs.
This speaks to the wisdom of "code for testability". Where I worked, people based their entire career on developing a test system which could inject and monitor variable values in a system that was running in real time. Static variables with fixed addresses at test points were desirable.
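A minimal sketch of that idea in C++ (hypothetical names; the real systems described above were far more elaborate):

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical test points. 'volatile' keeps the compiler from optimizing
// reads/writes away, and statically allocated file-scope variables get stable
// addresses in the memory map, so an external test rig can inject or monitor
// them while the program runs in real time.
volatile std::int32_t tp_commanded_speed = 0;  // injected by the test rig
volatile std::int32_t tp_measured_speed = 0;   // monitored by the test rig

std::int32_t speed_error() {
    // Control code reads the test points like ordinary variables.
    return tp_commanded_speed - tp_measured_speed;
}

int main() {
    tp_commanded_speed = 100;  // a test rig would poke this from outside
    tp_measured_speed = 90;
    std::cout << "error = " << speed_error() << '\n';  // prints 10
}
```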
 
  • #7
FactChecker said:
I have seen code by people who think that they are developing an entirely new programming language ... for a simple problem. I personally find that annoying.
Every problem has its own language - but most of the time it's good enough to put that into comments rather than cement it into the structure of the product :thumbup:

SlurrerOfSpeech said:
However I'm not sure I have become faster at delivering work items. ... Now it takes me the same amount of time, but my code has 100 layers of abstraction
You have cleared a level but not the whole labyrinth. Just keep gathering XP and it'll get better eventually.
 
  • #8
The advantages of fully applying formal design methodologies don't really kick in until around 700-800 lines, but they pull away rather sharply from the "organic" (to be polite) method after that.

Reason, far as I can figger, is that development overhead increases linearly with program size/complexity, but debugging is exponential.
 
  • #9
hmmm27 said:
Reason, far as I can figger, is that development overhead increases linearly with program size/complexity, but debugging is exponential.

I'm not so sure about that. When layers upon layers are programmed to abstractions, it can be very difficult to figure out from reading the code what it actually does at runtime in the context of the application that I'm trying to debug.
 
  • Like
Likes Jarvis323, PeroK and FactChecker
  • #10
SlurrerOfSpeech said:
I'm not so sure about that. When layers upon layers are programmed to abstractions, it can be very difficult to figure out from reading the code what it actually does at runtime in the context of the application that I'm trying to debug.

Much, much easier if you know what the "abstractions" used were, and if they were applied consistently. Your shop might have standards for such things.
 
  • #11
SlurrerOfSpeech said:
However I'm not sure I have become faster at delivering work items.

I wouldn't say you necessarily should have, as long as the work items are quite or vastly different from what you have already developed, and given the time span you're talking about. Software development skills need a lot of time to mature, but the whole thing is also very dependent on specific factors, which would make for a very long discussion.

SlurrerOfSpeech said:
It used to be that someone would give me a business problem and I could whip out a solution that worked, although my code was "bad." Now it takes me the same amount of time, but my code has 100 layers of abstraction over the concrete business problem. This is all very "good" code as it can be justified with "We can easily swap out some dependency and it will still work," but I question whether this future-proofing pays off on average.

A solution that works but is written using "bad" code - which I take to mean not thoroughly thought out / designed and/or tested and/or documented - won't give your application a brilliant future, and I think it's needless to say why, as it is very obvious. On the other hand, piling on an unnecessary number of layers of abstraction without the appropriate design work beforehand is also a call for trouble. Abstraction is neither free nor cheap, generally speaking.

It definitely helps toward having code that can be modified / adapted without a lot of effort, and in a number of other ways as well, but overdoing it has performance costs - to say the least - and ultimately leads to a complex piece of code whose fit to the demands and constraints of the required solution is often very questionable. Unfortunately, it is very evident that software development has followed this trend in many kinds of applications. Leaving aside the professional reasons that justify this, including protection of intellectual property, I think the whole thing gets out of hand in a vast number of cases.

Now, for the question of future-proofing, I would say that in most cases it pays off on average, as long as the client stays inside the boundaries of what he / she initially asked for. As @PeroK says, don't be surprised if the client asks for something that essentially cancels the future-proofing. So, I think that keeping to reasonable measures is the best thing to do.
 
  • #12
embedded devices - future proofing
Consider the case of hard drives. 32 bit sector addressing was good enough until hard drives exceeded 2 TB in size. The host to drive interface had already been changed to allow for 48 bit sector addressing, but it was a significant change to the firmware for the drives. Another example was the addition of the SSE family of instructions to X86 processors, where in the case of most programming languages, new code had to be written to take advantage of the xmm registers and their parallelization of operations.
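As a rough sanity check of that 2 TB figure (a minimal sketch in C++, assuming the classic 512-byte sector size):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    const std::uint64_t sector_size = 512;  // classic sector size in bytes

    // Largest addressable capacity = number of addressable sectors * sector size.
    const std::uint64_t cap32 = (1ULL << 32) * sector_size;  // 32-bit LBA
    const std::uint64_t cap48 = (1ULL << 48) * sector_size;  // 48-bit LBA

    std::cout << "32-bit LBA: " << cap32 << " bytes (2 TiB)\n";
    std::cout << "48-bit LBA: " << cap48 << " bytes (128 PiB)\n";
}
```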

abstraction versus performance
In the case of hard drives, other embedded devices, and some applications, performance is a key factor, and abstraction beyond a certain point affects performance. Compile time abstraction, such as C++ templates, allows for abstraction that generally doesn't impact performance.
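A minimal sketch of what "compile-time abstraction" means here (hypothetical toy types, not drive firmware):

```cpp
#include <array>
#include <cstddef>
#include <iostream>

// Toy "device" used only for illustration.
struct RamDisk {
    std::array<unsigned char, 512> read_sector(std::size_t /*lba*/) const {
        std::array<unsigned char, 512> sector{};
        sector.fill(1);
        return sector;
    }
};

// Compile-time abstraction: the device type is a template parameter, so the
// call to read_sector() is resolved (and typically inlined) at compile time,
// with no virtual dispatch; the abstraction itself adds no runtime cost.
template <typename Device>
long long sum_sector(const Device& dev, std::size_t lba) {
    long long sum = 0;
    for (unsigned char byte : dev.read_sector(lba)) sum += byte;
    return sum;
}

int main() {
    RamDisk disk;
    std::cout << sum_sector(disk, 0) << '\n';  // prints 512
}
```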

encapsulation - get - set
In some cases the usage of get and set becomes excessive (this is somewhat opinion based). It's rare that a significant change to a class member isn't going to affect the code that does the get, modify, and set for that class member.

faster programmer
Usually a programmer does get faster at both design and implementation, unless a project is unique compared to prior projects, or requires the development of a new algorithm.
 
  • #13
Get and set are good future-proofing schemes as they allow you to add validation during a set operation, or to change an instance attribute into a computed value.

Kotlin, for example, provides getter/setter methods if needed but always makes them appear as direct access. In contrast, Java is more explicit and insists on getters/setters for access to instance attributes in its JavaBean scheme.
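The same idea in C++ terms (a minimal sketch with hypothetical names; the posts here use Kotlin and Java, but the principle is identical):

```cpp
#include <iostream>
#include <stdexcept>

// The setter validates its input, and the attribute could later become a
// computed value without changing any call site that goes through get/set.
class Account {
public:
    double balance() const { return balance_; }      // getter

    void set_balance(double value) {                  // setter with validation
        if (value < 0.0)
            throw std::invalid_argument("balance cannot be negative");
        balance_ = value;
    }

private:
    double balance_ = 0.0;
};

int main() {
    Account a;
    a.set_balance(100.0);              // ok
    // a.set_balance(-1.0);            // would throw invalid_argument
    std::cout << a.balance() << '\n';  // prints 100
}
```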

Interfaces are also a good future proofing scheme allowing you to define a protocol for classes and allowing you to change out one class for another. As an example, a tax application might have a calculator interface with an agreed upon list of methods.

Calculator classes for each tax year can be written supporting these methods but doing slightly different calculations for each tax year. The tax program can maintain the same GUI but with changing tax year computations and the interface provides a clean separation.
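A minimal sketch of that arrangement (hypothetical names and made-up rates, with a C++ abstract class standing in for the interface):

```cpp
#include <iostream>
#include <memory>

// "Interface": the agreed-upon list of methods.
class TaxCalculator {
public:
    virtual ~TaxCalculator() = default;
    virtual double taxDue(double income) const = 0;
};

// One class per tax year, each doing slightly different calculations.
// The rates below are made up purely for illustration.
class TaxYear2022 : public TaxCalculator {
public:
    double taxDue(double income) const override { return income * 0.20; }
};

class TaxYear2023 : public TaxCalculator {
public:
    double taxDue(double income) const override { return income * 0.22; }
};

// Hypothetical factory: the GUI asks for a year and only ever sees the interface.
std::unique_ptr<TaxCalculator> calculatorFor(int year) {
    if (year == 2022) return std::make_unique<TaxYear2022>();
    return std::make_unique<TaxYear2023>();
}

int main() {
    auto calc = calculatorFor(2023);
    std::cout << calc->taxDue(50000.0) << '\n';  // same GUI code, new tax year
}
```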

There's also the notion that a few good interfaces can make a program's flow easier to understand, although you give up the ability to easily trace from class to class through an interface, since there may be several possible classes behind it. Instead, you'll need a debugger to see which class is actually used on the other side of the interface.
 
Last edited:
  • #14
Over the years, I found that I was able to produce functioning codes faster because I had a toolbox of my prior codes to draw from, i.e., like a toolkit. As you start writing a code from scratch, you find you need an algorithm that you did a couple of years ago and you adapt it into the current project. You might improve it some, but the basic flow is there. The code group I worked in had developed some of these for geometry and other basic operations so that you could get a code running quickly and be in line with the group's coding standards.
 
  • Like
Likes Klystron, sysprog, jedishrfu and 1 other person
  • #15
SlurrerOfSpeech said:
However I'm not sure I have become faster at delivering work items.

This is the motivation for low-code platforms such as OutSystems. These platforms bring an industrial process to software development, and reusable abstraction is the core of that. You can deliver apps about four times faster once you get up to speed, which is a huge productivity boon. They also slash the ongoing maintenance effort, which combats technical debt. As has been noted, common abstractions for aspects such as interfaces, connectors, and user elements are helpful, but that's the tip of the iceberg when a drag/drop automatically creates the entire CRUD UI for a class of database items!
 
  • Like
Likes sysprog
  • #16
My recommendation is to practice coding quickly. Coding quickly and well are two different skills and a good programmer should be able to do both. You'll likely have situations where you'll need to code quickly professionally.

I find that future-proofing does not pay off. I rarely write code that I would call future proof. I just write it in such a way that IF I need to change it later, I can without too much trouble. For example, any time I'm dealing with a database, I will put all of my queries in one place. However, unless the specs specifically say I need to make it replaceable, the inside of the db class might be a mess.

Think about it like this. If a customer says "mysql is nice, but I really need to be able to use Oracle" there are three possible responses:

1) No problem, I just have to write the DAL for oracle because everything is already using dependency injection
2) Okay, I need to modify the database layer, then write the Oracle one
3) Seriously? I have queries and SQL dependencies everywhere, it'll take a week to refactor all of that.

Sounds like you are usually going to give response 1. It's okay to give response 2. Just don't write it so poorly that you become case 3.
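A rough sketch of what response 1 relies on (hypothetical names, stubbed queries, not anyone's actual code):

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Data-access interface: the rest of the app only talks to this.
class CustomerDal {
public:
    virtual ~CustomerDal() = default;
    virtual std::vector<std::string> customerNames() = 0;
};

// MySQL-backed implementation (the queries live here; stubbed for the sketch).
class MySqlCustomerDal : public CustomerDal {
public:
    std::vector<std::string> customerNames() override {
        // In real code: run "SELECT name FROM customers" against MySQL.
        return {"Alice", "Bob"};
    }
};

// Response 1: adding Oracle support means writing one new class, because
// everything else already receives the DAL by injection.
class OracleCustomerDal : public CustomerDal {
public:
    std::vector<std::string> customerNames() override {
        return {"Alice", "Bob"};  // same data, different backend in real life
    }
};

void printCustomers(CustomerDal& dal) {  // dependency injected
    for (const auto& name : dal.customerNames())
        std::cout << name << '\n';
}

int main() {
    MySqlCustomerDal mysql;
    printCustomers(mysql);  // swap in OracleCustomerDal later without touching callers
}
```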
 
  • Like
Likes PeroK, FactChecker and jedishrfu
  • #17
newjerseyrunner said:
My recommendation is to practice coding quickly. Coding quickly and well are two different skills and a good programmer should be able to do both. You'll likely have situations where you'll need to code quickly professionally.
My beginner-level skills and only some hobbyist experience tell me that Coding Quickly means Making Mistakes That Might Be Difficult to Find and Fix. This destroys the goal of Coding Quickly.
 
  • #18
symbolipoint said:
My beginner-level skills and only some hobbyist experience tell me that Coding Quickly means Making Mistakes That Might Be Difficult to Find and Fix. This destroys the goal of Coding Quickly.
On any program, there are time and budget constraints. So speed and efficiency become important.
 
  • #19
A few points, many that have already been addressed:
1) Future-proofing: If you know of specific changes that are already planned, then coding with those features in mind makes sense. Otherwise, I have found that attempting to predict what changes or what kind of changes will be made in the future is a losing game.
2) Instead of future-proofing, think maintainability. Work to make your code easy to understand. Document what is not obvious - including the reason that the code exists at all.
3) Fast coding: There are a few things that go on during the "coding" process. First, all the most detailed design work happens at coding time - everything that precedes that was either less detailed or only a guess. Second, is the coding itself - everything related to the syntax and form of the source code. Finally there is the typing. Certainly the coding and the typing accelerate as you gain more experience. But those final detailed design decisions are key. As far as the design is concerned, give it as much time as it needs. By scrimping on the process of understanding the requirements and other design steps, you can get very fast coding - but you risk running into dead ends or maintenance issues that will sink the schedule.
 
  • Like
  • Informative
Likes Klystron, PeroK, sysprog and 1 other person
  • #20
.Scott said:
A few points, many that have already been addressed:
1) Future-proofing: If you know of specific changes that are already planned, then coding with those features in mind makes sense. Otherwise, I have found that attempting to predict what changes or what kind of changes will be made in the future is a losing game.
2) Instead of future-proofing, think maintainability. Work to make your code easy to understand. Document what is not obvious - including the reason that the code exists at all.
3) Fast coding: There are a few things that go on during the "coding" process. First, all the most detailed design work happens at coding time - everything that precedes that was either less detailed or only a guess. Second, is the coding itself - everything related to the syntax and form of the source code. Finally there is the typing. Certainly the coding and the typing accelerate as you gain more experience. But those final detailed design decisions are key. As far as the design is concerned, give it as much time as it needs. By scrimping on the process of understanding the requirements and other design steps, you can get very fast coding - but you risk running into dead ends or maintenance issues that will sink the schedule.
I like IBM's model of backward compatibility. Code that was written for machines of decades ago can run unchanged on the latest systems without a hiccup. It can't use the more recent features, but it can still do now what it did then; if that model is kept in effect, code written yesterday and today will still work tomorrow, just as yesterday's code still runs today.
 
  • Like
Likes Klystron
  • #21
sysprog said:
I like IBM's model of backward compatibility. Code that was written for machines of decades ago can run unchanged on the latest systems without a hiccup.

Is this their mainframe hardware? And what OS? Irrespective, old code is hard to maintain and esp. hard to extend. I worked on an insurance system that is a couple of decades old. We hit a speed hump, and one of the older devs literally pointed across the office and said "Speak to Paul, he wrote that code in the first place." Needless to say, that helped, but even Paul struggled to figure out what his code was doing more than ten years after he wrote it. Without Paul? We probably would have just hooked in a newly coded extension, that's way cheaper than decoding what dinosaur devs were thinking :biggrin:
 
  • Like
Likes sysprog
  • #22
Tghu Verd said:
Is this their mainframe hardware? And what OS? Irrespective, old code is hard to maintain and esp. hard to extend. I worked on an insurance system that is a couple of decades old. We hit a speed hump, and one of the older devs literally pointed across the office and said "Speak to Paul, he wrote that code in the first place." Needless to say, that helped, but even Paul struggled to figure out what his code was doing more than ten years after he wrote it. Without Paul? We probably would have just hooked in a newly coded extension, that's way cheaper than decoding what dinosaur devs were thinking :biggrin:
Yes, I was referring to IBM mainframe hardware. In reference to today's machines, the 'mainframe' term is retained primarily to distinguish the direct successor machines, which run a superset of the instruction set of the predecessor machines, from the other systems available. In terms of OS, I'm thinking of the whole IBM mainframe OS family, all of which operating systems observe backward compatibility for application code, as the mainframes on which they run do for any code. Old code is hard to maintain? Well, if it was poorly written in the first place, maybe it is. To gain perspective, please go and write some machine language fixes and mods with only core dump printouts to work with, you big crybaby. :cry: :rolleyes: :wink:
 
  • #23
Tghu Verd said:
Is this their mainframe hardware? And what OS?
Typically Z/OS. Think of it as hardware and an OS that can run multiple virtual machines, each with its own virtual hardware and virtual OS (tri-modal addressing), but at full speed and in parallel.

https://en.wikipedia.org/wiki/Z/OS
 
  • #24
Ah yes, the platform that IBM spruiked as their "Highlander" - there need only be one. Pretty much jumped the shark when you could install X86 blades and run Windows apps.

"Wow, a mainframe that we can put a PC in," said nobody ever!
 
  • #25
sysprog said:
I like IBM's model of backward compatibility. Code that was written for machines of decades ago can run unchanged on the latest systems without a hiccup. It can't use the more recent features, but it can still do now what it did then; if that model is kept in effect, code written yesterday and today will still work tomorrow, just as yesterday's code still runs today.

There's a big difference between writing a mainframe O/S (or any O/S) and a "business application", which is what I took @.Scott's advice to apply to.
 
  • #26
Since my last post to this thread, I've realized that there are cases that fall between "maintainability" and "future-proofing".
For example, whenever I create a new file format I include the version number of the file format and the byte size of the header in the header. Is this future-proofing or is this maintainability? Whichever it is, I've learned that it's a tiny effort compared to the headaches it often avoids.
About a year ago, when asked to write a tool for copying FPGA code into an embedded flash device, I didn't use the hex file as the source, but a file that included provenance information (who compiled it, what their version number was, the target FPGA device model and version number, the date they compiled it, etc.) so that I could tuck that information into a sector of the flash memory as well. It was only 6 months later that someone walked into my office with a radar sensor that had been programmed with that tool and sorely needed to know that information. So, I probably shouldn't say that future-proofing is always a "losing game". But you certainly need to be careful about which of those games you choose to play.
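A minimal sketch of that kind of header (hypothetical names and field sizes), where storing the format version and the header's own byte size is what buys the wiggle room: a reader built for version 1 can still skip header_size bytes and find the payload in a later version's file.

```cpp
#include <cstdint>
#include <iostream>

#pragma pack(push, 1)
struct FpgaImageHeader {
    char          magic[4];          // file identifier, e.g. "FPGA"
    std::uint16_t format_version;    // bump whenever the layout changes
    std::uint16_t header_size;       // total bytes in this header
    std::uint32_t payload_bytes;     // size of the FPGA image that follows

    // Provenance, tucked into the flash alongside the image itself:
    char          built_by[32];      // who compiled it
    char          tool_version[16];  // version of the build tool
    char          target_device[24]; // target FPGA model / version
    std::uint32_t build_date;        // e.g. 20230115 (YYYYMMDD)
};
#pragma pack(pop)

int main() {
    std::cout << "header is " << sizeof(FpgaImageHeader) << " bytes\n";  // 88
}
```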
 
  • Like
Likes Klystron and sysprog
  • #27
This discussion reminds me of a comment by an IT professional: "Well, if there is no requirement that the code should work, I can write it in less than an hour".
 
  • Like
Likes Klystron and Rive
  • #28
rcgldr said:
Typically Z/OS. Think of it as hardware and an OS that can run multiple virtual machines, each with its own virtual hardware and virtual OS (tri-modal addressing), but at full speed and in parallel.
That seems more like z/VM (virtual machine) or EMIF (ESCON Multiple Image Facility). z/OS is the descendant of MVS (Multiple Virtual Storage).
Tghu Verd said:
Ah yes, the platform that IBM spruiked as their "Highlander" - there need only be one. Pretty much jumped the shark when you could install X86 blades and run Windows apps.
This is so incorrect that I barely know where to begin. You appear to be contending that IBM touted its mainframe technology as sufficient for all computational purposes, which it never has done, and you then seem to suggest or say that not only is this sufficiency untrue, but that blade server farms have obviated the need for the mainframe architecture. Both claims are manifestly false. If you think either of them to be true, please post some support for them, rather than just couching them in trendy terms.
"Wow, a mainframe that we can put a PC in," said nobody ever!
IBM was the primary corporate sponsor of the PC, and among the first companies to bring about integration between PCs and mainframes. In fact, IBM mainframes have used high-end single board computers, running OS/2, in their HMCs (hardware management consoles) since the '90s.

 
  • #29
PeroK said:
There's a big difference between writing a mainframe O/S (or any O/S) and a "business application", which is what I took @.Scott's advice to apply to.
Indeed there is, and in general, the backward compatibility paradigm applies to both.
 
  • #30
sysprog said:
You appear to be contending that IBM touted its mainframe technology as sufficient for all computational purposes, which it never has done

I feel like I've touched a nerve, but around the turn of the century, IBM was telling the company I worked for - we were partners - that the mainframe could host traditional Z-series banking apps and, with the appropriate blades (or perhaps they were called 'cards', it was a while ago), could run Linux and Windows apps as well. They had impressive ROI graphs showing how this was considerably more cost effective...and supposedly more secure...than typical approaches. None of our customers showed any shred of interest; it seemed an unlikely mixing of big iron and less disciplined business-unit computing. So yes, they were telling us it was sufficient for 'all' computational purposes that a regular business might have had at the time. To be fair, we didn't take that to mean SCADA or specialist types of ancillary computing, or even ML/AI, which was not really a thing at the time.

And sorry, I didn't keep any of that collateral, it was entirely secondary to what we were doing.

sysprog said:
but that blade server farms have obviated the need for the mainframe architecture.

Nope, not saying that and didn't say that, you're reading something else into my few words. It was exactly the opposite, the mainframe was meant to subsume your PC hardware.
 
  • #31
Tghu Verd said:
Nope, not saying that and didn't say that, you're reading something else into my few words. It was exactly the opposite, the mainframe was meant to subsume your PC hardware.
Please translate the following 2 sentences of yours into standard English without metaphor:
Tghu Verd said:
Ah yes, the platform that IBM spruiked as their "Highlander" - there need only be one. Pretty much jumped the shark when you could install X86 blades and run Windows apps.
 
  • #32
sysprog said:
Indeed there is, and in general, the backward compatibility paradigm applies to both.

For a lot of business applications there is no concept of backward compatibility. You have version 1 with a defined set of functionality for a defined set of users and a defined set of interfaces; and, you have version 2 with a revised specification. There's certainly no principle that version 2 must be a superset of version 1 functionality.

If, for example, in version 2 a group of users is no longer going to use the application (they have perhaps moved on to a more specific application for them - or perhaps that part of the business has been sold), then there is no obligation to include a revised specification for them.

Or, for example, much of the system may have moved from batch printing to email to communicate with customers. Do you have to include the old printing functionality in the new version, just in case the decision is reversed?

In truth, it's a moot point since you would have a certain budget and timescale for version 2 development and, in the sort of environment I worked, there would be no possibility of adding unspecified backward compatibility to the solution.

We may be talking at cross purposes here.
 
  • #33
PeroK said:
For a lot of business applications there is no concept of backward compatibility. You have version 1 with a defined set of functionality for a defined set of users and a defined set of interfaces; and, you have version 2 with a revised specification. There's certainly no principle that version 2 must be a superset of version 1 functionality.

If, for example, in version 2 a group of users is no longer going to use the application (they have perhaps moved on to a more specific application for them - or perhaps that part of the business has been sold), then there is no obligation to include a revised specification for them.

Or, for example, much of the system may have moved from batch printing to email to communicate with customers. Do you have to include the old printing functionality in the new version, just in case the decision is reversed?

In truth, it's a moot point since you would have a certain budget and timescale for version 2 development and, in the sort of environment I worked, there would be no possibility of adding unspecified backward compatibility to the solution.

We may be talking at cross purposes here.
A concrete example of backward compatibility is that original MS Word .doc files can be read and edited by MS Word 2016, even though the .docx file format has superseded the .doc format. The earlier versions of the product could not have been built with anticipation of the newer functionalities of the later versions as effectively as the later versions were able to accommodate the existing formats of their predecessors. I think that reliance on an existing and ongoing commitment to some form of backward compatibility is more reasonable than trying to impose a come-what-may forward compatibility requirement.
 
Last edited:
  • Informative
Likes Klystron
  • #34
sysprog said:
A concrete example of backward compatibility is that original MS Word .doc files can be read and edited by MS Word 2016, even though the .docx file format has superseded the .doc format. The earlier versions of the product could not have been built with anticipation of the newer functionalities of the later versions as effectively as the later versions were able to accommodate the existing formats of their predecessors. I think that reliance on an existing and ongoing commitment to some form of backward compatibility is more reasonable than trying to impose a come-what-may forward compatibility requirement.
MS Word is not a business application. There must be hundreds of millions of users of Word. A typical business application that I'm talking about would have a small number of customers. Often only one.

Although, generally, my experience was in putting together software and hardware components from various sources. MS Word would be a standard off-the-shelf component.

Towards the end of my career, a general inability to distinguish between something like Word and a full-blown business application - perhaps to manage hospital patient information - was at the root of several IT disasters.

Anyway, I'm out of the industry now, so I ought not to have an opinion anymore.
 
  • Wow
Likes sysprog
  • #35
PeroK said:
MS Word is not a business application. There must be hundreds of millions of users of Word. A typical business application that I'm talking about would have a small number of customers. Often only one.
Many typical business application sets (e.g. accounts receivable, accounts payable, customer maintenance, general ledger, inventory control) that could run on a System/370 of 45 years ago, could still run unchanged on a z/OS system of today.
PeroK said:
Although, generally, my experience was in putting together software and hardware components from various sources. MS Word would be a standard off-the-shelf component.
Many of us tended to call that kind of activity 'cobbling' things together.
PeroK said:
Towards the end of my career, a general inability to distinguish between something like Word and a full-blown business application - perhaps to manage hospital patient information - was at the root of several IT disasters.
That's just plain terrible, but it's sometimes hard to determine whether a fault is in vendor equipment or code, or in something in-house for which the customer is responsible.
PeroK said:
Anyway, I'm out of the industry now, so I ought not to have an opinion anymore.
That last line is clearly a non sequitur. The opinions of seasoned veterans should always be in the mix. I appreciate the idea of handing over the reins to the new guard; however, they will do well to ensure that they do not fail to take up the insights of the old guard.

It's interesting to me that you mention hospital patient information.

The 'patient information' term can refer to medical records regarding individual patients; however, in the normal parlance of hospital administration, 'patient information systems' are what the physician interacts with in order to produce the sets of advisory to-the-patient information sheets.

When I was doing Y2K work at a major hospital complex, the IBM mainframe for which I was the systems programmer, and which had interfaces to multiple other systems, was running a database product that had to be upgraded to a then-new Y2K-compliant version. The new version had to be able to work with the prior version's set of databases and to change all the 2-digit-year date fields to allow 4-digit years. The success of that upgrade depended foundationally upon effective before-and-after anticipation, observation, and implementation of backward compatibility.
 
  • Like
Likes Klystron and PeroK

1. How can I become a better programmer?

Becoming a better programmer takes practice and dedication. Some tips for improving include regularly practicing coding, seeking feedback from others, and continuously learning new languages and techniques.

2. Is becoming a faster programmer more important than becoming a better one?

While speed is important, it is not the only factor in being a successful programmer. It is important to prioritize quality and efficiency over speed, as rushing can lead to errors and lower quality work.

3. Can I become a better programmer without natural talent?

Yes, programming is a skill that can be learned and improved upon with practice and dedication. While some individuals may have a natural aptitude for coding, anyone can become a successful programmer with hard work and determination.

4. How can I balance becoming a better programmer with meeting deadlines?

It is important to prioritize both becoming a better programmer and meeting deadlines. One way to balance these is to regularly practice coding and continuously improve your skills, which can ultimately lead to faster and more efficient coding in the long run.

5. What resources can I use to become a better programmer?

There are many resources available for improving programming skills, such as online courses, coding communities, and books. It is important to find resources that work best for your learning style and to continuously seek new opportunities for growth and improvement.
