I'm becoming a much better programmer, but maybe not a faster one

  • Thread starter SlurrerOfSpeech
In summary, the conversation discusses the speaker's experience with software design and their improvement over the years. They have become adept at using techniques such as interfaces, dependency injection, and abstraction to create "good" code, but they question whether this future-proofing actually pays off in terms of speed and efficiency. The discussion also touches on the importance of simplicity and of avoiding over-abstraction; the observation that real changes often arrive outside of what was predicted; the importance of testability; the suggestion to classify problems into different "sizes" as a way to track progress in performance; and the trade-off between development overhead and delivery speed.
  • #36
sysprog said:
Please translate the following 2 sentences of yours into standard English without metaphor:

Without metaphor, eh? I considered writing this response as pseudocode but decided that would be unnecessarily cheeky, so...

Around the year 2000, IBM's product marketing assumed that their Z Series was a sufficiently compelling platform that it would entice clients to consolidate all their business computing needs onto it, not just the Z/OS ones. The mechanism for this was dedicated x86 hardware that allowed for Unix and Windows to be partitioned into the Z, all managed from a central software control console application. It included virtualization-type capabilities and resource sharing between operating systems.

IBM reps told us this presented an unbeatable offering, but for some reason, IBM failed to appreciate that each class of computing community considered its needs separate and had no wish to be involved with the others. One Z Series admin told me there was no way a PC was going to "pollute" his mainframe, and that attitude seemed to be a major stumbling block for the whole concept.

It seemed that a small number of clients adopted this, but it was not what the majority of the market wanted, and soon enough, promotion of the concept ceased.
 
  • Like
Likes PeroK
  • #37
Tghu Verd said:
Around the year 2000, IBM's product marketing assumed that their Z Series was a sufficiently compelling platform that it would entice clients to consolidate all their business computing needs onto it, not just the Z/OS ones. [...]

Around this time my company was asked to submit a bid for a new reservations system based on an IBM mainframe offering. I volunteered to put the solution together (no one else would touch it, but I thought it might be quite interesting!). One problem was that our Data Centre pricing model was based on MIPS, and we had to quote the costs of the system for all possibilities, including very large transaction volumes. IBM had a good staggered pricing model for their products and licences, but our Data Centre people loaded the bid with support and operator costs that were simply linear per MIPS, and the quoted costs were astronomical.

I argued long and hard with our mainframe Data Centre people. I said to them: you keep telling us that the mainframe is competitive, yet when we try to put together a bid based on a mainframe solution (at the customer's insistence), you load it with unjustifiable support and operations costs.

Anyway, it was ridiculously expensive compared to the Unix/Oracle alternative we were bidding against. It was a shame, because I really believed the IBM mainframe hardware and software were a good option. The mainframe, as a platform, had a lot of advantages.

The UNIX/Oracle support teams (ironically, that was my background) had been forced to become more flexible and commercially aware. The mainframe people were "take-it-or-leave-it" dinosaurs. And that, not any failing of the IBM mainframe itself, was why we never submitted another bid for a new system based on mainframe technology.
 
  • Like
Likes Klystron
  • #38
PeroK said:
Anyway, it was ridiculously expensive compared to the Unix/Oracle alternative we were bidding against. It was a shame, because I really believed the IBM mainframe hardware and software were a good option. The mainframe, as a platform, had a lot of advantages.

Agree with that, @PeroK, a shame really, but the best tech doesn't always win. (Though IBM sold about $20B of Z Series kit last year, so I guess "lose" is a relative term!)
 
  • Like
Likes PeroK
  • #39
Let me give an example of what I mean.

Suppose there's a very simple requirement: Write a program to recursively search a directory and count the number of "*.dll" files.

I could easily whip up a working solution using the C# DirectoryInfo class in about a minute.
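
For concreteness, a minimal sketch of that one-minute version (the root path is just a placeholder):
Code:
using System;
using System.IO;

class DllCounter
{
    static void Main()
    {
        // Recursively count every *.dll file under the root directory.
        var root = new DirectoryInfo(@"C:\SomeRoot");
        int count = root.GetFiles("*.dll", SearchOption.AllDirectories).Length;
        Console.WriteLine(count);
    }
}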

However, you can think of this problem as a specific instance of the more general problem "find the matching items in a possibly infinite tree of nodes containing items" and create abstractions like

interface IDataReference<TData>
{
    TData Read();
}

interface IDataReferenceFilter<TData>
{
    bool IsFiltered(IDataReference<TData> dataReference);
}

interface IDataNode<TData>
{
    IEnumerable<IDataNode<TData>> Children { get; }

    IEnumerable<IDataReference<TData>> Values { get; }
}


and then implementations like

// basically a wrapper over FileInfo
class FileReference : IDataReference<Stream> { /* ... */ }

class FilePathFilter : IDataReferenceFilter<Stream> { /* ... */ }

// basically a wrapper over DirectoryInfo
class FileDirectory : IDataNode<Stream> { /* ... */ }
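
To show what those abstractions buy you, here is a hedged sketch of the generic search they enable (the FindMatches helper is hypothetical, not part of the design above, and it assumes IsFiltered returns true for items that match):
Code:
using System.Collections.Generic;

static class TreeSearch
{
    // Lazily walks the tree breadth-first, so callers can stop
    // enumerating early even if the tree is effectively infinite.
    public static IEnumerable<IDataReference<TData>> FindMatches<TData>(
        IDataNode<TData> root, IDataReferenceFilter<TData> filter)
    {
        var queue = new Queue<IDataNode<TData>>();
        queue.Enqueue(root);
        while (queue.Count > 0)
        {
            var node = queue.Dequeue();
            foreach (var value in node.Values)
                if (filter.IsFiltered(value))
                    yield return value;
            foreach (var child in node.Children)
                queue.Enqueue(child);
        }
    }
}
Counting the DLLs would then become something like FindMatches(rootDirectory, dllFilter) plus System.Linq's Count().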


but is it worth it?
 
  • #40
What would Dilbert do?
Pointy-Haired Boss said:
I have a very simple requirement: Write a program to recursively search a directory and count the number of "*.dll" files.
Dilbert said:
Sure, Boss, simple requirement; simple solution -- here:
Code:
dir *.dll /s
Pointy-Haired Boss said:
No, I want just the count; not all that other stuff.
Dilbert said:
Do you want the program to say what the count is a count of, too, or just say the number?
Pointy-Haired Boss said:
I want you to stop trying to make me do your job. I want to say what I want, and then you figure out what I meant, and you come back with what I wanted. Is that clear?
Dilbert said:
Clear as fog, Boss; I'll get right on it.
Pointy-Haired Boss said:
Attaboy.
What, hypothetically, is the origin of the requirement in your example? Why would you need to write a program to do something that can be done with a single command? What's the real requirement?

Whether you provide a more abstract or general-purpose solution, or a more specific one, or simply re-use existing code that already solves the problem, should depend on the real requirements you're trying to address.
 
  • Like
Likes Klystron, Mark44 and QuantumQuest
  • #41
There is a difference between what I called "future proofing" and backward compatibility.

Future proofing is when you try to include features or elements to support unknown future requirements. For example, including the header size and the version number of the file format in a data file header will have no use in version 1.00 of the code - but it will allow backward compatibility in later versions.
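A minimal sketch of that idea, assuming a simple binary format (the magic number and field layout are purely illustrative):
Code:
using System.IO;

static class FileHeader
{
    public static void Write(BinaryWriter w)
    {
        // Version 1.00 writes these fields and never reads them back;
        // they exist so a later version can recognise and convert old files.
        w.Write(0x4D594654);   // magic number identifying the format
        w.Write(12);           // header size in bytes
        w.Write((ushort)1);    // major version
        w.Write((ushort)0);    // minor version
    }

    public static (ushort Major, ushort Minor) Read(BinaryReader r)
    {
        r.ReadInt32();                 // magic (validate this in real code)
        int headerSize = r.ReadInt32();
        ushort major = r.ReadUInt16();
        ushort minor = r.ReadUInt16();
        // Skip any header fields added by a future version we don't know about.
        r.BaseStream.Seek(headerSize - 12, SeekOrigin.Current);
        return (major, minor);
    }
}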

Backward compatibility means that newer revisions of the application(s) will support older user data sets (data files, scripts, programming, etc.). This can be done either natively or with conversion tools. For example, the latest versions of Word can still read the earliest Word files - but to edit them, Word needs to convert them to the newer format.

It was also mentioned earlier in this thread that operating systems are different from most business applications. The key difference, as it relates to backward compatibility, is the degree to which application developers have control over the existing data sets the application supports. When developing something like Word, there is never any possibility of going out and converting all existing Word files to the latest format. But in many business situations there is only a single database, and it is completely practical to include all the current applications that support it with each backup of that data set. In such a case, a one-off database conversion program is all that is needed to assure system continuity whenever those applications are updated.
 
  • #42
sysprog said:
What would Dilbert do?

What, hypothetically, is the origin of the requirement in your example? Why would you need to write a program to do something that can be done with a single command? What's the real requirement?

Whether you provide a more abstract or general-purpose solution, or a more specific one, or simply re-use existing code that already solves the problem, should depend on the real requirements you're trying to address.
One of my rules of survival on the job: If your boss asks you to do something, and it is easy to do, then do it -- quickly -- without questions or arguments. ;>)
 
  • Like
Likes QuantumQuest, Klystron and jedishrfu
  • #43
sysprog said:
What would Dilbert do?

That's a great sequence, and it pretty much illustrates what Agile software development tries to solve from a requirements perspective. Whether Agile works depends on a lot of local factors, but the concept of getting the people who want something closer to the team doing the work - and delivering incremental improvements faster - is a good one.

In terms of @.Scott's future proofing, I've found that hard to design for. Perhaps I'm poor at predicting the future, but apart from simple practices such as avoiding global variables and keeping components self-contained where possible, any "feature" I thought worth lobbing in on a "just in case" basis turned out to be wasted time. I figured that was just me, but the theme of future proofing being a waste of time seems common in dev forums, and Steve Konves' blog on the topic seems a good summary.
 
  • #44
One way to think about it is in terms of objects, instances, classes, and interfaces as a means of future-proofing your code. Design to the interface, and classes that implement the interface can then be swapped out for better ones without changing your overall logic, as sketched below. Also consider designing with the model-view-controller pattern, where the model holds the data your program needs, the view asks the model for whatever data it needs to display, and the controller handles all the event activity.
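
A small illustration of that swap (all names here are hypothetical):
Code:
interface IStore
{
    void Save(string key, string value);
}

class FileStore : IStore
{
    public void Save(string key, string value) { /* write to disk */ }
}

class CloudStore : IStore
{
    public void Save(string key, string value) { /* call a web API */ }
}

class ReportService
{
    private readonly IStore _store;
    public ReportService(IStore store) { _store = store; }  // dependency injection

    public void Publish(string name) { _store.Save(name, "report body"); }
}

class Program
{
    static void Main()
    {
        // Swapping FileStore for CloudStore changes only this line;
        // ReportService and the rest of the logic are untouched.
        var service = new ReportService(new CloudStore());
        service.Publish("q3-report");
    }
}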
 
