A Sad End to an Epic Run for Me with Trainz.

Well, I was "young and stupid" halfway through the last century and I have now reached the "old" bit, but as far as the latest Trainz versions are concerned I am sure I have not reached the "wise" bit :).
Doug.
Hey, you and I both. I still have a lot of learning to do when it comes to the Trainz world.
 
Considerably better off?
This would be wonderful in an ideal world. If N3V had billions of clams in its bank account with a bevy of programmers that rival the numbers working at Oracle and Microsoft, then maybe they could put the software through a lengthy QA test that lasts for years prior to release.

Back in the olden days, and I mean the 70s, this was the way software was written. Programs weren't written for public consumption; they were written for specific systems. Some software packages were available, such as General Ledger and other accounting packages, written in-house by the company that created the hardware, but that was the exception: it was usually up to the customer to write programs in-house for the very expensive systems they had purchased.

Then the consumer market opened up and consumers wanted products quickly. The development cycle had to change to accommodate the market, and it has gotten shorter and shorter, with more corners cut, to keep up with the ever-demanding public.

To make matters worse, it's no longer private companies that run the show. Big venture capitalists are calling the shots now, putting pressure on companies to come up with products faster while cutting corners to maximize profitability: cut the workforce to the barest minimum, work whoever's left to the bone, and offshore the rest at the same time. It doesn't help that the program is developed by various groups working on code without really seeing what the product does. This is done mostly to prevent software piracy, by keeping the contract developers from seeing all the parts together.

Because of this fierce competition, companies no longer go through the lengthy QA cycle before releasing products and instead rely on tools to check the code for errors. The code may be syntactically correct, but that doesn't mean it always works as intended.
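As a hypothetical illustration (none of this is from any real product's code), here is a tiny Python function that a compiler, linter, or type checker would pass without complaint, yet it doesn't do what was intended:

```python
# Hypothetical example: syntactically valid code that passes every
# automated check but silently does the wrong thing.
def sum_first_n(values, n):
    """Intended to sum the first n items of values."""
    total = 0
    for v in values[:n - 1]:   # bug: the slice should be values[:n]
        total += v
    return total

# The error only surfaces when the behaviour is actually exercised:
print(sum_first_n([10, 20, 30], 3))  # prints 30; the intended answer is 60
```

Static tools check the form of the code, not its intent; only a test that runs the code and checks the answer catches this kind of bug.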

Once the program is together, it's tested for the barest amount of time before the public does the testing. As I said before, leaving the testing to the public is well and good at the end, just prior to release, but not so good during early development because there are so many bugs. Leaving it to the public also means there's still only a small core of real testers, while the rest buy into the early releases with zero interest in testing.

If the product doesn't crash to the desktop the majority of the time, it's released to the public. This is when the bugs appear because so few tested the software in the first place.

During this phase, the developers are sent off to a new project while the QA team takes over. Tech support takes in the bug reports and QA validates them before sending them back to the developers for fixes. If warranted, a hotfix or a service pack or two is released to address immediate bugs.

In the meantime, the development team addresses a few bugs and throws more energy into new fancy features and the cycle continues.

The thing is, it's not just N3V that's caught in this cycle and business model: Apple, Microsoft, Oracle, Adobe (especially Adobe), and so many other developers are too. A classic example of this mess came out of Colossal Order with their new Cities: Skylines II. They were forced to rush the product to market by Paradox and its investors. This turned what might have been a stellar release completely flat, causing much bad publicity and a tarnished reputation.
 
Without belaboring the point, the same public that wants everything now also complains when things don't work as promised. Remember the old adage: you can have it cheap, good, or fast; pick any two.

Realistically, could it be any other way today?
 
Yes, that too and spot on.

I'm not so sure it could be unless people want to wait and wait. Honestly, I wouldn't want to be in the game creation industry where the consumer hits the hardest today.
 
unless people want to wait and wait
Which raises the point of how much testing is enough?

Today software is labelled with its OS (e.g. "designed for MS Windows") and usually little else. How many different hardware and OS variations does that meager label cover? How many different combinations of hardware, system and hardware drivers, AV and anti-malware software would have to be tested?
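To put rough numbers on that question, here is a back-of-the-envelope sketch in Python; the category counts below are invented for illustration, not measured from any real support matrix:

```python
import math

# Invented, illustrative counts of variations per category.
categories = {
    "Windows versions/builds":       6,
    "GPU vendor + driver versions": 12,
    "CPU families":                  4,
    "AV / anti-malware products":    8,
    "Input devices":                 5,
}

# Every combination is a distinct configuration that could, in
# principle, behave differently.
total = math.prod(categories.values())
print(f"Distinct configurations to test: {total}")  # 6*12*4*8*5 = 11520
```

Even with these modest made-up numbers the matrix runs to five figures, which is why "designed for MS Windows" can never mean "tested on every Windows machine".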
 
Isn’t this a bit of a red herring? Isn’t it the OS that’s dealing with the hardware and providing the interface for the programmer?

I don’t think anyone ‘hits the metal’ these days; it’s all handled by abstraction layers.
 
Yes, the OS has the API "hooks" that allow the program to interface with the hardware, but go back through the forum posts and you will find complaints about bugs in Trainz where a particular piece of hardware does not work (or suddenly stops working after a Trainz SP update). Other complaints will be about data being "lost" or other strange behaviour.

In many such cases (I would not claim all) the fault has not been with Trainz but with faulty system/hardware drivers and OS updates.

A few posters, over a year ago now if I recall correctly, had never bothered to keep their OS up to date - clearly a human issue, not a hardware or software one. The result was that when a new version of Trainz was released that used a feature or features from a recent OS update, their version of Trainz no longer worked correctly (i.e. the new feature(s) did not work).

Certain AV and VPN programs, but not all, will block Content Manager from downloading from the DLS.

Wireless mice and keyboards are a common hardware issue for gaming software, and not just with Trainz.

There have been quite a few cases where MS has issued an SP that caused problems for systems with specific hardware and software configurations.

Once again how much testing is enough before you can be "certain" that you have caught all these types of issues?

As a further example (not Trainz related): where I was last working I noticed that a new piece of software would not work on some computers but would work on other identical computers. This would even apply to two identical computers, side by side, in the same room. All the computers had been built from the identical image file sent out by the file server overnight. I eventually (I was scraping the barrel by that stage) identified that the only difference between the computers was the network switch through which their connection to the file server was routed. One particular brand of switch was blocking all communication between the Internet and the software package - and there was no software configuration in those switches that I could change to solve the issue. Moving those switches to network locations where this particular software package was not used solved the problem.
 

Ouch! You had to debug the network to find the issue. Rather you than me.

One issue I recall that I thought was interesting was an incident where a swap file (which pages memory out to disk) kept growing and growing over several days until the main server virtually ground to a standstill. It turned out that the programming language system I was using was inefficient at memory management for small data structures of around 32 bytes or so. It wasn't recycling such small chunks of memory, so these structures never got reclaimed and consequently chewed up a rather large amount of memory. I fixed it by creating a pool of these data structures that was managed independently of the built-in garbage collection system, i.e. the code would ask for a record/struct and, when finished with it, would hand it back to the pool. It worked a treat.
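A minimal sketch of that pooling idea, written in Python purely for illustration (the original system and its language aren't named above): records are borrowed from a pool and handed back when done, instead of being left for the runtime to reclaim.

```python
# Illustrative object pool: small fixed-shape records are recycled
# by the application rather than left to the garbage collector.
class Record:
    __slots__ = ("a", "b")          # a small, roughly fixed-size record

class RecordPool:
    def __init__(self):
        self._free = []             # returned records, ready for reuse

    def acquire(self):
        # Hand out a recycled record if one exists, else allocate a new one.
        if self._free:
            return self._free.pop()
        return Record()

    def release(self, rec):
        # Clear the record and put it back in the pool instead of
        # dropping the reference.
        rec.a = rec.b = None
        self._free.append(rec)

pool = RecordPool()
r1 = pool.acquire()
pool.release(r1)
r2 = pool.acquire()
print(r1 is r2)   # True: the same object was recycled, not reallocated
```

The pool trades a little bookkeeping for predictable memory use: the number of live records can never exceed the high-water mark of simultaneous borrowers.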
 
Once again how much testing is enough before you can be "certain" that you have caught all these types of issues?
All this is true, but the product is bought from N3V so the buck stops with them. No doubt if a person's system is insufficient to meet the minimum requirement, support will point this out.
 
I could not find the original post, but I seem to recall that this "track object bug" is one that had existed in the code for some time but did not surface until the SP5 update, and even then it only affected track in pre-existing (pre-SP5) routes. As I and others have pointed out, bugs are a constant feature of software. The question is, and always has been: how much time and how many resources do you spend on testing in the unrealistic hope of finding them all?

Finding a bug will (normally) lead to a "fix", and then another round of testing in the unrealistic hope of finding any new bugs the fix has introduced. It is a never-ending cycle that continues with every new release.
 
this "track object bug" is one that had existed in the code for some time but did not surface until the SP5 update
I encountered this bug in T:ANE and reported it back then and it proved elusive. It was one of those things that couldn't be tracked down because the bug occurred intermittently and there was no way of knowing exactly what triggered it.
 
It is a never ending cycle that continues with every new release.
I seem to recall IBM mainframes reached a sort of steady state with bugs: on average, each patch that solved one bug had a 98% chance of introducing a new one.

John
 
I seem to recall IBM mainframes reached a sort of steady state with bugs.
I had an IBM computer and had absolutely no issues with it. In fact, the printer that originally came with the computer outlasted the IBM computer itself by almost 20 years... 1995-2011, RIP IBM printer!
 
 
Mainframe?

John
Yup, for a business my family had at the time... If I had to buy that computer today it would cost upwards of $8,000 to $10,000... The computer I had was one of the first IBM computers to come out in the 1990s, when the Internet first came into the picture...
 