
View Full Version : internal test build 77412



WindWalkr
July 23rd, 2015, 04:40 AM
New internal build available. A lot of work-in-progress changes in this one, so performance might be wildly better or wildly worse depending on your configuration. Let us know. QuickDrive train placement should hopefully be working more reliably, and a few crashes have been quashed. A few changes have been made to the texture streaming algorithms; this shouldn't result in any user-visible change but if you download this build then please ensure that streaming is enabled so that we can pick up on any problems. The CM list windows should now respond in a more platform-standard way to modifier key input during mouse selection.

chris

ianwoodmore
July 23rd, 2015, 10:01 PM
I've experienced a problem on my maxi install with CM custom filters.
TANE Userdata from TANE 77327 was retained unaltered. Build 77412 was pointed to this folder.
I started a Database Rebuild that I'll discuss separately. On completion I found that only some CM filters were working. This included default as well as custom filters.

The operators 'AND' and 'AND NOT' had been replaced with the default 'saved filter'.
On restoring the correct operators, the filters worked OK.
After saving, the settings persisted.

WindWalkr
July 23rd, 2015, 10:23 PM
Well spotted. It looks like one of the enumerations was incorrectly changed. This will be reverted in the next build.

chris

JCitron
July 23rd, 2015, 10:54 PM
Hi Chris,

I didn't do much testing of CM myself this time; instead, I did some driving.

Leaves are still popping up on the SpeedTrees, though not as badly as before. This was on the Warwick route, which really looks nice, by the way, even with a few places just outside of Fletcher where there is floating grass. I'm not sure if this is a bug in this build or whether they were always there.

With my modest settings set as follows in the launcher:

Shadow Quality High
Main shadow resolution: 4096
Texture Detail: High
Post Processing: High
Antialiasing: 2x

all options checked.

Then in game:

Draw Distance: 8000 m
Scenery Detail: Ultra
Tree Detail: High
Post Processing: High
Processing Objects Behind Camera, checked.

I saw between 24 fps in the worst case and up to 49 fps in the best case.

I then saved my Driver session, quit, and came back where I left off without any crashes. Then I loaded up my own route, one I imported from TS12 into T:ANE retail about a week ago, did some quick driving in there, and that worked as well.

I'll keep poking at it some more over the coming days and report back when I get a chance. Performance-wise, this appears to be the best build so far. ;)

John

pcas1986
July 24th, 2015, 02:10 AM
CM is much more user-friendly than before, but the Submit Edits hotkey (Ctrl+Shift+E) only works intermittently. Initially it worked fine and I was able to commit single and multiple assets. But, after downloading some assets from the DLS, the hotkey stopped working. I'm not sure if the downloading is part of the problem, but I was trying to clear some missing dependencies.

andi06
July 24th, 2015, 06:09 AM
Generally frame rates are comparable and the display is visually very good, still slightly jittery though. The display seems to be better (quicker and smoother) than some previous builds at handling complete changes of viewpoint.

Early days, but I haven't seen crashes on exiting QuickDrive or random dropouts during a Driver session. However, I'm now seeing intermittent CTDs on exiting the game or on quitting one route to load another; there's no obvious pattern, but these are different crash triggers (you seem to have moved some of the problems even if you haven't fixed them). All of these have been blue screens and none have generated crash dumps.

One specific case:

1. Open the Launcher
2. Open the Profiler window
3. Run the game, then exit and close both the game and the launcher, but forget about closing the Profiler and do something else for a while.
4. On closing the Profiler window I had a BSOD quoting 'Reference_by_Pointer'.

This didn't happen on a second try but I'm not going to try too hard to generate blue screens.

I just drove a route during a thunderstorm and I have to say that I love the crisp direct shadows :-)

WindWalkr
July 24th, 2015, 09:11 AM
I'd also like some feedback on validation in general. We'd like to update the DLS parsing tool soon and we'll be using this version of the code. Is anyone aware of any regressions as compared to the current version of the DLS parsing (or compared to the current retail TANE, if you're not very familiar with the DLS)?

chris

whitepass
July 24th, 2015, 09:44 AM
1. A lot better FPS.

2. My Alco PB1 PRR had a warning in 76401 and an error in 77412; this is the under-500-poly LOD which I forgot to make.

3. I got an e-mail about my content on the DLS that needs error fixes: two I will fix, one is an old track I will not, and two are PaintShed templates that no one can fix.

andi06
July 24th, 2015, 04:25 PM
I'd also like some feedback on validation in general.

Error Flag: - Indexed meshes are not supported for traincars as of trainz-build 3.8. It is recommended that you upgrade 'left-doors.im' to a LOD mesh.

1. left-doors.im is a 443-poly animated door mesh attached to a culled attachment point (it's not my asset's fault that this isn't working). There are a couple of other smaller *.im meshes in this asset, so there must be a size threshold which isn't quoted - DON'T LEAVE US IN THE DARK!

2. When you state 'it is recommended that ...' then by definition it is not an error, even in Australian English.

This needs to be demoted to a warning and if the mesh visibility is restricted by other means (such as culling) dropped altogether.

The same asset is trainz-build 4.2, and CM is warning about the absence of a shadow mesh. A 4.2 asset can't be installed in any version of Trainz where a shadow mesh can be used, so this needs to be dropped.

andi06
July 24th, 2015, 04:29 PM
<kuid:122285:8400> (this is the asset I sent you a week or so ago) is still giving errors:

- The meshes in LOD level 3 must total at least 20% fewer polygons than the next higher LOD.
- The meshes in LOD level 2 must total at least 20% fewer polygons than the next higher LOD.
- The meshes in LOD level 1 must total at least 20% fewer polygons than the next higher LOD.

The meshes in question are in a mesh-asset, the errors are incorrect.
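For illustration, the "20% fewer polygons" rule those errors quote can be sketched as a simple check. This is hypothetical code, not the actual Content Manager implementation; the function name and return shape are made up:

```python
def lod_chain_ok(poly_counts, min_reduction=0.20):
    """Check that each LOD level totals at least `min_reduction`
    fewer polygons than the next higher-detail level.

    poly_counts: polygon totals ordered from LOD0 (highest detail)
    down to the coarsest LOD.
    Returns a list of (lod_level, passes) pairs for each lower LOD.
    """
    results = []
    for level in range(1, len(poly_counts)):
        higher = poly_counts[level - 1]
        this_level = poly_counts[level]
        passes = this_level <= higher * (1.0 - min_reduction)
        results.append((level, passes))
    return results

# A chain that roughly halves each level passes every check.
print(lod_chain_ok([10000, 5000, 2500, 1200]))
# A near-flat chain fails the 20% requirement at LOD1.
print(lod_chain_ok([10000, 9500]))
```

The point of complaint in the post above would be a validator applying this kind of check to meshes inside a mesh-library asset, where the LOD relationship may not hold in the same way.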

Others and I have suggested that the texture.txt/tga error should be automatically fixed. Your response to this was so conservative that you could have stood for Parliament in the UK.

You need to apply the same level of caution when issuing errors like this. Failing to catch a non-fatal error (and things like this are non-fatal) is infinitely preferable to condemning a valid asset.

If you reject a single asset which should have been accepted then you have failed miserably.

WindWalkr
July 24th, 2015, 09:02 PM
1. left-doors.im is a 443-poly animated door mesh attached to a culled attachment point (it's not my asset's fault that this isn't working). There are a couple of other smaller *.im meshes in this asset, so there must be a size threshold which isn't quoted - DON'T LEAVE US IN THE DARK!

You are correct, the check kicks in at 300 polygons. Below that, we still don't really agree with the usage but it's not worth getting picky about at the current time.



2. When you state 'it is recommended that ...' then by definition it is not an error, even in Australian English.

"You have a broken leg. It is recommended that you see your doctor."

Avoiding the doctor won't prevent it from being broken. It's also not the only solution, just an obvious recommendation.



This needs to be demoted to a warning and if the mesh visibility is restricted by other means (such as culling) dropped altogether.

Perhaps. There's a tradeoff between how much smarts we build into validation, how tough we make validation, and how badly the object performs.

* The heuristics are never going to match human logic. There will always be cases where you can argue "that may be true in general, but it's unnecessary here because X". As long as what it asks for is not actively hurting performance or making development much harder than it should be, then I think we're doing a reasonable job here.
* Too tough, and people will have a hard time creating content.
* Too lax, and people will have a hard time playing the game.

We're aware that the heuristics here aren't perfect and can be improved. One of the better options is possibly to render the mesh at different distances and see how it performs as a whole (scripts and all.) The problem with this approach is that it's very slow compared to the rules-based validation we run currently.

I think that perhaps one of the differences in our lines of thinking is that you may be assuming that we're trying to squeeze an extra 10-50% performance gain out of the loco models. We're actually aiming at more like 1000% in some areas. The current models are very, very inefficient. There are reasons for all of these inefficiencies, but at the end of the day we're an order of magnitude away from where we need to be. This is going to take a concerted effort from everybody involved, and it isn't going to happen quickly or painlessly. My prime focus right now is on getting tools out to creators that can highlight the problems, after which we're going to have to work through the various reasons that these problems exist and work out ways to solve them.



The same asset is trainz-build 4.2, and CM is warning about the absence of a shadow mesh. A 4.2 asset can't be installed in any version of Trainz where a shadow mesh can be used, so this needs to be dropped.

Agreed. We already have this on the task list for CM.


thanks,

chris

WindWalkr
July 24th, 2015, 09:10 PM
<kuid:122285:8400> (this is the asset I sent you a week or so ago)

Yep. I haven't had a chance to look at this one yet. I'll let you know what I find when I get to it.



If you reject a single asset which should have been accepted then you have failed miserably.

We'll have to agree to disagree on that one. We're playing with some pretty coarse filters, and sometimes we'll pick up things that could, after some human inspection, have been passed as harmless. If we're picking up thousands of such objects and all of them are harmless, then we've absolutely got an avenue to improve our checks. If we're picking up a small handful of objects, then it's probably not worth improving our checks. If we're picking up thousands but many of them are legitimately faulty, then we're doing our job reasonably well. The fact that some technically-faulty-but-practically-okay assets get caught in the net is the price we pay for having this kind of validation.

Again, validation will never attain human-level decision making capabilities, and we're not going to try to push in that direction. As long as there are good reasons for each check, then we're happy with the occasional asset being flagged that a human would have said "well, that's harmless enough." Note that I'm not claiming every check is perfect. Every check is added to address some real-world problem, but sometimes there are outcomes (such as the LOD level checks on mesh libraries) which cause unintended side effects on some assets because the assets work differently from what the implementor had in mind when the check was developed. In these cases, we need to re-evaluate the check and either give it up as too hard (living with the original problem) or make specific workarounds for the problem cases. This is something that we need to consider very carefully rather than rushing into.

chris

pcas1986
July 25th, 2015, 12:08 AM
Just to be clear. When testing validation against the release and current beta, should the assets being tested be at build 4.2?



I think that perhaps one of the differences in our lines of thinking is that you may be assuming that we're trying to squeeze an extra 10-50% performance gain out of the loco models. We're actually aiming at more like 1000% in some areas. The current models are very, very inefficient. There are reasons for all of these inefficiencies, but at the end of the day we're an order of magnitude away from where we need to be. This is going to take a concerted effort from everybody involved, and it isn't going to happen quickly or painlessly. My prime focus right now is on getting tools out to creators that can highlight the problems, after which we're going to have to work through the various reasons that these problems exist and work out ways to solve them.

Perhaps you might elaborate on this statement. I've invested a lot of time into examining LOD not only for the main parts of my locos, but, more recently, in the attached meshes. I don't recall ever reading about the 300 poly limit for attachments. Your post of the other day saying there was a problem with culling was also the first I've read of that problem. If I had known about that then I wouldn't have wasted my time trying to fix it on my end. Perhaps this is what Andi means by "leaving us in the dark".

If the Preview Asset poly information can show us how many polys are "in use" as one zooms in and out, that would be useful. Dynamic information about texture efficiency within Preview Asset would also be a plus. Unfortunately this is a bit "chicken and egg" because you have to build the model first and then discover there is a problem. Perhaps lessons learned might be applied for later models to reduce rework.

narrowgauge
July 25th, 2015, 12:23 AM
Chris

Every time I light up CM in a new build, the first thing I check is the 'Asset ID' column width. It is still locked. I guess this is not seen as important, but it seems such a simple change to make, and such a silly thing to have done in the first place. Please, can you fix it? You want new content; don't make it harder than it needs to be.

Peter

WindWalkr
July 25th, 2015, 01:19 AM
Just to be clear. When testing validation against the release and current beta, should the assets being tested be at build 4.2?

Anything at all, although I'm most interested in items above 3.5 as I believe that's the current cut-off of the DLS parser (so anything less than that won't be immediately affected.)



Perhaps you might elaborate on this statement. I've invested a lot of time into examining LOD not only for the main parts of my locos, but, more recently, in the attached meshes. I don't recall ever reading about the 300 poly limit for attachments. Your post of the other day saying there was a problem with culling was also the first I've read of that problem. If I had known about that then I wouldn't have wasted my time trying to fix it on my end. Perhaps this is what Andi means by "leaving us in the dark".

I'm honestly not sure what you're referring to. You clearly have some specific post(s) in mind but I've no idea which ones.

To try to reword/elaborate on the above:

* We want people to use LM files for locomotives. As Andi has noted, sometimes this is less important (to the point where it's not actively beneficial) but on the other hand, sometimes it's critically important.
* Since the validation code has no real way to know what is important and what is not, it currently uses "300 polygons" as a decider. If you're above that, it will complain. If you're below that, it will assume that you either know what you're doing, or that there are bigger issues than the use of IM vs LM.
* I can't think of any case where LM is detrimental (as compared to IM) on a loco. At best, if you've done everything else perfectly, there will be no benefit but no real detriment. For the majority of cases, it will be actively beneficial to use LM.

The ideal scenario is to not use an extra mesh at all, of course, but we're a long way from being able to identify which cases might be legit vs which are problematic for that.
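The threshold behaviour described in the points above can be sketched as follows. The function name, messages, and structure are hypothetical; the real validator's logic is not public, only the 300-polygon cut-off mentioned here:

```python
def check_attached_im_mesh(poly_count, has_lm=False, threshold=300):
    """Rules-based check mirroring the behaviour described above:
    an attached .im mesh over `threshold` polygons without a LOD
    (.lm) chain is flagged; below the threshold the validator
    stays quiet rather than getting picky."""
    if has_lm:
        return "ok"
    if poly_count > threshold:
        return "error: upgrade to a LOD (.lm) mesh"
    return "ok (below threshold)"

# A 443-polygon attached mesh (like left-doors.im) gets flagged;
# a 250-polygon mesh slips under the threshold.
print(check_attached_im_mesh(443))
print(check_attached_im_mesh(250))
```

This is exactly the "coarse filter" trade-off under discussion: the rule cannot see whether the attachment point is culled, so it flags every over-threshold case uniformly.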



If the Preview Asset poly information can show us how many polys are "in use" as one zooms in and out, that would be useful.

Yeah, it does that. It's important not to become too fixated on the polygon counts, however, to the detriment of other areas that should also be optimised.



Dynamic information about texture efficiency within Preview Asset would also be a plus.

I'm not really sure what you mean by "texture efficiency" here. Let me know what you think would be useful.



Unfortunately this is a bit "chicken and egg" because you have to build the model first and then discover there is a problem. Perhaps lessons learned might be applied for later models to reduce rework.

Exactly this. I don't expect that everyone will turn around and rebuild their existing models, but I do hope that we'll be able to improve the next generation of content substantially, and over time we'll weed out the problem cases.

chris

pcas1986
July 25th, 2015, 01:46 AM
...
I'm honestly not sure what you're referring to. You clearly have some specific post(s) in mind but I've no idea which ones.


Post #11, but your comments below satisfy my question.


...

To try to reword/elaborate on the above:

* We want people to use LM files for locomotives. As Andi has noted, sometimes this is less important (to the point where it's not actively beneficial) but on the other hand, sometimes it's critically important.
* Since the validation code has no real way to know what is important and what is not, it currently uses "300 polygons" as a decider. If you're above that, it will complain. If you're below that, it will assume that you either know what you're doing, or that there are bigger issues than the use of IM vs LM.
* I can't think of any case where LM is detrimental (as compared to IM) on a loco. At best, if you've done everything else perfectly, there will be no benefit but no real detriment. For the majority of cases, it will be actively beneficial to use LM.

The ideal scenario is to not use an extra mesh at all, of course, but we're a long way from being able to identify which cases might be legit vs which are problematic for that.



...
I'm not really sure what you mean by "texture efficiency" here. Let me know what you think would be useful.


Despite quite a lot of advice from you on using materials and textures, I remain a little confused about how many, how big, etc. Based on posts I've read elsewhere, I think other CCs are as well. If there were some way that the Preview Asset tool could do some analysis and provide an opinion on the efficiency of the textures used by a model as it progresses through LOD changes, then I would find that useful.

pcas1986
July 25th, 2015, 02:25 AM
I'm generally happy with the validation process, but I cannot create a loco/traincar for 4.1 or up without incurring the 500-poly error. This, I believe, is the culling problem. So I don't see how anyone can upload a 4.1 loco asset at this time unless it is a really simple model.

pcas1986
July 25th, 2015, 02:41 AM
My virus buster (Trend Micro Ultimate) keeps blocking TANE when I try to open an asset for editing (edit in Explorer). It then hides both the exe and the desktop link in some secret location. Despite my putting it on an exception list, it always takes me some time to figure out how to restore the files.

N3V may want to consider talking to Trend Micro about it.

WindWalkr
July 25th, 2015, 03:15 AM
Despite quite a lot of advice from you on using materials and textures, I remain a little confused about how many, how big, etc. Based on posts I've read elsewhere, I think other CCs are as well. If there was some way that the Preview Asset tool can do some analysis and provide an opinion on the efficiency of the textures used by a model as it progresses through LOD changes then I would find that useful.

There are two ways to give advice like this:

1. Relating to a specific item. We can pick it apart and determine how efficient it is compared to other similar items, speculate on how it can be improved, and note which areas appear wasteful. The preview tool can help you determine the raw numbers, but it can't tell you a "theoretical best" for your object- you'll have to compare to other objects and discuss what can be done.

2. Generalised advice. This by nature will give you a number of conflicting points, and you'll need to work out which of them are most problematic for your object. Examples are "reduce overall polygon count", "reduce polygon count as rapidly as possible using LOD", "ensure that distant polygon count is as low as possible for bulk usage", "use as few as possible materials", "don't use larger textures than you need", "avoid using animation", "reduce animation bone counts", "avoid unnecessarily large textures", "avoid texture replacement", "don't use attachment points", "don't use extra meshes", "ensure that any of the above points which were violated at higher LODs are dropped in lower LODs", "avoid LM for static scenery", "share materials between meshes for stitched objects", "animated or moving objects cannot be stitched", "don't use a library/atlas approach for items that won't be used together", "don't use too many LODs", etc.

What we're trying to do with the preview tool is give you a way of generating metrics by which you can compare and discuss your object. Giving meaningful advice is much more complex, and probably beyond the capabilities of a simple computer program. "Minimise all the numbers" is accurate but somewhat useless advice. The preview tool can be used, however, to determine whether a given change has been beneficial for performance. Rather than just assuming that a given technique is a massive benefit, you can test it on your specific object in a variety of scenarios, and confirm whether your expectations hold true in reality.
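That "measure, change, re-measure" workflow can be sketched like this. The metric names below are made up for illustration; they are not Preview Asset's actual output, just stand-ins for whatever numbers the tool reports:

```python
def relative_change(before, after):
    """Compare two metric snapshots taken before and after an
    optimisation; negative values mean the metric went down
    (usually an improvement for cost metrics like these)."""
    return {key: (after[key] - before[key]) / before[key]
            for key in before}

# Hypothetical snapshots from two profiling runs of the same scene.
before = {"draw_calls": 120, "polygons": 48000, "frame_ms": 21.0}
after  = {"draw_calls": 65,  "polygons": 45000, "frame_ms": 14.5}

for metric, delta in relative_change(before, after).items():
    print(f"{metric}: {delta:+.1%}")
```

The value of this approach is that it tests a specific object in a specific scenario, rather than assuming a given technique is always a win.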

chris

pcas1986
July 25th, 2015, 03:40 AM
Thanks for responding and the advice. I'll wait and see how the Preview Asset initiative works. If we are going to do comparisons then a reference model or models would be very useful. Rather than expecting N3V to produce them maybe the CCs could come up with a list of suitable assets. Just a thought.

clam1952
July 25th, 2015, 04:35 AM
My virus buster (Trend Micro Ultimate) keeps blocking TANE when I try to open an asset for editing (edit in Explorer). It then hides both the exe and the desktop link in some secret location. Despite my putting it on an exception list, it always takes me some time to figure out how to restore the files.

N3V may want to consider talking to Trend Micro about it.

Have you tried putting the T:ANE folder, and wherever you have the data folder, on the exceptions list? There should be a Quarantine folder somewhere on your HD where these files go, though that may not be any use, as they are probably archived and can only be restored by the Trend program.

andi06
July 25th, 2015, 04:49 AM
I think that perhaps one of the differences in our lines of thinking is that you may be assuming that we're trying to squeeze an extra 10-50% performance gain out of the loco models. We're actually aiming at more like 1000% in some areas.
If you look at my post about the shoddy payware Duchess you will see that I fully appreciate just how much improvement is required. It seems necessary to preface every post that I make on this subject by saying that I support what you are trying to do in general terms.

However, I have written about half a dozen assets since these test builds were issued. I'm a fairly experienced author and, for the time being at least, I have your ear and the benefit of your advice. All of these assets are LOD'd to the eyeball with the intention of looking at exactly the issues being discussed, yet 50% of them are falling foul of erroneous or over-zealous validation. These are straightforward vehicles and traincars, not the tricky stuff that I sometimes play with.

It is ridiculous to add lm.txt to a 430-poly animated mesh attached to a culled attachment; we both know this. If validation forces this, I will work around the rules by making LOD2 an invisible mesh at the same cut-off point as the mesh cull - a painful and pointless exercise all round. Again, if you can't get it 100% right you are failing.

If you continue along these lines a couple of things will happen:

1. A number of content creators will decide that life is too short and stop producing assets for Trainz at all. No improvement overall.
2. Those that remain will resort increasingly to techniques which work around and side step your validation errors. No improvement overall.

A more constructive approach would be to continue to issue warnings, the more the merrier, but instead of rejecting inefficient assets under red flags you should assign a one-to-five-star performance rating and publish this on the DLS. Carrot rather than stick, and one which gives route-builders something to work with and might just get some asset builders on your side for a change.

At last you seem to be taking creation tools seriously; about time, and I hope that they will be useful, but:

1. We have discussed methods of relaxing validation for assets in development but we haven't seen any action in this department.
2. There seem to be issues at the DLS with cross-asset inclusion; your colleagues can't seem to be arsed to fix them or to respond to posts.
3. I've pointed out several areas where you could ease the load on authors: allowing the use of mirrored meshes, allowing comments in config sources to temporarily suppress sections of the file, and so on. Again, no action yet.

pcas1986
July 25th, 2015, 05:12 AM
Tried putting the T:ANE folder and where ever you have the data folder on the exceptions list? Should be a Quarantine folder somewhere on your HD where these files go, may not any use as they are probably archived and can only be restored by the Trend program.
Yes, I understand all that. I just have to remember to put it on the exceptions list when I next install another version.

whitepass
July 25th, 2015, 08:15 AM
Found a bug: if you use the AI on your train you can never get the Driver HUD back.

1. Give train an AI "Drive" command.
2. Give train an AI "Stop" command.

WindWalkr
July 25th, 2015, 08:26 AM
It is ridiculous to add lm.txt to a 430 poly animated mesh attached to a culled attachment, we both know this.

As a general statement, that isn't correct. I completely agree that there are cases where it can be correct, but evaluating whether or not we're talking about such a case is very difficult, well beyond the scope of what our validation is intended to achieve at the current time.



If validation forces this I will work around the rules by making LOD2 an invisible mesh at the same cut off point as the mesh cull - a painful and pointless exercise all round.

Or you could also do what it's asking for and give it a decent LOD.

Also, to be clear, I haven't had a chance to look at your asset or some of the other reports of problems with the LOD validation yet, and until then I won't be able to give you an actual answer one way or the other.



A more constructive approach would be to continue to issue warnings, the more the merrier, but instead of rejecting inefficient assets under red flags you should assign a one-to-five-star performance rating and publish this on the DLS.

We're definitely working toward that, although it's still unclear exactly what form it will take. Having decent user-visible performance metrics is certainly a solid first step.



1. We have discussed methods of relaxing validation for assets in development but we haven't seen any action in this department.

Things take time. We've got several months of work lined up for everybody and while I will try to fit at least an initial implementation in my spare time, that's a pretty sparse resource and it's currently tied up with the performance analysis tools.



2. There seem to be issues at the DLS with cross asset inclusion, your colleagues can't seem to be arsed to fix them or to respond to posts.

I can't really comment on what others are saying or doing in this regard, but as above I'd like to get this build tested for such issues so that we can push a validation update.




3. I've pointed out several areas where you could ease the load on authors, allowing the use of mirrored meshes, allowing comments in config sources to temporarily suppress sections of the file and so on, again no action yet.

There are a lot more of you (content creators) than there are of us (N3V engineers). This is part of the reason that this group exists: to reduce the number of people who are directly making such requests, and to ensure that such things can be discussed and prioritised amongst the group members. Servicing a single request of the kind that comes up here can take an engineer anywhere from an hour to a week, and that's not even our primary function. Don't assume that just because we haven't yet serviced every single request you've put forward, we're sitting idle. If I don't intend that a certain request will be worked on, I will let you know; otherwise, feel free to bump the thread where you requested it every so often, but don't be offended if I don't give a response when there's nothing to be said beyond "it's on the list."

cheers,

chris

WindWalkr
July 25th, 2015, 08:28 AM
Found a bug: if you use the AI on your train you can never get the Driver HUD back.

1. Give train an AI "Drive" command.
2. Give train an AI "Stop" command.


Thanks for the heads-up. We had seen this one, but the repro I was aware of was quite a bit more complex. This will make life easier for whoever is fixing it :)

chris

andi06
July 25th, 2015, 10:33 AM
Or you could also do what it's asking for and give it a decent LOD.
Why would I waste my time doing that when culling will turn the whole mesh off and prevent the LOD from ever being seen?


There are a lot more of you (content creators) than there are of us (N3V engineers.)
Of course there are, you're building a record player, we're making the music.

My main point is that both your priorities and your presentation in this area are misguided. You are rejecting objects which fail to meet a set of arbitrary rules before the tools to establish compliance are properly available, and (whilst I acknowledge that this is not your intention) you seem to be making life as difficult as possible for those of us on this side of the fence. (How else can I interpret a system which forces me to provide something which I know will never be visible, or forces me to find a workaround to enable a compliant asset to pass validation?)

I also have limited available time and I resent using it to sort out this sort of problem. This should worry you too, because your business model depends on people like me providing the objects which make your game usable and pay your salaries (please interpret the word 'you' as meaning N3V rather than Chris Bergmann). It is very much in your interests that your unpaid workforce be as productive as possible - not buried under mountains of digital bureaucracy.

You should be leaving anything which isn't 100% rock solid as a warning until you have finished writing the game and providing the tools. Once this is done will be the sensible time to toughen up the rules (there won't be a meaningful quantity of TANE assets until then anyway). And you should also look at how the whole process can be turned around to provide an incentive to produce efficient work, rather than a stick to beat us around the head with.

WindWalkr
July 25th, 2015, 10:40 AM
Why would I waste my time doing that when culling will turn the whole mesh off and prevent the LOD from ever being seen?

Because, at the end of the day, that's what works. We've discussed why it is this way. I freely acknowledge that it is not ideal, but we've yet to come up with a better alternative.



My main point is that both your priorities and your presentation in this area are misguided. You are rejecting objects which fail to meet a set of arbitrary rules before the tools to establish compliance are properly available

Consider the rules one way of properly establishing compliance. Yes, it's a little crude, but it's better than no compliance.



You should be leaving anything which isn't 100% rock solid as a warning until you have finished writing the game and providing the tools.

You're confusing two unrelated issues here. The presence or absence of tools will not resolve your concern. Your concern is "most people need to do it that way, so the system forces it to be done that way, but in my case it's optional." You're quite correct, and that has nothing at all to do with tools, but is simply the case of validation working to a "lowest common denominator." Unfortunately we don't have a "this is Andi, he knows what he's doing" option ;-) If it were truly preventing you from moving ahead, then you'd have a lot more sympathy from me, but it's not- it's just forcing you to do things a certain way, which is admittedly slightly more work, but not more than a few minutes worth.

chris

andi06
July 25th, 2015, 11:19 AM
Unfortunately we don't have a "this is Andi, he knows what he's doing" option ;-) If it were truly preventing you from moving ahead, then you'd have a lot more sympathy from me, but it's not- it's just forcing you to do things a certain way, which is admittedly slightly more work, but not more than a few minutes worth.
Rather unkind of you, if partially true :-)

My comments use the simplified case of a passenger door mesh and I do accept that dealing with this specific error isn't particularly taxing but it is just one example of an issue that might well be much more serious in other assets.

The aim of the game is to reduce the drain on resources and you have provided more than one way of doing this (LOD / mesh culling / suppression of animation etc.). I didn't decide to provide these options, you did - and what's more, you have volunteered in another thread that you believe some of the existing facilities are insufficiently flexible. The Mr Hyde part of Windwalkr is making sensible noises.

Then along comes Jekyll, ignoring Hyde, trashing the whole concept of flexibility and enforcing the lowest common denominator at gunpoint - just because he needs to see the body but doesn't want to do a few sums. I'm suggesting that you keep Jekyll locked up for a while since you are working on tools which should make the whole topic more digestible for everyone.

I might point out that Jekyll is quite happy with this as part of an lm.txt file:


mesh("0.01") {
  name="autocoach-3.im";
}
mesh("0.02") {
  name="autocoach-2.im";
}
mesh("0.03") {
  name="autocoach-1.im";
}
mesh("1.00") {
  name="autocoach-0.im";
}

He needs to be tamed!

How hard is it to build a table of resources in use for an entire asset at four notional LOD levels and base your polycount validation on the totals rather than on statistics that only reveal a part of the issue? Surely this is insignificant compared to the time you take to deal with textures.

WindWalkr
July 25th, 2015, 11:49 AM
How hard is it to build a table of resources in use for an entire asset at four notional LOD levels and base your polycount validation on the totals rather than on statistics that only reveal a part of the issue?

That's exactly what we do. The devil's in the fine print.

chris

andi06
July 25th, 2015, 01:35 PM
That's exactly what we do.
If that is so then your tabulation for the asset in question will reveal that the total polycount (all meshes and all attachments) at each LOD is 22715 / 12794 / 2299 / 213. Fixing your error will shave 176 polys off LOD1 which just isn't worth the effort.

It also happens to be an area where appearance will suffer from a reduction so the only acceptable way of clearing the error might well be to ADD polys to LOD0.

Incidentally there are mesh-asset coupler meshes in this object at 700 polys apiece. Adding LODs to these would really have a benefit since they are used by a large number of dependent assets - validation is happy with them of course.


The devil's in the fine print.
You need to sort out your devils. Look at the overall LOD reductions being achieved and, if they fall short, by all means raise an error and offer suggestions as to where changes might be made. Maybe it's a little harder to do, but that is your problem, not mine - the current errors make you look silly and provoke intense irritation all round without achieving the desired outcome.

martinvk
July 25th, 2015, 04:12 PM
Tried to find an object in new session I created for C&O Hinton but when I click on the Main Menu / Find Object ... nothing happens. The other Main Menu selections all result in their dialog boxes appearing.

Edit: now it shows up, after I complain. See my waiting thread

martinvk
July 25th, 2015, 08:51 PM
Have the color code sequences been inverted? Instead of RGB it is BGR

Before, when I wrote fontcolor 255,255,0, I would get a yellow font; that is how it worked in TS12.
In TANE 77412 I get a light blue.

If I then write 255,0,0, it is red in TS12 and blue in TANE.

In TS12 with 255,255,0:
[attachment 713]

Same arrow in TANE with 255,255,0:
[attachment 714]

In TANE with 255,0,0:
[attachment 715]

WindWalkr
July 26th, 2015, 12:59 AM
I've updated the help page on the wiki for the performance analysis tool. Neither the tool nor the help page is finalised, but hopefully it will give you some ideas about where everything is headed.

chris

rumour3
July 26th, 2015, 02:31 AM
Regarding validation, my latest model includes the automatic running numbers tags (which works very nicely, thanks) but is build 3.7, and this isn't flagged as an error. Surely it should be, since it is an error in TS12?

Edit: Nice to see screenshots uploaded to the gallery now have shadows.

R3

pcas1986
July 26th, 2015, 04:20 AM
I've updated the help page on the wiki for the performance analysis tool. Neither the tool nor the help page is finalised, but hopefully it will give you some ideas about where everything is headed.

chris

While following this lead I noticed you changed my recent changes to the wiki LOD page and, in particular, meshes used in traincars (http://online.ts2009.com/mediaWiki/index.php/Level_of_Detail#LOD_on_Attachments_to_Traincars). No problems with that, but a few months back I made an AAR coupler that used LM, and when I committed it (in TANE, but I cannot remember the version) I got warnings that it should have animation or attachments. So I changed it to the mesh table method, which attracted no errors or warnings. After reading your wiki change of today I changed it back to LM and now I get no warnings or errors. Have you changed something in the validation recently?

In reading the changes to the wiki for that page, my understanding is that any attached meshes to a traincar must use LM. Is that correct?

Thanks for your efforts (on a Sunday!) on the Performance Analysis page. Much of it I don't really understand yet, but maybe when we get to see a working version it will fall into place. This part "and the draw call count per instance trends towards 1." I don't understand so some extra explanation might be helpful. I gather a draw call of 1 or near to it is good?

WindWalkr
July 26th, 2015, 05:28 AM
Regarding validation, my latest model includes the automatic running numbers tags (which works very nicely, thanks) but is build 3.7, and this isn't flagged as an error. Surely it should be, since it is an error in TS12?

It "should" be, but our validation is primarily focused on the current version (4.3, in this build) and isn't completely aware of the capabilities of older builds. It's something that would be nice to improve at some point.

chris

WindWalkr
July 26th, 2015, 05:55 AM
..AAR coupler that used LM, and when I committed it (in TANE, but I cannot remember the version) I got warnings that it should have animation or attachments. So I changed it to the mesh table method, which attracted no errors or warnings. After reading your wiki change of today I changed it back to LM and now I get no warnings or errors. Have you changed something in the validation recently?

Not that I'm aware of; or at least nothing relevant I can think of off-hand.



In reading the changes to the WiKi for that page my understanding is that any attached meshes to a traincar must use LM. Is that correct?

"must" is probably a poor word choice. "LM.txt" has been our LOD technique for traincars since ~2004, and if you need to use LOD on a traincar then it's the logical go-to. There are probably other techniques which could work, but LM has no disadvantages here so why complicate things?



This part "and the draw call count per instance trends towards 1." I don't understand so some extra explanation might be helpful. I gather a draw call of 1 or near to it is good?

Zero is best ;-)

There is a cost per draw call, much like there is a cost per polygon. We usually say that a draw call is worth about 500 polygons, but performance is not simply a case of adding up all of the values - a lot of these steps (draw calls, vertex processing, pixel processing, etc.) happen in parallel to some extent, so the cost is effectively the worst of each. If you have one draw call and one polygon, you're probably seeing the same performance as one draw call and 500 polygons. Similarly, ten draw calls and 500 polygons might perform similarly to ten draw calls and 5000 polygons. To make matters worse, it's hardware-dependent, so there's no simple answer to exactly what will happen except: try it and see. You want to minimise draw calls AND polygons, but often the worst thing you can do is focus on one at the expense of the other, unless you're already quite imbalanced. In most cases, reducing draw calls is more important than reducing polygons, but there are exceptions.
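[Editor's note] The "worst of each stage" cost model described above can be sketched in a few lines of illustrative Python. The 500-polygons-per-draw-call figure is the rule of thumb quoted in the post, not a measured constant, and real costs are hardware-dependent.

```python
# Sketch of the parallel-stage cost model: the stages overlap, so the
# effective cost is set by the slowest stage, not by their sum.
POLYS_PER_DRAW_CALL = 500  # quoted rule of thumb, not a measured value

def effective_cost(draw_calls, polygons):
    """Cost in polygon-equivalents: the worst of the two stages."""
    return max(draw_calls * POLYS_PER_DRAW_CALL, polygons)

# One draw call with 1 polygon costs the same as with 500 polygons:
assert effective_cost(1, 1) == effective_cost(1, 500) == 500

# Ten draw calls with 500 polygons cost the same as with 5000:
assert effective_cost(10, 500) == effective_cost(10, 5000) == 5000
```

This is why reducing only one of the two counts often buys nothing: whichever stage is already the bottleneck sets the cost.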

The highest LOD level can afford to be a little bit less efficient. They're right in your face, so you notice excessive optimisation, and there aren't really that many of them in the scene. That doesn't mean that you should be wasteful, but you do have a little more freedom. The important thing is that you want to reduce these overheads quickly as detail drops away.

The second or third LOD should tie up any minor inefficiencies at the expense of some features. Get rid of attached meshes other than bogies and large product queues. Get rid of minor animations such as fans or levers, probably even wheel animations unless there's something particularly visible going on (steam locos are an obvious exception here.) Remove "alphanumber" text because it's horribly inefficient. Remove any text effects that aren't particularly visible. Consider removing the less-visible coronas and that kind of thing. This is in addition to lowering the polygon counts.

The lowest LOD should be down to a single draw call and a minimal number of polygons. There should be no effects, the bogeys should be part of the main body, etc.

This is something that we've been doing a poor job of for traincars. Third-party creators, our own creators, and in fact the code support are all lacking here. We need to hit this pretty hard over the coming weeks- it'll obviously take a while for this to filter through the whole content set but we need to show that it can be done well, and that it's worth doing, and we need to teach people how to do it.

chris

pcas1986
July 26th, 2015, 09:56 AM
Chris,
Perhaps you might have a look at this thread (http://forums.auran.com/trainz/showthread.php?121682-error-checking&p=1425333#post1425333) in the Content Creation forum about validation of some of John Whelan's coaches. I downloaded one of the assets in question and tried to get a clean validation in 77412 after bumping the build number to 3.7. The validation puts out a whole bunch of errors about textures but they look fine to me.

pcas1986
July 26th, 2015, 10:07 AM
...
Zero is best ;-)
...

chris
Once again, thanks for the detailed reply. I think I followed the draw call explanation. :)

WindWalkr
July 27th, 2015, 03:05 AM
It is ridiculous to add lm.txt to a 430 poly animated mesh attached to a culled attachment, we both know this.

Another interesting point on this one. I've spent most of the day working on emulating Jet's attachment point culling. I've yet to encounter a single loco which actually uses attachment point culling. I'm sure that they exist, but in the majority of cases, the attachments are simply not culled. We really need to fix this, but in the meantime this validation is actually doing something useful.

chris

andi06
July 27th, 2015, 04:19 AM
Another interesting point on this one. I've spent most of the day working on emulating Jet's attachment point culling. I've yet to encounter a single loco which actually uses attachment point culling.
I used it in my Pullman cars <kuid:122285:550>...564>. However, due to another validation cockup, the use of the :Cull flag broke these assets in TC3, so I rather lost my enthusiasm :-)


... but in the meantime this validation is actually doing something useful.
No it isn't.

I gave you one example of an insane lm.txt in post 29 of this thread, here is another. This one is particularly useful because it makes this particular error go away:



version 1.0
offset = 0.01;
calcPoint = center;
multiplier = 1.0;
animationCutOff = 0.00;
renderCutOff = 0.00;
attachmentCutOff = 0.00;

mesh("1.00") {
  name="left-doors-0.im";
}

It should be clear from this that, while you are checking whether or not an lm.txt exists, you are not bothering to open the file to see that the outcome you are looking for is actually achieved. As I said, this is irritating me and making you look silly.

What you are NOT doing is what you claim in post 30 (tabulating total polycount). All you really need to do to verify that some sort of half-effective LOD scheme is implemented is to tabulate all LOD levels, check how big the lowest one is, and check the overall reduction per level. But you need to do this properly and accurately, and you need to publish details of the implementation - it certainly isn't accurate at present.
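[Editor's note] The whole-asset check being proposed here can be sketched as a small hypothetical validator: tabulate the total polycount at each LOD level, then test the size of the lowest level and the reduction between successive levels. The thresholds below are invented for illustration; they are not N3V's actual validation rules.

```python
def check_lod_totals(polys_per_lod, min_reduction=0.4, max_lowest=500):
    """polys_per_lod: whole-asset totals from LOD0 (highest detail) down.
    Returns a list of human-readable problems; empty means it passes.
    min_reduction and max_lowest are illustrative thresholds only."""
    problems = []
    # The lowest LOD must be genuinely cheap.
    if polys_per_lod[-1] > max_lowest:
        problems.append("lowest LOD still has %d polys" % polys_per_lod[-1])
    # Each successive level must shed a meaningful fraction of polygons.
    for i in range(len(polys_per_lod) - 1):
        higher, lower = polys_per_lod[i], polys_per_lod[i + 1]
        if lower > higher * (1.0 - min_reduction):
            problems.append("LOD%d -> LOD%d reduces by less than %d%%"
                            % (i, i + 1, int(min_reduction * 100)))
    return problems

# The structure quoted earlier in this thread passes comfortably:
assert check_lod_totals([22715, 12794, 2299, 213]) == []
```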

WindWalkr
July 27th, 2015, 05:08 AM
It should be clear from this that, while you are checking whether or not an lm.txt exists, you are not bothering to open the file to see that the outcome you are looking for is actually achieved. As I said, this is irritating me and making you look silly.

You're wrong again; the LM files are definitely being loaded and tested. I'm not claiming that the current validation code is perfect - any problem cases should definitely be brought to our attention. (I'm having a look at the asset you emailed me currently, and I agree something weird is going on there - not sure what yet.)



What you are NOT doing is what you claim in post 30 (tabulating total polycount)

Repeating this doesn't make it accurate. That's exactly what we're doing.



But you need to do this properly and accurately

And this is the trick. We need to make sure that any problems are found and resolved.

chris

andi06
July 27th, 2015, 05:58 AM
You're wrong again; the LM files are definitely being loaded and tested.
I'm not saying you're not opening them at all, I'm saying that you are not testing them properly. I can understand the post 29 example getting past your code but the one in post 42 only references one mesh and clears the error whilst doing nothing to actually address the issue.


Repeating this doesn't make it accurate. That's exactly what we're doing.
This wasn't well worded. I mean that you are not drawing sensible conclusions from the overall mesh count. For validation purposes you should be happy with an overall LOD structure of 22715 / 12794 / 2299 / 213. Given these polycounts, and since there is more than one way of skinning the cat, there is little or no benefit in raising the door meshes as an error.


We need to make sure that any problems are found and resolved.
I rather thought that I was trying to do that.

I did sit down and look at this issue (and your validation requirements for it) in some detail, my conclusions were:

1. The door meshes (which include multiple doors) exist only at traincar LOD0 and LOD1, at lower levels they are culled (or will be).
2. Their polycount cannot be reduced at traincar LOD1 without prejudicing appearance.
3. Their polycount is also adequate at traincar LOD0 and there is no visual benefit in increasing detail at this level.
4. The number of polys involved (443 x 20% = 89) is of little or no significance relative to the overall polycounts.
5. The conclusion is that to meet this error I need to ADD 90 polys per mesh at LOD0. Since the existing LOD0 is visually fine these need to be hidden unless I want to make unnecessary work for myself.

I know that this is a minor issue in the overall scheme of things, but if it occurs once it's going to occur again ... and again, and again.

A specific query:
Is there any benefit (beyond any polycount savings) in dropping the bogie meshes and incorporating them in the traincar? I ask because the bogie for this asset is widely used as a dependency in other authors' traincars and must therefore have full LOD (down to LOD2 = 16 polys). The bogies can be culled anyway and a mesh representation added to the main model, but is this actually worth doing given that it will not save any polys?

WindWalkr
July 27th, 2015, 06:48 AM
I'm not saying you're not opening them at all, I'm saying that you are not testing them properly.

Fair enough, I can absolutely agree that there are cases which don't work properly.



..the one in post 42 only references one mesh and clears the error whilst doing nothing to actually address the issue.

Any chance you can shoot me the asset you used here?



I mean that you are not drawing sensible conclusions from the overall mesh count. For validation purposes you should be happy with an overall LOD structure of 22715 / 12794 / 2299 / 213.

Yeah. The asset you emailed in is showing 12 / 12 / 12 / 12 which obviously isn't well liked. The root cause is something to do with the aliasing; one of the validation code-paths is not handling that as it should. (Unfortunately validation isn't particularly straightforward about how these things are done, so it's possible for small changes to throw us from one codepath to a quite different one with different code involved. This is why apparently small changes can have a big impact.)



I rather thought that I was trying to do that.

That you are, and most helpfully too. It bears keeping in mind that the process is not complete yet, though :)




4. The number of polys involved (443 x 20% = 89) is of little or no significance relative to the overall polycounts.

I'm not sure of the specifics of the asset you're referring to here; it might be a good idea to send it in if you want to talk in this kind of detail. (Or if you've already sent it in, let me know which one you're referring to.)



Is there any benefit (beyond any polycount savings) in dropping the bogie meshes and incorporating them in the traincar? I ask because the bogie for this asset is widely used as a dependency in other authors' traincars and must therefore have full LOD (down to LOD2 = 16 polys). The bogies can be culled anyway and a mesh representation added to the main model, but is this actually worth doing given that it will not save any polys?

Yes-
1. You potentially cut out the extra load of updating the bogey visuals, including LOD and season calculations, positioning, animation, etc. Exactly how much is saved here is questionable and VERY dependent on the exact details of the content and the code, but we can accept that there will be a modest saving.
2. You cut out the draw calls involved in the bogeys. At the very least, a train is going to have two bogeys so this amounts to two draw calls (ignoring the cost of shadowing, etc.) Depending on where we go with instancing, this might come down to one draw call in the future or even less (one draw call per bogey per lod in the scene, perhaps?) but that's not something that exists currently so there's no point in speculating too far. Given that we ideally want to be down to one draw call for the train body at low LOD, you're tripling the cost of the low LOD by keeping the bogeys independent.

None of this stuff adds up to much in one-off cases. It's when you have an entire scene active that you will notice the difference.
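[Editor's note] A back-of-envelope illustration of the scene-level arithmetic above; the car and bogey counts are invented for illustration.

```python
# Per-car savings are small, but they multiply across the scene.
cars_in_scene = 100
bogeys_per_car = 2

# Independent bogeys: body + two bogeys = 3 draw calls per car at low LOD.
independent_calls = cars_in_scene * (1 + bogeys_per_car)

# Bogeys merged into the low-LOD body: 1 draw call per car.
merged_calls = cars_in_scene * 1

# Keeping the bogeys independent triples the low-LOD draw call count.
assert independent_calls == 3 * merged_calls
```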

chris

pcas1986
July 27th, 2015, 06:56 AM
Currently, there is only one cull point specified in a LM file. It seems to me that that particular design point was for bogies. Back in post #38 you suggested dropping attachments early and I can see some value in that assuming I understood you correctly. For a traincar, if you wanted to drop some attachments earlier than a bogie, there is no way to do this. Are there any plans to specify cull points for specific attachments?

WindWalkr
July 27th, 2015, 07:05 AM
Currently, there is only one cull point specified in a LM file. It seems to me that that particular design point was for bogies. Back in post #38 you suggested dropping attachments early and I can see some value in that assuming I understood you correctly. For a traincar, if you wanted to drop some attachments earlier than a bogie, there is no way to do this. Are there any plans to specify cull points for specific attachments?

Yes. I've already mentioned this one- we can definitely see the need for this.

My current expectation is that, in addition to the :cull support carried forth from Jet, we'll support removing attachment points from the individual lower-LOD IM files (but NOT adding them.) It's also possible for us to take this further and add explicit support in the config.txt, but that may prove unnecessary.

chris

andi06
July 27th, 2015, 07:23 AM
The asset you emailed in is showing 12 / 12 / 12 / 12 which obviously isn't well liked. The root cause is something to do with the aliasing; one of the validation code-paths is not handling that as it should. (Unfortunately validation isn't particularly straightforward about how these things are done, so it's possible for small changes to throw us from one codepath to a quite different one with different code involved. This is why apparently small changes can have a big impact.)
I was expecting you to say 0 / 0 / 0 / 0 and the correct answer is 4482 / 2187 / 838 / 164. No coconuts for you either way. :-)


It bears keeping in mind that the process is not complete yet, though :)
Quite, that's why I'm suggesting that it's premature to include this in validation.


Any chance you can shoot me the asset you used here?
I'll bundle it up.

Your comments on bogies are understood.

andi06
July 27th, 2015, 07:29 AM
My current expectation is that, in addition to the :cull support carried forth from Jet, we'll support removing attachment points from the individual lower-LOD IM files (but NOT adding them.) It's also possible for us to take this further and add explicit support in the config.txt, but that may prove unnecessary.
This could be quite difficult to manage in the source files. (I generally keep all the attachment points on a separate layer so that I can hide them easily while working on the mesh and also make sure that they are all present in every export). Why not expand the :Cull syntax:

a.point:Cull // cull at the level specified in lm.txt
a.point:Cull2 // cull at LOD2 and below
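[Editor's note] One possible reading of this proposal, as a hypothetical sketch — the `:Cull<N>` suffix is a forum suggestion here, not an implemented Trainz feature, and the fallback behaviour for a bare `:Cull` is assumed from the post:

```python
import re

def cull_level(attachment_name, default_level=None):
    """Return the LOD level at/below which the attachment is culled,
    or None if it is never culled. A bare ':Cull' falls back to the
    level implied by lm.txt's attachmentCutOff (default_level here)."""
    m = re.search(r':cull(\d*)$', attachment_name, re.IGNORECASE)
    if not m:
        return None
    return int(m.group(1)) if m.group(1) else default_level

assert cull_level("a.point:Cull", default_level=3) == 3  # lm.txt level
assert cull_level("a.point:Cull2") == 2                  # explicit level
assert cull_level("a.point") is None                     # never culled
```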

WindWalkr
July 27th, 2015, 07:29 AM
Quite, that's why I'm suggesting that its premature to include this in validation.

*shrug* that's why we're seeding this test build to you guys, so you can tell us when things aren't working for you.

The last one seems to be working now, btw. Check it out in the next build and let me know.

chris

WindWalkr
July 27th, 2015, 07:31 AM
This could be quite difficult to manage in the source files. (I generally keep all the attachment points on a separate layer so that I can hide them easily while working on the mesh and also make sure that they are all present in every export). Why not expand the :Cull syntax:

a.point:Cull // cull at the level specified in lm.txt
a.point:Cull2 // cull at LOD2 and below

Could do. It's a bit of an ugly hack, but then :cull always was, so *shrugs*.

chris

whitepass
July 27th, 2015, 01:54 PM
Visible loads on flat cars are being culled too soon; this is very noticeable.
Could we have lm.txt-type LOD on visible loads?

rumour3
July 27th, 2015, 02:47 PM
Visible loads on flat cars are being culled too soon; this is very noticeable.
Could we have lm.txt-type LOD on visible loads?

+1

Culling loads in the current build is unacceptable at the default settings.

R3

pcas1986
July 27th, 2015, 08:28 PM
Yes. I've already mentioned this one- we can definitely see the need for this.

My current expectation is that, in addition to the :cull support carried forth from Jet, we'll support removing attachment points from the individual lower-LOD IM files (but NOT adding them.) It's also possible for us to take this further and add explicit support in the config.txt, but that may prove unnecessary.

chris

Thanks - I must have missed that in the noise.

I can see the need for a test asset that visually demonstrates when events, such as a cull, occur. I made such an asset for procedural track so I could follow the LOD tree changes. Maybe the tools in the Preview Asset will obviate that but I'll have to wait and see.

andi06
July 28th, 2015, 12:47 AM
I've just seen a recurrence of the CTD on exiting Quickdrive which has been absent up until now on this build.

Trigger was as before:
1. Load a route, go to QuickDrive using the menu button
2. Call Main Menu/Exit Driver
3. Result is a Windows dialog: 'Trainz has stopped working' etc.

WindWalkr
July 28th, 2015, 12:56 AM
I've just seen a recurrence of the CTD on exiting Quickdrive which has been absent up until now on this build.

You're not getting crash dumps from these, right?

chris

andi06
July 28th, 2015, 01:32 AM
I'm afraid not.

pcas1986
July 28th, 2015, 01:49 AM
I got one too although it took a couple of tries. In my case I opened a route for editing and selected QuickDrive as Andi said. I was prompted to save a default session which I did, then I ignored the QuickDrive windows and just went to the main menu and exited Driver. Then I got the Microsoft warning. No dump file.

martinvk
July 28th, 2015, 06:26 AM
Tried to provoke a CTD in 77412 but it doesn't want to.
Kickstarter County
Quick Drive
close dialog boxes
minimize (rather than close) Quick Drive
Main Menu / Exit Driver
Save
Back to the session selection menu

Same sequence but instead
Main Menu / Exit Game
Game window closes
CM window and Launcher are still open