AI is coming to real trains (but I hope that it is not Alister)

It may be good, but I would hate to think what would happen if the connection between the computer and the train goes out or is lost for whatever reason.

Right now, trains are a bit safer with an actual driver than with a computer in control... I personally think AI-driven trains aren't safe. Just my view.
 
It may be good, but I would hate to think what would happen if the connection between the computer and the train goes out or is lost for whatever reason.

Right now, trains are a bit safer with an actual driver than with a computer in control... I personally think AI-driven trains aren't safe. Just my view.

Valid points. I would personally rate AI-controlled trains as safer than AI-controlled cars because there are no steering issues and the rail corridors would be isolated from pedestrian traffic.

The main reason for the push to AI control in anything, not just trains, is to eliminate the human factors - usually distractions, fatigue, medical incidents and poor response times.

The same problem of what to do with a communication or electrical failure exists with the current automated "driverless" trains - the train would go into "safe mode" and be brought to a stop as safely and as quickly as possible. This also happens with trains that have a human in the driver cab.
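For anyone curious what that "safe mode" behaviour boils down to, here is a rough sketch in Python, purely illustrative, with made-up class names and timings, of the dead-man/watchdog idea: if the control link goes quiet for too long, the train brakes itself and raises the alarm.

```python
import time

LINK_TIMEOUT_S = 2.0  # hypothetical: maximum silence before we assume the link is lost

class TrainController:
    def __init__(self):
        self.last_command_time = time.monotonic()
        self.safe_mode = False

    def on_command_received(self, command):
        """Called whenever a valid packet arrives from the remote control centre."""
        self.last_command_time = time.monotonic()
        if not self.safe_mode:
            self.apply(command)

    def tick(self):
        """Runs continuously on board, independent of the radio link."""
        silence = time.monotonic() - self.last_command_time
        if silence > LINK_TIMEOUT_S and not self.safe_mode:
            self.enter_safe_mode()

    def enter_safe_mode(self):
        self.safe_mode = True
        self.apply_full_service_brake()   # stop as quickly as safely possible
        self.broadcast_alarm()            # alert signalling and nearby trains

    # These would talk to real hardware; left as stubs here.
    def apply(self, command): ...
    def apply_full_service_brake(self): ...
    def broadcast_alarm(self): ...
```

The key point is that the fail-safe runs on board, so it does not depend on the very link that has just failed.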
 
However, the application would need to be very sophisticated if used anywhere outside a strictly controlled environment, e.g. rapid transit entirely enclosed in tunnel and behind platform edge doors. A driverless train is not going to be able to broadcast an emergency call if there is a tree down blocking the opposite line or a trespasser inside the boundary fence. A driver-operated train can put the brake in emergency if someone falls off the platform as it is running in and hopefully stop short (unless it is a suicide at point-blank range), but a driverless train would continue to the designated stopping point. Even if you provide emergency plungers on the platform, by the time someone realises what is occurring and reacts, it is probably too late.
 
However, the application would need to be very sophisticated if used anywhere outside a strictly controlled environment, e.g. rapid transit entirely enclosed in tunnel and behind platform edge doors.....

Again, all valid points but you qualified your response with "if used anywhere outside a strictly controlled environment" such as tunnels and platform edge doors. These exist already for trains with human drivers and yet there are still incidents with humans in control. The main reason for using AI in train control is to help eliminate the human errors that are still the most common cause of safety incidents.

But I agree, if the problems that N3V are having implementing AI (admittedly, an AI that attempts to "do everything" and not just drive the train) are any indication, then it will not be "just a few short years" as the researchers are predicting.
 
Just heard the tragic news about the first pedestrian to be killed by an AI-controlled car (an Uber) in Arizona. The pedestrian was apparently walking her bike across a four-lane road at night, and not at an authorised crossing, when she was struck. She later died of her injuries in hospital.

As many have predicted, this incident brings up the issue of culpability. Is the software, the car manufacturer or the "passenger" responsible for the safe operation of the vehicle?
 
Driverless trains are not something new.

I remember back in 2007 hopping aboard a 'driver-less train' at Zurich Airport to get to another Terminal. It was a dedicated line and very unlikely to be involved in an accident. We felt quite safe and unique.

To have them interact with day-to-day road traffic, I think there could be some underlying problems to address.

Cheers,
Roy
 
Driverless trains are not something new.

The vast majority of "driverless" trains actually do have a human driver, located in a comfortable office building nearby or thousands of kilometres away. I can't speak for those at Zurich airport (or other places), but if the train is just shuttling back and forth on the same track without any switching or other trains to deal with, then it may well be a "computerized train" that is following a set of programmed instructions (much like the elevators in a building), which is not the same as a train under AI control.
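To illustrate the difference, a fixed-program shuttle of that kind can be little more than a scripted loop, no sensing, no decisions, much like an elevator. A toy sketch (all names and values invented):

```python
# A "computerized" shuttle: a fixed script, no decisions, much like an elevator.
SHUTTLE_PROGRAM = [
    ("depart", "Terminal A"),
    ("cruise", 40),          # km/h on the dedicated guideway
    ("stop",   "Terminal B"),
    ("dwell",  60),          # seconds with doors open
    ("depart", "Terminal B"),
    ("cruise", 40),
    ("stop",   "Terminal A"),
    ("dwell",  60),
]

def run_shuttle(execute_step):
    """Repeat the same canned instructions forever; nothing is ever 'decided'."""
    while True:
        for step in SHUTTLE_PROGRAM:
            execute_step(step)
```

An AI-controlled train, by contrast, would have to build that plan itself from sensor input, the timetable and the signalling state, which is a very different problem.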

But you are right, mixing AI cars and walking humans (who are often looking at their iPhones and not the road) on the same road is an accident just waiting to happen. I suspect that the only way to eliminate the conflict is to completely isolate them from each other.
 
Just heard the tragic news about the first pedestrian to be killed by an AI-controlled car (an Uber) in Arizona. The pedestrian was apparently walking her bike across a four-lane road at night, and not at an authorised crossing, when she was struck. She later died of her injuries in hospital.

As many have predicted, this incident brings up the issue of culpability. Is the software, the car manufacturer or the "passenger" responsible for the safe operation of the vehicle?

I think the worst part of this story is that there was a person sitting in the driver seat... a felon, hired by Uber. Why would you test something like this without a member of the test team at the wheel, one who is not a felon? It just seems out of line. Might as well put the passenger in the driver seat. This is still in the testing phase, right?
 
Yes, I understand that it is still in the testing phase but, like the TANE betas now being tested, perhaps this was more of a "real world" test with a non-programmer, non-technician "at the wheel" (but one big difference here is that a bug in TANE is not going to kill you despite what some posters here claim).

It is also possible that having a technician in the driver seat may not have made any difference in this case.

As for the "felon" (I had not heard that bit), I don't think we can draw any conclusions. It would not surprise me if there was an acute shortage of "non-geek" volunteers for the job of riding as a passenger in a driverless car, for some reason or another.
 
It is also possible that having a technician in the driver seat may not have made any difference in this case.

I just saw the video from the car dashboard camera and, based on what I saw, if I had been driving I would NOT have been able to react fast enough to avoid the collision or even to stop in time. The pedestrian just suddenly "appeared" in the headlights directly in front of the vehicle. From what I can see, the street lighting is below what I would have expected for a major road (although I concede that that may be a limitation of the camera itself).

However, it does not explain why the sensors in the vehicle did not detect the pedestrian and take evasive action.
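To put some rough numbers on that (my own assumptions, not figures from the investigation): if the car was doing somewhere around 60-65 km/h, a human driver needs roughly 50 metres to stop once you add reaction time to braking distance, which is likely more than the visibility the dashcam footage suggests.

```python
def stopping_distance(speed_kmh, reaction_s=1.5, decel_ms2=7.0):
    """Reaction distance plus braking distance, assuming dry pavement and an alert driver."""
    v = speed_kmh / 3.6                 # convert km/h to m/s
    reaction = v * reaction_s           # distance covered before braking even starts
    braking = v ** 2 / (2 * decel_ms2)  # v^2 / 2a
    return reaction + braking

# Assuming a speed in the region of 65 km/h:
print(round(stopping_distance(65), 1), "m")  # roughly 50 metres in total
```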
 
I just saw the video from the car dashboard camera and, based on what I saw, if I had been driving I would NOT have been able to react fast enough to avoid the collision or even to stop in time. The pedestrian just suddenly "appeared" in the headlights directly in front of the vehicle. From what I can see, the street lighting is below what I would have expected for a major road (although I concede that that may be a limitation of the camera itself).

However, it does not explain why the sensors in the vehicle did not detect the pedestrian and take evasive action.

I saw that too. She was wearing dark clothing and crossing a dark street. There's no way a driver could have seen her, but, as you said, it doesn't explain why the computer sensors didn't pick up the human in the way.
 
Another automated job going away. But don't worry, an upstanding, high-morality politician will have an advert on Facebook in a short time promising to restore those jobs.
Dick
 
Another incident, involving a car with an "autopilot" (not the same as "driverless") that resulted in the death of the driver, a 38-year-old software engineer from Apple, has just had its details released from the crash investigation. The incident occurred last month and involved a Tesla test vehicle.

The "autopilot" control being tested still requires drivers to maintain road vigilance and keep their hands on the steering wheel. The computer maintains the vehicle speed, can change lanes and self park the vehicle. The driver, according to the computer logs, removed his hands from the steering wheel and ignored several subsequent warnings to return his hands to the steering wheel. He also did nothing to stop the vehicle crashing into a concrete barrier.
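The behaviour described, hands-off detection followed by repeated warnings, is essentially an escalation timer. A hypothetical sketch of that kind of logic (not Tesla's actual implementation, and the thresholds are invented):

```python
from enum import Enum, auto

class AlertLevel(Enum):
    NONE = auto()
    VISUAL = auto()     # icon on the dash
    AUDIBLE = auto()    # chime
    DISENGAGE = auto()  # hand control back / bring the car to a stop

# Hypothetical thresholds, in seconds of continuous hands-off driving.
THRESHOLDS = [(5, AlertLevel.VISUAL), (15, AlertLevel.AUDIBLE), (30, AlertLevel.DISENGAGE)]

def alert_for(hands_off_seconds):
    """Escalate the warning the longer the driver's hands stay off the wheel."""
    level = AlertLevel.NONE
    for threshold, alert in THRESHOLDS:
        if hands_off_seconds >= threshold:
            level = alert
    return level

assert alert_for(2) is AlertLevel.NONE
assert alert_for(20) is AlertLevel.AUDIBLE
assert alert_for(45) is AlertLevel.DISENGAGE
```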

This is the second fatal crash this year involving a Tesla test vehicle under "autopilot" control.

There is, as yet, no official explanation as to why these crashes have occurred. Apart from possible "medical incidents" where the drivers have become incapacitated, I would theorise that some drivers may be placing far too much faith (and their lives) in the hands of these semi-automated systems. Although one would not expect experienced software and hardware engineers to fall into this trap, some probably do. In the time I spent working in AI support I encountered numerous cases where end users assumed that the computer and/or software was capable of feats of "mind reading".
 
Just heard the tragic news about the first pedestrian to be killed by an AI-controlled car (an Uber) in Arizona. The pedestrian was apparently walking her bike across a four-lane road at night, and not at an authorised crossing, when she was struck. She later died of her injuries in hospital.

As many have predicted, this incident brings up the issue of culpability. Is the software, the car manufacturer or the "passenger" responsible for the safe operation of the vehicle?

Well, it now appears that the passenger in a driverless vehicle will be held responsible for any deaths or injuries that are caused by the vehicle. In an official report just released on the above fatal collision, the passenger, who was watching a streamed episode of "The Voice" at the time of the collision, may be charged with vehicular manslaughter. If she had not been distracted then she would have been able to safely stop the vehicle in time to avoid the collision, according to the report.

Uber have also announced that it is "absolutely prohibited" for passengers to be using their phones or mobile devices while riding in one of their driverless cars.
 
................................

Uber have also announced that it is "absolutely prohibited" for passengers to be using their phones or mobile devices while riding in one of their driverless cars.

That will be difficult to enforce, I think. It's proving impossible to stop people using their devices while actually driving, so are they going to stop using them when they are just sitting there, not driving?

Just my two bitcoins worth,

Mick
 
That will be difficult to enforce, I think. It's proving impossible to stop people using their devices while actually driving, so are they going to stop using them when they are just sitting there, not driving?

I suspect that it is more to cover their own backsides if a passenger has an accident while using a mobile device. They can point to the "conditions of use" and the blame will most likely fall upon the passenger.
 
Personally I think it would not be doable for freight trains. Passenger services, yes, because that is one schedule with one preset consist which does not change. Freight, on the other hand, varies from day to day and the AI would also need to know which cars go where and where the customer wants them delivered. Kind of a lot to ask from AI, in my opinion. Remotely operated locos are fine in my opinion. They allow one guy to operate without needing a conductor, although they still have one. They are extremely useful in yards and such, but I can't see them being put into any kind of mainline service.
 
They are extremely useful in yards and such, but I can't see them being put into any kind of mainline service.

Remotely controlled mainline freight trains, actually ore trains with hundreds of ore wagons, are already operating in the Pilbara region of north-western Australia, with the journey between mine and port about 275 km (170 mi). The trains are controlled from the city of Perth, about 2,000 km further south. This region also holds the world record for the longest train: 682 fully loaded ore wagons and 8 locos, with a gross weight of almost 100,000 tonnes (metric) and a length of just over 7.3 km.

While AI-controlled freight trains are not yet a reality, it is in isolated and remote operations, such as the ore lines in the Pilbara, that they are most likely to make their first appearance. There are no pedestrians and no towns, very few road crossings, just the odd kangaroo.
 