A lot of this issue comes down to company management and an educated user base. Management does not want to spend money on upgrading infrastructure, just as governments do not want to, or only reluctantly, invest in roads and railways. Management squeezes every cent out of the company and never puts anything back in. IT is there to supply services and needs to be able to upgrade; however, IT is considered a cost center and is treated by the bean counters as an operating liability on the books. This means less money is allocated to IT than is sometimes needed to build up and maintain the infrastructure. IT is also often lumped under another department's cost center, such as operations, which means it may see even less than the minimum required for system maintenance and upgrades. This was the problem at one location where I worked.
Now the problem here is that management will never allow a full upgrade. I know: I worked for many companies, big and small, one being Oracle, the others being Polaroid, a former Polaroid spin-off, and much smaller organizations. In every case we had to beg, borrow, and steal for a budget to support what we already had, whether it was new UPS batteries, backup tapes, or even replacement fans and hot-swap hard drives for the file servers. There was never any planning for spare parts, so if a hard drive in the RAID died, I had to buy it myself and ask for reimbursement. Hard drives for the Compaq server were only $250 to $300 each!
At one of the places mentioned, my manager thought the UPS batteries were optional and would not authorize spending $300. Yes, $300 with the trade-in of the old batteries, and that included free shipping both ways. So, no replacement batteries, and we lost a server to a major surge, which took out the RAID array, the SCSI controller, and the motherboard. I had warned him multiple times, but he ignored my requests. Warranty coverage? I had to fight for that too, didn't get it, and so we were stuck. But when it came to executive bonuses, the hands were out, while the rest of us got nothing, and we were stuck using circa 2000-2001 PCs on our desktops in 2010, which were so slow I hated working on them.
Oracle wasn't much better. They only upgraded from Windows XP to Windows 7 in 2012. Yes, Windows 7, and they are probably still "upgrading" their machines. Scary!
Just because users "know" how to use a computer because they have one at home doesn't mean they know safe computing practices! Sometimes safe computing needs to be taught, and other times it needs to be enforced using group-policy settings that disallow access to specific functions on the PC, such as thumb-drive access via the USB ports, along with locked-down proxy settings and network security policies. These policies, together with user education, help prevent attacks such as this one.
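To make the enforcement side concrete: on a Windows domain, removable-storage access is normally controlled through Group Policy (under Computer Configuration, Administrative Templates, System, Removable Storage Access); on a standalone machine, the USB mass-storage driver itself can be disabled with a registry fragment like the one below. The value 4 means "disabled"; setting it back to 3 re-enables the driver. This is a sketch of one common approach, so test it on a non-production machine first.

```
Windows Registry Editor Version 5.00

; Disable the USB mass-storage driver so thumb drives will not mount
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR]
"Start"=dword:00000004
```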
The current attacks are triggered by clicking on a message with a bogus attachment or link in it. While I worked in IT, I took the time to educate my users. In the smaller companies, I trained users on what to do with email, such as being suspicious of attachments they were not expecting. It got to the point where they would call me if they got a suspicious email and have me look at it before a decision was made, which was usually to delete it. In the end we had very minimal malware issues, and those were handled by the antimalware software installed on all the file servers and the Exchange Server. Much later, at Oracle, I continued the practice of training people. When rounds of new hires came in, I took about 20 minutes out of my busy schedule to attend their orientation classes and give a quick lecture on malware and safe computing. This didn't prevent all attacks, but it cut down on many of them.
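The kind of screening I taught users can even be roughly approximated in software, for example in a mail-gateway filter. A minimal sketch in Python (the extension list is illustrative, not exhaustive, and no filter replaces user judgment):

```python
# Commonly abused executable extensions seen in mail-borne malware lures.
# Illustrative only; a real gateway filter would use a maintained list.
RISKY_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".bat", ".cmd", ".lnk"}

def is_suspicious(filename: str) -> bool:
    """Flag attachment names that end in a risky executable extension,
    including 'double extension' lures such as Invoice.pdf.exe."""
    name = filename.strip().lower()
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)
```

A name like `Invoice.pdf.EXE` is flagged, while `report.pdf` passes; the point is to surface the attachment for a human decision, just as my users did by calling me.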
Then there are proprietary software and proprietary hardware requirements. These are not uncommon, especially with specialized equipment such as medical devices, database software, and other hardware such as engraving machines, embroidery machines, CNC machines, and RIPs for printing.
In cases such as these, the equipment should be isolated from the internet, if it has to be networked at all (as with a RIP), and not treated as a user PC. In other words, these are specialized devices, such as a RIP driving an imagesetter, not user workstations for browsing the internet.
The other solution, for programs such as database front ends, is to run virtual machine images on a modern operating system such as Windows 10. A VM is essentially a PC running in its own environment, contained within a single file structure. Windows XP, Windows 2000, Linux, and other operating systems can then run indefinitely without worrying about the hardware underneath. This does present an issue with hardware controllers, so it's important to keep those off the internet and behind firewalls and other protection. Handled properly, VMs can be backed up via snapshots and restored should some kind of problem arise; the complete machine can be brought back within minutes, so situations such as this one are mitigated.
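As one illustration of that snapshot-and-restore cycle, here is what it might look like with VirtualBox (one hypervisor among several; the VM name "XP-RIP" is hypothetical):

```
# Take a snapshot of the VM before any risky change
VBoxManage snapshot "XP-RIP" take "pre-update" --description "known-good state"

# If something goes wrong, power off and roll back to the known-good state
VBoxManage controlvm "XP-RIP" poweroff
VBoxManage snapshot "XP-RIP" restore "pre-update"
VBoxManage startvm "XP-RIP"
```

Other hypervisors (Hyper-V, VMware) offer equivalent snapshot commands; the point is that recovery becomes a minutes-long rollback rather than a rebuild.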
Then there are backups. Backups should be done daily and kept offsite, whether in the cloud, in a drive library somewhere, or in a vault. Backups should not be set to overwrite newer data, and thus need to be managed, rotated, and maintained. There's no point in overwriting good data with an infection, which happened to one hapless user who backed up her documents to the cloud. Her cloud backup was set to sync her data, which in the end replicated the ransomware right over her only good backup. So much for a good backup, and that brings up another point, which I'll get to. With backups kept off the systems, the data can be restored and a clean system brought online. The catch is that the IT staff need to take the time to test their backups: a backup is no good if it can't be restored. Data needs to be restored to a test system to check the integrity of the backup regimen.
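The no-overwrite and rotation rules above can be sketched in a few lines of Python; the function name and the default seven-copy retention are my own illustrative choices, not a prescription:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_file(source: str, backup_dir: str, keep: int = 7) -> Path:
    """Copy `source` into `backup_dir` under a timestamped name, then
    prune the oldest copies beyond `keep`. New backups never overwrite
    old ones, so an infected file can't clobber a known-good copy."""
    src = Path(source)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S-%f")  # unique per call
    dest = dest_dir / f"{src.name}.{stamp}"
    shutil.copy2(src, dest)                   # copy, preserving metadata
    copies = sorted(dest_dir.glob(src.name + ".*"))
    for old in copies[:-keep]:                # drop all but the newest `keep`
        old.unlink()
    return dest
```

Note that this only creates copies; restores should still be exercised against a test system, as above, since a copy that exists is not the same as a copy that restores.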
I see these attacks as exposing faults in organizations' IT infrastructure, faults that could have been prevented had they followed common sense and standard IT practices. All operating systems have bugs and vulnerabilities, whether Android, Apple's OS X, or even the 'nix-based systems. Right now these attacks are aimed at Windows machines, in particular unpatched ones; that does not mean there will be no attacks aimed at other operating systems in the future. Mitigating a malware attack takes a multi-pronged approach. The IT staff can do all they can, if they have the resources, but they cannot do it alone. They need management being proactive and giving them the resources to keep the infrastructure up to date, and they need a user base that is educated and does not click on things just because it can. Management buy-in also means the user base will be willing to follow the rules. The IT staff shouldn't have to be reactive; a proactive approach by everyone is the most successful.
When I was at Oracle from 2010 to 2012, I took a reactive IT environment, where the team constantly ran around putting out fires, and turned it into a proactive one. We ensured that only specific devices had access to the internet, and only to the resources required to get the job done. This caused some grumbling and complaining from users initially, but it made for a much safer working environment; a safer computing environment, that is. Machines that had previously had internet access, such as a kiosk computer, were locked down, requiring specific user accounts and permissions. That made things a bit inconvenient for management, but in the long run it ensured unauthorized users could not access the network from those PCs. In the end it gave the IT staff peace of mind and made our lives much easier, so we could focus on urgent support issues such as remote-user connectivity and upgrades, which were in constant motion with nearly 700 users at the location.
Now for a real-world example of what should not be done, which my brother told me about and which still gets my hackles up when I think about it. My dad was at a local hospital for a CT scan. The software for the equipment ran on a Windows-based computer, most likely Windows XP or Windows 2000. Instead of this machine being isolated from the internet (meaning no browser allowed, or access locked down to the intranet only), the operator and other staff were on Facebook, browsing images that had been sent to one of them. Seriously! A perfect way to infect a medical device. It's cases like this that most likely brought down the systems at the NHS in the UK, as well as at other organizations!