It would seem that a computer glitch has caused problems for a large chunk of the RBS group's banking operations. Looking around, this appears to have been caused by systemic management failure at the bank.
Whilst this hasn't affected me personally yet, I wondered whether anybody had any other opinions? Also, does anybody have ideas on better organisations with whom to hold a bank account?
If I were a customer, the first thing I would do after regaining access to my funds would be to close the account tout de suite and stuff the funds under the mattress.
The extraordinary incompetence shown in this affair suggests that they are not even capable of cutting notches in sticks...
Should this kind of thing really be happening in 2012 at a multi-billion-pound company?
Wow, the banks opened on Sunday, whoopee. My daughter was supposed to have her wages paid in on Friday and they were not there, and she had to pay the balance of her holiday on the Saturday. Luckily she had me to pay it for her.
The banks, as usual, are a law unto themselves, and I hope that their compensation bill runs into millions and that no bank exec gets a bonus this year.
They have already said they will compensate anyone affected by the problem. Unforeseen problems can happen anywhere, to anything and anyone, so the fact this happened to my bank hasn't changed my opinion of them whatsoever. The fact is they've actually made a great effort to sort the problem out, and considering the massive scale of the problem itself they're working through it quite quickly.
I don't see how management are to blame for a computer glitch either, unless they're programming the networks themselves.
4 days isn't a week. But still, I don't see how management are to blame. There may well have been contingency plans; I'd imagine you would expect there to be in such a big company. I firmly believe that the blame lies with the person at the helm when the ship sinks: whichever bod in the IT department pressed the wrong buttons and caused the problems should be held accountable for his or her errors.
Simplistic approach to blame an individual.
Certainly a full review of who coded the new software, how it was tested, who signed it off as fit for purpose and how it was implemented, together with items identified during the fix, means that lots of IT and business people will be 'in the frame' for this, though a public 'execution' of those concerned is doubtful.
And it'll probably not be confined just to RBS, as it's understood parts of this may be due to the outsourcing of technology work overseas to India, with the loss of 20,000 UK-based roles and the experience that went with them possibly being a factor.
Whilst it would seem that RBS have indeed outsourced a lot of jobs to India, the one statement they have made about the failure is "the software error occurred on a UK-based piece of software". Mind, they chose not to comment on where the staff who patched and maintained the software were located. Not quite sure what to make of that one myself.
"the software error occurred on a UK-based piece of software"
On first reading there's nothing incorrect with that statement, given that the software runs in the UK at their Edinburgh computer/data centre.
However the Guardian's investigations, if correct, suggest that NatWest's problems began on Tuesday night when it updated a key piece of software – CA-7, which controls the batch processing systems that deal with retail banking transactions.
CA-7 is an 'off the shelf' software product, if that term can be used at this level, that controls job submission and monitoring on mainframe computers. That's not to say it's like Word or Excel, far from it: each company will have a dedicated IT team responsible for configuring it for that company's mainframe(s) and for the running of what will probably be several million individual 'jobs' each year, varying by time of day, day of week, month, etc.
Now consider "the software error occurred on a UK-based piece of software" alongside the RBS advert in Hyderabad, "Looking for candidates having 4-7 years of experience in Batch Administration using CA7 tool. Urgent Requirement by RBS", and there's a whole new slant: whilst the RBS mainframes may well be "UK-based" and CA-7 resides on them, there's an indication that the people doing the actual configuration may not be.
However, is RBS's CA-7 customisation at fault per se, or did they, as they say, make a change to a payment system that either wasn't fully tested or didn't have the necessary CA-7 job scheduling system changes made?
Either way, a small dedicated team will still be under an awful lot of pressure to unpick last Tuesday's and subsequent nights' job schedules, and then both validate a business-as-usual (BAU) daily schedule and create a custom catch-up schedule in order to rectify the issues and get the core accounting system up to date so that the BAU version can run. These are things that require cool heads and people with years of intimate RBS mainframe experience and knowledge.
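For anyone wondering what a 'job schedule' actually is in this context: essentially a large dependency graph of batch jobs that must run in a strict order. Here's a toy sketch in Python (the job names are invented and this bears no relation to RBS's real CA-7 set-up), just to show why losing or corrupting the schedule itself, rather than any one job, is so painful to unpick:

```python
# Toy illustration of dependency-ordered batch scheduling (nothing like the real
# CA-7 product, just to show why you can't simply "re-run last night").
from graphlib import TopologicalSorter

# Each job names the jobs that must complete before it may start.
schedule = {
    "load_atm_feed":         [],
    "load_branch_feed":      [],
    "load_payments_feed":    [],
    "post_to_master_ledger": ["load_atm_feed", "load_branch_feed", "load_payments_feed"],
    "reconcile_accounts":    ["post_to_master_ledger"],
    "create_next_day_feeds": ["reconcile_accounts"],
}

def run_batch(schedule):
    """Run jobs in an order that respects every dependency."""
    for job in TopologicalSorter(schedule).static_order():
        print(f"running {job}")   # a real scheduler would submit the job to the
                                  # mainframe, wait for it and handle failures/restarts

run_batch(schedule)
```

If the map of which job depends on which is erased, you're left with thousands of jobs and no safe order to run them in, which is exactly the sort of mess that takes days of careful, manual unpicking.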
I'm a motor engineer and as such don't really know the fine details of computing, and I've not the first idea how to program or code anything... But is there any reason why these systems don't work in real time? Say I take £20 from an ATM, which is presumably connected to some main database somewhere that tells it how much money is in my account and thus how much it is allowed to give me - why can't this ATM then transfer a -£20 signal back to the database, and hey presto, it's all updated as it happens? I thought banking systems had been working like that for a while already?
Gee,
They sort of do, and if you did a 'balance enquiry' straight after withdrawing £20 it would show your balance reduced by £20. However, this is recorded against the records held within the ATM sub-system.
At the end of the day all these sub-systems, e.g. ATM, Internet Banking, Payments, Direct Debits, Standing Orders, Cheque Clearing, Telephone Banking, Branch Counters, etc., have to update and reconcile against the central 'master' accounting system on the mainframe, which requires thousands of scheduled jobs to run in the correct order so that the master copy of the account is updated and 'feeds' are created to all the sub-systems for the next working day.
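Very roughly, and purely as a toy illustration (made-up account numbers and sub-system names, nothing like real banking code), the end-of-day step amounts to something like this:

```python
# Toy end-of-day batch: post each sub-system's transactions to the master ledger,
# then produce next-day opening balances for the sub-systems to work from.
# (Illustrative only; accounts, amounts and sub-system names are made up.)

master_ledger = {"12345678": 500.00, "87654321": 1200.00}   # account -> balance

# Transactions captured during the day by each sub-system: (account, amount).
subsystem_feeds = {
    "ATM":              [("12345678", -20.00)],
    "Internet Banking": [("87654321", -150.00), ("12345678", +300.00)],
    "Direct Debits":    [("12345678", -45.50)],
}

def end_of_day_batch(ledger, feeds):
    """Apply every sub-system's transactions to the master ledger, in order."""
    for subsystem, transactions in feeds.items():
        for account, amount in transactions:
            ledger[account] += amount
    # The updated balances become the 'feed' each sub-system starts from tomorrow.
    return {subsystem: dict(ledger) for subsystem in feeds}

next_day_feeds = end_of_day_batch(master_ledger, subsystem_feeds)
print(master_ledger)   # e.g. {'12345678': 734.5, '87654321': 1050.0}
```

Scale that up to millions of accounts and dozens of interdependent sub-systems and you can see why it's run as an overnight batch rather than continuously.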
On a much smaller scale, as a motor engineer/mechanic you'll know that most modern cars have computers fitted that take feeds from various sub-systems, e.g. engine, exhaust, gearbox, suspension, air-con, wipers, etc.; the data is processed within each sub-system, which then updates the main processor. OK, these are a lot smaller, the data volumes are lower, and the timescales are therefore well within what your garage's diagnostic computer can process.
Now imagine your computer having to access and process that data, at the same time, for every vehicle in the workshop: it would struggle.
Now multiply this by a factor of several million and it's obvious that it couldn't cope in real time, hence the continued existence of core mainframe computers in banks and other large organisations, which make use of the mainframe's speciality: processing millions of items sequentially, very quickly.
Organisations have different types of computer systems to do different things, each playing to their strengths, e.g. you wouldn't use a Ferrari for crossing the Sahara or taking 6 kids to school, hence the existence of 4x4s, people movers, vans, estate cars, etc.
Hope this helps.
Banks fuck it up... well I never, who'd have thought such a thing.
J,
Think you'll find that at that level of complexity there isn't an 'automated' (i.e. without human intervention) failover from primary to secondary data centres, and until they understand the root cause that's probably just as well, as system replication between primary and secondary sites could well have corrupted both, as you alluded to.
I spent about 15 minutes in a phone queue last night to one of RBS's competitors. I held on partly because of the 'RBS vodka horror' scenes played out in the chat room on Friday night. Now then, fortunately my debit card still seems to work, but I also saw the suggestions that the problems at RBS were due to the CA-7 scheduling software. I also found the advert for RBS CA-7 operators in India, which seems to have gone now; maybe they found somebody? A glitch in a bank's computer systems you can just about understand, but 6-7 days?
Any road up, not being overly impressed with the customer service at this other bank, I did think about giving up the wait on the phone. Having eventually got through, it would seem that the 15-20 minute delay was due to an unprecedented number of calls from people wanting to transfer bank accounts. Funny, that.
Now, according to the FT, it would seem it is Computer Associates' fault. Well, I assume that if RBS are going to sue them it must be their fault. Looks like it wasn't a problem in India at all. Oh well, I guess RBS aren't going to replace the CA-7 system any time soon, so hopefully their computers will stay up long enough for them to transfer my account.
Whilst you may have a mirrored computer system in case of fire, flood, etc., maybe they should have more than one.
If RBS do an upgrade in future, perhaps they should have one disaster recovery system and another system holding a copy of everything as it was before the upgrade is applied? That way, if the upgrade goes pear-shaped, they could just switch back to the spare copy?
This might cost some money, possibly millions of pounds, but I have seen reports suggesting that the cost of RBS's glitch is about £1.7 billion.
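Just to illustrate the idea (hypothetical names throughout, and obviously nothing to do with how RBS actually deploy software), a 'keep a pre-upgrade copy' approach might look roughly like this sketch, where the pre-upgrade snapshot is restored if the post-upgrade checks fail:

```python
# Minimal sketch of the "keep a pre-upgrade copy" idea: snapshot the current
# configuration before applying an upgrade, and restore it if anything fails.
# (Hypothetical: apply_upgrade/health_check stand in for the real deployment
# and validation steps.)
import copy

def upgrade_with_rollback(system_config, apply_upgrade, health_check):
    snapshot = copy.deepcopy(system_config)      # pre-upgrade copy of the config
    try:
        upgraded = apply_upgrade(copy.deepcopy(system_config))
        if not health_check(upgraded):
            raise RuntimeError("post-upgrade checks failed")
        return upgraded                          # upgrade accepted
    except Exception as err:
        print(f"upgrade failed ({err}); rolling back to pre-upgrade copy")
        return snapshot                          # switch back to the spare copy

# Example: an upgrade that wipes the batch schedule gets rolled back.
config = {"ca7_schedule": ["job_a", "job_b", "job_c"], "version": 1}
result = upgrade_with_rollback(
    config,
    apply_upgrade=lambda c: {**c, "ca7_schedule": [], "version": 2},  # wipes the schedule
    health_check=lambda c: len(c["ca7_schedule"]) > 0,
)
print(result["version"])   # 1 -- still on the pre-upgrade copy
```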
What if the patch is applied to both the A and B sides at the same time, without waiting for the obviously negative effect to show on the A? Think you will find this is called "human error", caused by "human intervention".
This isn't a comms issue (where you would expect the B side to take over instantly); this was a software issue caused by a patch that was applied without going through the necessary pre-production testing. It was on one system only, but it happened to be a key system (obviously). Patches are applied to the servers supplying the service: they are always applied to the B-side servers initially, then, if all is well (usually not in the same weekend), the service is swung over to the B side and the patch is applied to the A side. Don't confuse comms and data with services; they're a completely different beast within the DC.
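To make that sequence concrete, here's a minimal sketch of the B-side-first routine described above (patch, validate and swing_service are hypothetical stand-ins for the real change-management steps):

```python
# Minimal sketch of the B-side-first patching routine described above.
# (Hypothetical: patch/validate/swing_service stand in for the real change steps.)

def staged_patch(a_side, b_side, patch, validate, swing_service):
    patch(b_side)                       # 1. patch the standby (B) side only
    if not validate(b_side):            # 2. soak/validate before touching live
        raise RuntimeError("B-side validation failed; A side untouched, service unaffected")
    swing_service(to=b_side)            # 3. move live traffic onto the patched B side
    patch(a_side)                       # 4. only now patch the (former) live A side
    if not validate(a_side):
        print("A-side patch failed; service stays on the patched B side")

# Toy usage: sides are just dicts recording whether they've been patched.
a, b = {"name": "A", "patched": False}, {"name": "B", "patched": False}
staged_patch(
    a, b,
    patch=lambda side: side.update(patched=True),
    validate=lambda side: side["patched"],          # pretend the patch is healthy
    swing_service=lambda to: print(f"service now running on {to['name']} side"),
)
```

Apply the patch to both sides at once, as suggested above, and steps 2 and 3 never happen, so there's nothing left to fall back on.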
These things happen. In my learning years I managed to bring down HBOS's cash machines at 19:00 on a Friday evening for 3 hours. You would be amazed at the power a temporary employee in the right sector, in his second month at a large bank, wields, lol. Needless to say I didn't see my third month there that time.
Rob,
Looking at the FT, what is reported doesn't really follow the headline, as it goes on to say that Royal Bank of Scotland is discussing at a senior level whether to take legal action against US software maker CA Technologies, and that the problems had been caused by a CA Technologies software system called CA-7 during an overnight upgrade one week ago.
“It was certainly an issue with this software. We will still have to establish if this was their fault or if it was our handling of the software,” one of those people said.
The "....or if it was our handling of the software" would seem to be the relevant piece of wording,
This would seem to concur with one of the few comments issued by CA Technologies that RBS's technical issues were "highly unique to their environment".
Additionally, talking with contacts at other companies I know that also use CA-7, there isn't a 'change freeze' on their CA-7 schedules, indicating no loss of confidence in the product itself, though reviews of access/management controls have been undertaken.
Which seems to concur with reports in the Mail & Telegraph, both reporting that a junior technician in India caused the problem when they 'cleared the whole queue... they erased all the scheduling' while backing out a CA-7 update, which had been applied to both the banks' back-up systems and the live computer.
With households around the country sitting on a combined paper loss of almost £25 billion, Sir Mervyn King, the Governor of the Bank of England, yesterday called for a detailed inquiry into the debacle, which at one point saw a substantial sum wiped off the share value of RBS.