
NatWest, RBS and Ulster Bank

It would seem that a computer glitch has caused a large chunk of the RBS Group's banking to have a few problems. Looking around, this would seem to have been caused by systemic management failure at the bank.
Whilst this hasn't affected me personally yet, I wondered if anybody had any other opinions? Also, does anybody have ideas on better organisations with whom to have bank accounts?
If I were a customer, the first thing I would do after regaining access to my funds would be to close the account tout de suite and stuff the funds under the mattress.
The extraordinary incompetence shown in this affair suggests that they are not even capable of cutting notches in sticks....
Should this kind of thing really be happening in 2012 at a multi-billion-pound company?
Wow, the banks opened on Sunday, whoopee. My daughter was supposed to have her wages paid in on Friday and they were not there; she had to pay the balance of her holiday on the Saturday. Luckily she had me to pay it for her.
The banks as usual are a law unto themselves, and I hope that their compensation bill runs into millions and that no bank exec gets any bonuses this year.
Quote by starlightcouple
Should this kind of thing really be happening in 2012 at a multi-billion-pound company?
Wow, the banks opened on Sunday, whoopee. My daughter was supposed to have her wages paid in on Friday and they were not there; she had to pay the balance of her holiday on the Saturday. Luckily she had me to pay it for her.
The banks as usual are a law unto themselves, and I hope that their compensation bill runs into millions and that no bank exec gets any bonuses this year.

But isn't the Bank owned by the taxpayer, star? dunno
You'll be paying for the compensation yourself :grin:
They have already said they will compensate anyone affected by the problem. Unforeseen problems can happen anywhere to anything and anyone, so the fact this happened to my bank hasn't changed my opinion of them whatsoever. The fact is they've actually made quite a great effort to sort the problem out and considering the massive scale of the problem itself, they're getting around it quite quickly.
I don't see how management are to blame for a computer glitch either, unless they're programming the networks themselves.
Quote by Gee_Wizz
They have already said they will compensate anyone affected by the problem. Unforeseen problems can happen anywhere to anything and anyone, so the fact this happened to my bank hasn't changed my opinion of them whatsoever. The fact is they've actually made quite a great effort to sort the problem out and considering the massive scale of the problem itself, they're getting around it quite quickly.
I don't see how management are to blame for a computer glitch either, unless they're programming the networks themselves.

Sorry, but 'bollocks' comes to mind here.
I ran a computer software company for more years than I care to remember. I always had a contingency plan when doing any sort of update, so you could go back to where you were in a single move.
That hasn't happened in this fiasco. It's been a week since the 'upgrade' and it still hasn't been fixed. Heads must roll!!
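For what it's worth, the sort of contingency I mean is nothing clever. Here is a minimal sketch in Python of the idea, with the paths, the app name and the healthcheck script all invented for illustration: snapshot the current release first, so a failed upgrade is one move back.

import shutil
import subprocess
import sys
from pathlib import Path

RELEASE_DIR = Path("/opt/batch_app/current")    # made-up paths for illustration
BACKUP_DIR = Path("/opt/batch_app/previous")

def upgrade(new_release: Path) -> None:
    # 1. Snapshot what is running now, before touching anything
    if BACKUP_DIR.exists():
        shutil.rmtree(BACKUP_DIR)
    shutil.copytree(RELEASE_DIR, BACKUP_DIR)

    # 2. Put the new release in place
    shutil.rmtree(RELEASE_DIR)
    shutil.copytree(new_release, RELEASE_DIR)

    # 3. Smoke-test it; if the check fails, restore the snapshot in one move
    check = subprocess.run([str(RELEASE_DIR / "bin" / "healthcheck")])
    if check.returncode != 0:
        shutil.rmtree(RELEASE_DIR)
        shutil.copytree(BACKUP_DIR, RELEASE_DIR)
        sys.exit("Upgrade failed its health check - rolled back to the previous release")

if __name__ == "__main__":
    upgrade(Path(sys.argv[1]))

Obviously a bank's batch estate is a world away from a couple of directories, but the principle is the same: you never apply a change you can't step back from.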
4 days isn't a week. But still I don't see how management are to blame. There may well have been contingency plans as I'd imagine you would have expected there to be in such a big company. I firmly believe that the blame lies with the person at the helm when the ship sinks. Whichever bod in the IT department pressed the wrong buttons and caused the problems should be held accountable for his/her errors.
Quote by Gee_Wizz
4 days isn't a week. But still I don't see how management are to blame. There may well have been contingency plans as I'd imagine you would have expected there to be in such a big company. I firmly believe that the blame lies with the person at the helm when the ship sinks. Whichever bod in the IT department pressed the wrong buttons and caused the problems should be held accountable for his/her errors.

Precisely!
The person at the helm is the Captain, in this case the Chief Exec. His watch, his problem. You can't blame the guy with the oil can in the engine room.
Quote by GnV
4 days isn't a week. But still I don't see how management are to blame. There may well have been contingency plans as I'd imagine you would have expected there to be in such a big company. I firmly believe that the blame lies with the person at the helm when the ship sinks. Whichever bod in the IT department pressed the wrong buttons and caused the problems should be held accountable for his/her errors.

Precisely!
The person at the helm is the Captain, in this case the Chief Exec. His watch, his problem. You can't blame the guy with the oil can in the engine room.
Haha, poor analogy by me there. :P If the captain steered the ship into the iceberg because he made a wrong call, it's his fault. But if the guy with the oil can in the engine room didn't oil the bearings like he was told to and the engine seized on his side so the ship wouldn't steer, then it's his fault. ;)
It's simplistic to blame an individual.
Certainly a full review of who coded the new software, how it was tested, who signed it all off as fit for purpose and how it was implemented, together with items identified during the fix, will mean that lots of IT and business people are 'in the frame' for this, though a public 'execution' of those concerned is doubtful.
And it probably won't be confined to RBS, as it's understood parts of this may be due to the outsourcing of technology work overseas to India, with the loss of 20,000 UK-based roles and their experience possibly being a factor.
Quote by HnS
It's simplistic to blame an individual.
Certainly a full review of who coded the new software, how it was tested, who signed it all off as fit for purpose and how it was implemented, together with items identified during the fix, will mean that lots of IT and business people are 'in the frame' for this, though a public 'execution' of those concerned is doubtful.
And it probably won't be confined to RBS, as it's understood parts of this may be due to the outsourcing of technology work overseas to India, with the loss of 20,000 UK-based roles and their experience possibly being a factor.

Actually, thinking about it that seems a far more obvious answer than blaming the IT guy on the day or management.
Whilst it would seem that RBS have indeed outsourced a lot of jobs to India, the one statement they have made about the failure is that "the software error occurred on a UK-based piece of software". Mind, they chose not to comment on where the staff who patched and maintained the software were located. Not quite sure what to make of that one myself.
It would seem that while the software and systems are indeed UK-based, RBS have in recent years outsourced maintenance of the critical piece of batch processing software that failed to teams in India. Nothing especially wrong with the Indian IT industry, but it's relatively young. Whether programmers and sysadmins brought up on C++ and real-time systems are the best people for the job, when the system in question goes back to 1980 and runs on machines for which most of the software is still written in mainframe assembly language and COBOL, is a question that probably should have been asked. I suspect the question got lost somewhere when RBS management realised they could sack the experienced UK staff who'd been supporting it for over 30 years and instead hire recent Indian IT graduates for as little as £9-10,000 a year. rolleyes
Good article on the failure in today's Guardian. The comment pages are even more enlightening.
"the software error occurred on a UK-based piece of software"
On a first reading there's nothing incorrect about that statement, given that the software runs in the UK at their Edinburgh computer/data centre.
However, the Guardian's investigations, if correct, suggest that NatWest's problems began on Tuesday night when it updated a key piece of software – CA-7, which controls the batch processing systems that deal with retail banking transactions.
CA-7 is an 'off the shelf' software product, if that term can be used at this level, that controls job submission and monitoring on mainframe computers. That's not to say it's like Word or Excel, far from it, as each company will have a dedicated IT team responsible for configuring it for that company's mainframe(s) and for the running of what will probably be several million individual 'jobs' each year, depending on time of day, day of week, month, etc.
Now consider "the software error occurred on a UK-based piece of software" alongside the RBS advert in Hyderabad, "Looking for candidates having 4-7 years of experience in Batch Administration using CA7 tool. Urgent Requirement by RBS", and there's a whole new slant: whilst the RBS mainframes may well be "UK-based" and CA-7 resides on them, there's an indication that the actual configuration may not be.
However, is RBS's CA-7 customisation at fault per se, or did they, as they say, make a change to a payment system that wasn't fully tested or didn't have the necessary CA-7 job scheduling changes made?
Either way, a small dedicated team will still be under an awful lot of pressure to unpick last Tuesday's and subsequent nights' job schedules, and then both validate a BAU daily schedule and create a custom schedule in order to rectify the issues and get their core accounting system up to date so that the BAU version can run. Things that require cool heads and people with years of intimate RBS mainframe experience and knowledge.
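For anyone who's never seen a job scheduler, here is a toy sketch in Python of the general idea (job names and dependencies entirely invented, and nothing like the scale CA-7 handles): jobs run in dependency order, and if the schedule itself is lost there is simply nothing to tell the system what to run or when.

from graphlib import TopologicalSorter

# Each job maps to the jobs it depends on - all names invented
schedule = {
    "load_atm_feed": set(),
    "load_branch_feed": set(),
    "update_master_ledger": {"load_atm_feed", "load_branch_feed"},
    "post_standing_orders": {"update_master_ledger"},
    "create_next_day_feeds": {"update_master_ledger", "post_standing_orders"},
}

def run_batch(schedule):
    # Run each job only once everything it depends on has completed
    for job in TopologicalSorter(schedule).static_order():
        print(f"running {job}")    # a real scheduler submits these to the mainframe

run_batch(schedule)     # a normal night: five jobs, in a safe order

# If the schedule is erased, nothing runs and the master accounts go stale
run_batch({})           # prints nothing at all

Multiply those five invented jobs by several thousand a night and you can see why rebuilding a lost or corrupted schedule by hand takes days, not hours.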
I'm a motor engineer and as such don't really know the fine details of computing, and I've not the first idea how to program or code anything... But is there any reason why these systems don't work in real time? Say I take £20 from an ATM, which is presumably connected to some main database somewhere that tells it how much money is in my account and thus how much it is allowed to give me - why can't this ATM then transfer a -£20 signal back to the database, and hey presto, it's all updated as it happens? I thought banking systems had been working like that for a while already?
Gee,
They sort of do, and if you did a 'balance enquiry' straight after withdrawing £20 it would show your balance reduced by £20. However, this is recorded against the records held within the ATM sub-system.
At the end of the day all these sub-systems, e.g. ATM, Internet Banking, Payments, Direct Debits, Standing Orders, Cheque Clearing, Telephone Banking, Branch Counters, etc., have to update and reconcile against the central 'master' accounting system on the mainframe, which requires thousands of scheduled jobs to run in the correct order so that the master copy of the account is updated and 'feeds' are created for all the sub-systems for the next working day.
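If it helps to picture that end-of-day step, here's a toy sketch in Python (account numbers, feeds and amounts all invented) of sub-system feeds being posted against a master ledger to produce the next day's opening balances:

from collections import defaultdict

# Master accounting system: opening balances (invented accounts and amounts)
master_ledger = {"12345678": 500.00, "87654321": 1200.00}

# Transactions each sub-system recorded during the day
atm_feed = [("12345678", -20.00)]                       # your £20 withdrawal
direct_debit_feed = [("12345678", -45.50), ("87654321", -99.99)]
payments_feed = [("87654321", 250.00)]

def end_of_day(ledger, *feeds):
    movements = defaultdict(float)
    for feed in feeds:
        for account, amount in feed:
            movements[account] += amount
    # Apply the day's net movement to the master copy of every account
    return {acct: round(bal + movements[acct], 2) for acct, bal in ledger.items()}

next_day = end_of_day(master_ledger, atm_feed, direct_debit_feed, payments_feed)
print(next_day)   # these balances become the 'feeds' the sub-systems use tomorrow

The real thing involves millions of accounts and thousands of interdependent jobs rather than one function call, but the shape is the same.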
On a much smaller scale, as a motor engineer/mechanic you'll know most modern cars have computers fitted that take feeds from various sub-systems, e.g. engine, exhaust, gearbox, suspension, air-con, wipers, etc., that are processed within that sub-system and then update the main processor. OK, these are a lot smaller and the data volumes less, so the timescales are short enough that your garage's diagnostic computer can process them.
Now imagine your computer having to access and process data, at the same time, for every vehicle in the workshop; it would struggle.
Now multiply this by a factor of several million and it's obvious that it couldn't cope in 'real time', hence the continued existence of core mainframe computers in banks and other large organisations, which utilise the mainframe's speciality of processing millions of items sequentially very quickly.
Organisations have different types of computer systems to do different things, each playing to their strengths, i.e. you wouldn't have a Ferrari for crossing the Sahara or taking 6 kids to school, hence the existence of 4x4s, people movers, vans, estate cars, etc.
Hope this helps.
Riiight ok. Seems obvious now actually. thanks. biggrin
Banks fuck it up .... well I never who'd have thought such a thing
I know the system which failed, as I worked for RBS and BoNY. I spoke to a project manager there today and it's still not completely fixed. All work on any system bank-wide is halted, and most worrying is that the multi-multi-million-pound disaster recovery B site didn't kick in. Believe it or not, it could be down to one guy applying a patch to both sites, and that brought this about.
A friend said it's almost back to normal, but they can't open it up fully until they at least understand why it broke.
RBS is pretty poor banking-wise... Barclays were unscathed and still are... I think they are one of only a couple who are and have been in good shape throughout.
Banks, or some banks, have added to fucking things up... but they are also the one establishment that can fix things quickly too.
Onwards and upwards, people... things have never been cheaper, if you haven't noticed the recession... seize the opportunity, and by spending we will help get us back to Great Britain again.
Enjoy the week all
J smile
J,
I think you'll find that it's not an 'automated' (i.e. without human intervention) failover from primary to secondary data centres at that level of complexity. Plus, until they understand the root cause it's probably just as well, as system replication between the primary and secondary sites could well have corrupted both, as you alluded to.
Quote by VoyeurJ
most worrying is that the multi-multi-million-pound disaster recovery B site didn't kick in. Believe it or not, it could be down to one guy applying a patch to both sites, and that brought this about.

A disaster recovery site is just that. It should be mirrored at the hardware and software levels, so changes to the primary site will propagate to the secondary. It would be a lower-capacity option available to step up in cases of physical disaster, e.g. fire, flood, earthquake, terrorist attack, etc. It's not intended to replace or mitigate bad code or, as appears to be the cause in this case, bad or missing change management controls.
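A crude Python sketch of the point, with everything in it invented: replication just mirrors whatever the primary holds, so a bad change arrives on the standby as faithfully as a good one.

# The primary site's state and its disaster-recovery mirror (both invented)
primary = {"ca7_schedule": ["job1", "job2", "job3"]}
standby = {}

def replicate(primary, standby):
    # Mirror the primary onto the standby, exactly as it is
    standby.clear()
    standby.update(primary)

replicate(primary, standby)     # normal running: both sites identical

primary["ca7_schedule"] = []    # the bad change: schedule wiped on the primary
replicate(primary, standby)     # ...and now it is wiped on the standby too

print(standby)                  # {'ca7_schedule': []} - failing over gains nothing

That's why a DR site protects you from a fire in Edinburgh, but not from a bad change pushed to both sites.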
I spent about 15 minutes in a phone queue last night to one of RBS's competitors. I held on partly because of the 'RBS vodka horror' scenes played out in the chat room on Friday night. Now then, fortunately my debit card still seems to work, but I also found the suggestions that the problems at RBS were due to the CA7 scheduling software. I also found the advert for RBS CA7 operators in India, which seems to have gone now; maybe they found somebody? A glitch in a bank's computer systems you can just about understand, but 6-7 days?
Any road up, not being overly impressed with the customer service at this other bank, I did think about giving up the wait on the phone. Having eventually got through, it would seem that the 15-20 minute delay was due to an unprecedented number of calls from people wanting to transfer bank accounts. Funny that.
Now it would seem, according to the FT, that it is Computer Associates' fault. Well, I assume if RBS are going to sue them it must be their fault. Looks like it wasn't a problem in India at all. Oh well, I guess RBS aren't going to replace the CA7 system any time soon, so hopefully their computers will stay up long enough for them to transfer my account.
Whilst you may have a mirrored computer system in the event of fire, flood, etc., maybe they should have more than one.
If RBS do an upgrade in future, perhaps they should have one disaster recovery system, and another system with a copy of everything from before the upgrade is applied? That way, if the upgrade goes pear-shaped they could just switch back to the spare copy.
This might cost some money, indeed possibly millions of pounds, but I have seen reports suggesting that the cost of RBS's glitch is about 1.7 billion.
What if the patch is applied to both the A and B sides at the same time, without waiting for the obviously negative effect to show on the A? Think you will find this is called "human error" caused by "human intervention".
This isn't a comms issue (where you would expect the B side to instantly take over); this was a software issue caused by a patch that was applied without going through the necessary pre-production testing. On one system only, but it happened to be a key system (obviously). A patch is applied to the servers supplying the service. These are always applied to the B-side servers initially, then if all is well (usually not in the same weekend) the service is swung over to the B side and it's applied to the A. Don't confuse comms and data with services - a completely different beast within the DC.
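A rough Python sketch of that discipline, with the server names and the patch/health-check steps all made up: B side first, soak, swing over, then A. The shortcut that bites you is doing both sides at once.

def apply_patch(server, patch):
    # Stand-in for the real install plus health check on one server
    print(f"applying {patch} to {server}")
    return True

def staged_rollout(a_side, b_side, patch):
    # Step 1: patch the B side only and let it soak (ideally not the same weekend)
    if not all(apply_patch(s, patch) for s in b_side):
        print("B side failed its checks - A side untouched, service unaffected")
        return
    # Step 2: swing the service over to the patched B side
    print("service swung over to the B side")
    # Step 3: only now patch the A side; B keeps serving if anything goes wrong
    if not all(apply_patch(s, patch) for s in a_side):
        print("A side failed - stay on the already-proven B side")

staged_rollout(["srv-a1", "srv-a2"], ["srv-b1", "srv-b2"], "CA-7 update")

# Applying the patch to the A and B sides together skips the staging entirely
# and leaves nowhere to swing the service to when it goes wrong.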
These things happen; in my learning years I managed to bring down the HBOS cash machines at 19:00 on a Friday evening for 3 hours. You would be amazed at the power a temporary employee in the right sector, in his second month at a large bank, wields lol. Needless to say, I didn't see my third month there that time.
Quote by VoyeurJ
These things happen; in my learning years I managed to bring down the HBOS cash machines at 19:00 on a Friday evening for 3 hours. You would be amazed at the power a temporary employee in the right sector, in his second month at a large bank, wields lol. Needless to say, I didn't see my third month there that time.

rotflmao
Not on that scale, but I did bring down a distribution centre for a whole weekend.............and I only pressed one button. My phobia of computers and buttons that cause damage is still with me 20 years later.
I remember in the past, when we only had one chatroom, a chatroom op pressed a button and kicked 97% of the users in one go. The next words he typed were.............OOOPS!!! lol
Dave_Notts
Rob,
Looking at the FT, what is reported doesn't really follow the headline, as it goes on to say that Royal Bank of Scotland is discussing at a senior level whether to take legal action against US software maker CA Technologies, and that the problems had been caused by a CA Technologies software system called CA-7 during an overnight upgrade one week ago.
"It was certainly an issue with this software. We will still have to establish if this was their fault or if it was our handling of the software," one of those people said.
The "...or if it was our handling of the software" would seem to be the relevant piece of wording.
This would seem to concur with one of the few comments issued by CA Technologies, that RBS's technical issues were "highly unique to their environment".
Additionally, talking with contacts at other companies I know that also use CA-7, there isn't a 'change freeze' on their CA-7 schedules that would indicate a loss of confidence in the product itself, though reviews of access/management controls have been undertaken.
Which seems to concur with reports in the Mail & Telegraph, both reporting that a junior technician in India caused the problem when they 'cleared the whole queue... they erased all the scheduling' while backing out a CA-7 update, which was applied to both the banks' back-up systems and the live computer.
With households around the country sitting on a combined paper loss of almost £25 billion, Sir Mervyn King, the Governor of the Bank of England, yesterday called for a detailed inquiry into the debacle, which at one point yesterday also saw value wiped off the shares of RBS.
The most recent statement from RBS I could find about where the problem occurred -
As we've said our priority is getting back to delivering normal service for our customers. We'll then do a full investigation into what went wrong.
The management and execution of the batch process is based in Edinburgh at Fettes Row, as is all the current work to resolve the problem.
Guardian - NatWest glitch: Q&A with RBS director of customer services

Which suggests that this had nothing to do with inexperienced Indian CA7 operators at all. If you were very, very cynical you might look at it and note that it doesn't as such say where the group responsible for causing the problem in the first place were located smile. You might wonder how many of the experienced CA7 administrators were recalled from gardening leave in Edinburgh......
Quote by VoyeurJ
..... in my learning years I managed to bring down HBOS cash machines at 1900 on a Friday evening for 3 hours..........

Lucky for me the bank I'm planning to move my accounts to isn't either HBOS, or indeed Lloyds.
Quote by Robert400andKay
..... in my learning years I managed to bring down HBOS cash machines at 1900 on a Friday evening for 3 hours..........

Lucky for me the bank I'm planning to move my accounts to isn't either HBOS, or indeed Lloyds.
You wanna hope it's not Barclays either; looks like they have a problem with fraud towards their customers lol
Quote by Lizaleanrob
..... in my learning years I managed to bring down HBOS cash machines at 1900 on a Friday evening for 3 hours..........

Lucky for me the bank I'm planning to move my accounts to isn't either HBOS, or indeed Lloyds.
You wanna hope it's not Barclays either; looks like they have a problem with fraud towards their customers lol
If you listen to gulson, all bankers are frauds