Thursday, 28 November 2013
From around 19:00 on Wednesday 27th November the MOLE front ends started to exhibit software failures, which are still continuing.
The failures are the same on all four front ends. The failed component is detected and restarted automatically, taking about 10 minutes to return to service.
The impact on staff and students is difficult to determine. When a front end node is about to fail, responses get slower. When it does fail, user sessions are redirected to one of the remaining servers. However, if more than two servers fail at the same time, users may experience poor performance or, in an extreme situation, no service at all.
The failures have been raised with the supplier, Blackboard. We are continuing to help diagnose the root cause and/or work around the problem.
Wednesday, 27 November 2013
This morning the work on MOLE that required the system to be taken offline was completed successfully.
We moved MOLE onto new hardware with updated operating systems and a new load balancer (the part of the system that spreads users across the servers). A number of small changes were also made to the database server. This should provide a more resilient, expandable environment for the servers, with a more rapid start-up time, which enables subsequent changes and upgrades to MOLE to be made more swiftly. It also gives us easier access to monitoring and diagnostic information to help in resolving issues.
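For readers curious about what a load balancer does, the sketch below shows the simplest scheme, round-robin distribution, in Python. The server names are hypothetical (the blog does not name the MOLE front ends), and this is an illustration of the general idea only, not of the specific load balancer we installed.

```python
from itertools import cycle

# Hypothetical front-end names, purely for illustration.
servers = cycle(["fe1", "fe2", "fe3", "fe4"])

def assign_session(user_id):
    """Send each new user session to the next front end in turn."""
    return next(servers)

# Ten incoming sessions get spread evenly across the four servers.
assignments = [assign_session(u) for u in range(10)]
print(assignments)
```

Real load balancers also monitor server health, so that sessions are only sent to servers that are responding, which is what allows users to be redirected when a front end fails.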
On Friday morning, the second period of downtime we announced on Monday will allow us to upgrade the database; other upgrades are being planned at the moment and will be announced shortly.
Tuesday, 26 November 2013
As part of our programme of maintenance on MOLE we will be working on MOLE from 6am-9am Wednesday 27 November and 6am-9am Friday 29 November.
MOLE will be unavailable during this time but will be restored to service before 9am each day. There will be additional periods of downtime in the coming weeks as part of our ongoing programme to restore MOLE to full performance.
If you are interested, the downtime on Wednesday will involve moving the MOLE service onto new hardware, and the downtime on Friday will involve upgrading the database behind MOLE to a new version.
Monday, 25 November 2013
Email to University learning technologists and MOLE contacts in departments, sent today, Mon 25 Nov:
As you will be aware, MOLE continues to suffer performance problems and we are putting together a comprehensive ongoing plan to address this. As part of this process we will be taking MOLE down on several occasions to perform maintenance. The first two periods of downtime are Wednesday 27 and Friday 29 November, this week.
Maintenance will take place 5am-9am this week, but the service should be considered at risk until midday, and as such no critical activities should be performed during that time. There will be subsequent periods of downtime, which will be communicated nearer the time, but we will try to restrict these to the regular MOLE ‘at-risk’ period of 6-9am on Friday mornings.
The maintenance work will involve moving MOLE onto new hardware with additional capacity. We will also be performing an iterative sequence of software updates and modifications, taking time to monitor the impact of each individual change we make.
We cannot say for sure when MOLE will be returned to full, reliable service, but we will continue to investigate, implement changes and monitor, in a measured, ongoing programme of work, until our VLE has been restored to full service.
We are keen to provide regular updates on our progress but do not wish to bombard you with information, so we have set up an opt-in MOLE-users email list to which we will post regular updates about MOLE, including the current work. Subscribe to this list using the link below:
We have also set up a dedicated MOLE maintenance blog which you can revisit at your convenience to be updated on progress:
We apologise for the ongoing poor performance of MOLE and for the inconvenience caused by the imminent periods of downtime. We are dedicated to resolving this issue and we thank you for your patience.