Timeouts and slow load times - RESOLVED

At 11:05 am EDT today, June 2nd, 2017, we noticed that certain pages in select customer instances were not loading properly and were returning timeout errors. We quickly traced the issue to the Azure SQL Database service and opened a ticket with Microsoft Support to investigate. Most customers are experiencing sluggish application performance, and in a few isolated customer instances certain pages won't load at all. We are working with Microsoft to resolve the issue as quickly as possible and will post further updates to this announcement.


3:52 pm EDT: We are awaiting further test results from Microsoft.


5:03 pm EDT: It looks like the performance issue is caused by one of the Azure SQL Database Elastic Pools. Microsoft is investigating the issue further. 


5:11 pm EDT: The performance issue is caused by 100% memory usage of one of our Azure SQL Database Elastic Pools. Microsoft is escalating the issue and should have it resolved within the next two hours. 
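
For readers who run their own Azure SQL Database workloads and want to see how this kind of memory pressure shows up: Azure SQL Database exposes per-database resource consumption (including avg_memory_usage_percent) through the sys.dm_db_resource_stats view. The sketch below is a minimal, hypothetical example of polling that view with Python and pyodbc; the server, database, and credential values are placeholders, and it reports per-database figures rather than the pool-level numbers Microsoft is investigating.

```python
import pyodbc

# Placeholder connection details -- replace with your own server, database, and credentials.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:example-server.database.windows.net,1433;"
    "DATABASE=example-db;"
    "UID=example-user;"
    "PWD=example-password;"
    "Encrypt=yes;"
)

# sys.dm_db_resource_stats keeps roughly one row per 15-second interval for the
# last hour, including avg_memory_usage_percent for the current database.
QUERY = """
SELECT TOP (20) end_time, avg_cpu_percent, avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for end_time, cpu, memory in conn.execute(QUERY):
        warn = "  <-- memory pressure" if memory is not None and memory >= 95 else ""
        print(f"{end_time}  cpu={cpu}%  memory={memory}%{warn}")
```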


8:19 pm EDT: Microsoft's Engineering team is still working on the issue. We will provide further updates once we receive them.


10:01 pm EDT: Microsoft has resolved the issue and all Channeltivity services should be working normally. We will post the root cause analysis once we receive it from Microsoft, which we expect within the next week.


June 8th, 1:42 pm EDT: It looks like the memory issue with Azure SQL Database Elastic Pools came back this morning and is again leading to slow load times. We are waiting for a response and resolution from Microsoft.


2:16 pm EDT: Microsoft has identified the issue and is working on a permanent fix. We will provide further updates once we receive them.


10:14 pm EDT: Microsoft is still working on a permanent fix with high priority. We will post updates as we receive them.


June 9th, 3:36 am EDT: Microsoft has mitigated the issue and is working on a permanent fix.


June 12th, 12:49 pm EDT: Microsoft reports that the issue is caused by a known bug in which memory used by the Auditing Extended Events (XE) session isn't released properly in certain configurations, resulting in high memory consumption. The fix is scheduled for deployment by the end of the month, but we updated our auditing and threat detection configuration today to work around the issue in the meantime.
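
For those interested in the mechanics: SQL Database Auditing is built on a database-scoped Extended Events (XE) session, and it is that session's buffer memory that the bug fails to release. As a rough, hypothetical illustration (assuming sys.dm_xe_database_sessions exposes the same buffer columns as the on-premises sys.dm_xe_sessions view), a query like the one below is one way to see how much buffer memory the active XE sessions are holding; the connection details are again placeholders.

```python
import pyodbc

# Placeholder connection details -- replace with your own server, database, and credentials.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:example-server.database.windows.net,1433;"
    "DATABASE=example-db;"
    "UID=example-user;"
    "PWD=example-password;"
    "Encrypt=yes;"
)

# List active database-scoped Extended Events sessions and the buffer memory they
# currently hold (assumes the buffer columns match sys.dm_xe_sessions).
QUERY = """
SELECT name,
       total_regular_buffers * regular_buffer_size
         + total_large_buffers * large_buffer_size AS buffer_bytes
FROM sys.dm_xe_database_sessions
ORDER BY buffer_bytes DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for name, buffer_bytes in conn.execute(QUERY):
        print(f"{name}: {buffer_bytes / 1024 / 1024:.1f} MB of event buffers")
```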


June 13th, 10:31 am EDT: The memory issue with Azure SQL Database Elastic Pools has returned and is again leading to slow load times. We are waiting for a response and resolution from Microsoft.


11:18 am EDT: We've implemented further configuration changes that have mitigated the memory consumption issue. We are still waiting for a permanent fix from Microsoft.


12:56 pm EDT: Microsoft reports the issue as fixed. All services are working.

