At approximately 2:49 pm EDT today, Friday, October 10th, one of our deployments shut down, affecting many of our customers. We're actively working with Microsoft to determine the cause of the issue and will post updates as we learn more.
Update 3:23 pm:
Microsoft Azure's North Central US data center is experiencing a partial service disruption. Here is their official posting:
Starting on the 10th Oct, 2014 at 18:51 PM UTC a subset of customers using Virtual Machines and Cloud Services in North Central US may be unable to access a small number of services. Any services impacted by this issue will be frozen at the Starting state. Our engineers are currently investigating. The next update will be provided in 60 minutes.
Update 4:19 pm:
Services are starting to come back, and at 4:03 pm Microsoft updated its advisory to the following:
Starting on the 10th Oct, 2014 at 18:51 PM UTC a subset of customers using Virtual Machines and Cloud Services in North Central US may be unable to access a small number of services. Any services impacted by this issue will be frozen at the Starting state. Our engineers have deployed a mitigation for this incident and are monitoring progress. Service availability is currently improving. The next update will be provided in 60 minutes.
Update 4:39 pm:
All customer instances are back online. We will continue to monitor the situation until we get the all clear from Microsoft.
Update 5:10 pm:
Microsoft reports that the issue has been fully resolved:
From the 10th Oct, 2014 at 18:51 PM to 21:10 PM UTC a subset of customers using Virtual Machines and Cloud Services in North Central US may have seen services stuck in a Starting state. This incident has now been mitigated.
Final report from Microsoft:
On October 10, 2014, an update being deployed to the global Azure infrastructure triggered a failure in a subset of the US North Central region. All other deployments of this component were also stopped at that time as a precaution, but no other regions were impacted. The failure resulted from a version dependency that was isolated to a portion of US North Central. The deployment was rolled back to the previous known good version in the impacted region, processes were run to expedite recovery post-rollback, and Compute services and Virtual Machines were recovered.
Zach Smith