Some systems are experiencing issues
Scheduled Maintenance
[PAR] Security maintenance on 4 hypervisors

For security reasons, we will update the kernel of 4 hypervisors in the Paris (PAR) region, more precisely in the PAR6 datacenter. Services (in particular databases) hosted on those hypervisors will be impacted: they will be unavailable for 5 to 10 minutes. The impacted hypervisors are:

hv-par6-008
hv-par6-011
hv-par6-012
hv-par6-020

Affected clients are being contacted directly and individually by email with the list of impacted services and options to avoid any impact. The maintenance will be carried out in 2 operations of 2 hypervisors each, during the week of 18 to 22 November 2024, between 22:00 and 24:00 UTC+1.
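
If migrating ahead of the window is not an option, a short reconnect loop on the client side is usually enough to absorb a 5 to 10 minute interruption. The Python sketch below only illustrates that idea; the host, port, and timings are placeholders to replace with your own service's connection details.

    import socket
    import time

    # Hypothetical connection details: replace with your own add-on's host and port.
    DB_HOST = "example-db.par.example.com"
    DB_PORT = 5432

    def wait_for_service(host, port, max_wait=15 * 60, delay=10):
        """Poll a TCP endpoint until it accepts connections again.

        Returns True once the service is reachable, False if max_wait is exceeded.
        """
        deadline = time.monotonic() + max_wait
        while time.monotonic() < deadline:
            try:
                # A successful connect is enough to know the service is back.
                with socket.create_connection((host, port), timeout=5):
                    return True
            except OSError:
                time.sleep(delay)  # still rebooting, try again shortly
        return False

    if __name__ == "__main__":
        if wait_for_service(DB_HOST, DB_PORT):
            print("Service reachable again, clients can reconnect.")
        else:
            print("Still unreachable after 15 minutes, contact support.")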

Past Incidents

Tuesday 15th December 2020

No incidents reported

Monday 14th December 2020

No incidents reported

Sunday 13th December 2020

No incidents reported

Saturday 12th December 2020

No incidents reported

Friday 11th December 2020

No incidents reported

Thursday 10th December 2020

Reverse Proxies Retroactive: Sozu reverse proxy TLS and HTTP errors

Today, between 17:24 UTC and 17:34 UTC, customers using our Sozu reverse proxies may have noticed errors when connecting through one of the proxies. An upgrade maintenance was in progress, which required stopping the Sozu service and rebooting the machine. Unfortunately, the traffic wasn't correctly redirected to an alternative instance, leading to various TLS or HTTP errors when connecting to the unhealthy instance. Once the machine was back up, traffic was handled correctly again.

The root cause has not yet been found, but this shouldn't have happened as we routinely perform such maintenance operations without any issues. We will look further into this. Apologies for the inconvenience.
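
As a general client-side mitigation, transient failures from a single unhealthy proxy instance can often be absorbed with a short retry loop. The sketch below is only an illustration of that idea; the URL is a placeholder and the timings are arbitrary.

    import time
    import urllib.request

    def fetch_with_retry(url, attempts=3, backoff=2.0):
        """Retry a request a few times so that a single unhealthy proxy
        instance does not surface as a hard failure to the caller."""
        last_error = None
        for attempt in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=10) as response:
                    return response.read()
            except OSError as exc:
                # URLError, HTTPError and TLS errors are all subclasses of
                # OSError, so handshake failures and 5xx answers are retried.
                last_error = exc
                time.sleep(backoff * (attempt + 1))
        raise last_error

    # Hypothetical application URL, for illustration only:
    # body = fetch_with_retry("https://my-app.example.com/health")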

Reverse Proxies Retroactive: Missing reverse proxy configuration updates

Today, between 16:00 UTC and 17:50 UTC, some reverse proxy configuration updates went missing. Applications that redeployed during this time frame may not have been correctly updated on some of our reverse proxies, leading to HTTP 503 ("This application is redeploying") or HTTP 404 (Not Found) errors alongside regular application responses.

The root cause is still unclear; additional investigation will be performed. A bit before 16:00, we had an incident on an internal tool that may be related.
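
One way to check whether an application is still hitting stale proxy configuration is to poll it for a short while and count intermittent 503/404 responses among otherwise normal ones. The following sketch illustrates this; the URL, number of checks, and interval are placeholders.

    import time
    import urllib.error
    import urllib.request

    # Hypothetical application URL, for illustration only.
    APP_URL = "https://my-app.example.com/"

    def count_stale_responses(url, checks=20, interval=5):
        """Sample the application repeatedly and count 503/404 responses,
        which would suggest some proxy instances still hold stale routes."""
        suspicious = 0
        for _ in range(checks):
            try:
                with urllib.request.urlopen(url, timeout=10) as response:
                    status = response.status
            except urllib.error.HTTPError as exc:
                status = exc.code
            except urllib.error.URLError:
                status = None  # network-level failure, not a routing answer
            if status in (503, 404):
                suspicious += 1
            time.sleep(interval)
        return suspicious

    if __name__ == "__main__":
        hits = count_stale_responses(APP_URL)
        print(f"{hits} suspicious responses out of 20 checks")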

Wednesday 9th December 2020

Deployments: Deployments delayed

Part of the deployment system is experiencing higher load than usual, which may delay the start of deployments.

We are working on it.

16:23 UTC: This incident is over.

Access Logs: Metrics ingestion delay

We are experiencing significant delay on the ingestion pipeline of Metrics.

The original incident started at around 05:15 UTC and we have been containing it since then, keeping the lag under a few tens of seconds at worst.

The delay is now getting worse because our attempts to fix the issue are currently having the opposite effect. This will take a while to solve.

11:17 UTC: The ingestion delay is now reduced to about 15 seconds. The issue is not completely solved, this is only a first step.

11:58 UTC: The ingestion delay is now back to normal. The root cause is not entirely fixed, so the delay may come back, but we will consider this incident resolved for now.
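
For reference, the ingestion delay quoted in these updates can be thought of as the gap between the current time and the timestamp of the most recent point visible on the query side. A minimal illustration, using a simulated timestamp rather than a real query:

    import time

    def ingestion_lag_seconds(latest_point_timestamp):
        """Gap between now and the Unix timestamp (in seconds) of the most
        recent point visible on the query side."""
        return max(0.0, time.time() - latest_point_timestamp)

    # Example with a simulated point written 15 seconds ago.
    if __name__ == "__main__":
        simulated_latest = time.time() - 15
        print(f"lag: {ingestion_lag_seconds(simulated_latest):.1f} s")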