
Past Incidents

Tuesday 3rd September 2024

MTL: MySQL and PostgreSQL DEV clusters unavailable, scheduled 3 months ago

Due to maintenance by our infrastructure provider, the MySQL and PostgreSQL DEV clusters of the Montreal (MTL) region will be unavailable on Tuesday, September 3, 2024, starting at 12:00 UTC.

The maintenance is expected to take around 1 hour. During that time, the MTL MySQL and PostgreSQL DEV add-ons will not be available.

This incident will be updated to reflect the maintenance status.

[30/08/2024 15:00 CET] Both clusters are available

Monday 2nd September 2024

MTL: FSBuckets maintenance, scheduled 3 months ago

Due to hardware maintenance planned by our provider in the next few days, we will need to migrate the FSBucket service of the Montreal (MTL) region on Monday, September 2, 2024, starting at 08:00 UTC.

The maintenance is expected to take less than 1 hour. During that time, the FSBucket service will be read-only. Write operations will be denied. Read operations will continue to work as expected.

All applications linked to an FSBucket add-on in the Montreal region will be redeployed so they can reconnect to the server with read/write access.

This incident will be updated to reflect the maintenance status.

EDIT 2024-09-02 08:08 UTC: The maintenance is starting. FSBuckets are now read-only.

EDIT 2024-09-02 08:24 UTC: Applications have been redeployed and should now be able to access their FSBucket.

EDIT 2024-09-02 09:10 UTC: All applications have been redeployed since 08:40 UTC and the maintenance is over. We are still having an issue with the web interface; we are looking into it.

EDIT 2024-09-02 12:16 UTC: The web interface issue has been fixed.

Sunday 1st September 2024

No incidents reported

Saturday 31st August 2024

No incidents reported

Friday 30th August 2024

MTL: Git repositories maintenance, scheduled 3 months ago

Due to hardware maintenance planned by our provider in the next few days, we will migrate the Git repositories service of the Montreal (MTL) region on Friday, August 30, 2024, starting at 08:00 UTC.

The maintenance is expected to take less than 1 hour. During that time, the Git repositories service will be read-only. Git push operations will be denied. Pull operations will continue to work as expected.

This incident will be updated to reflect the maintenance status.

EDIT 2024-08-30 08:30 UTC: The maintenance is now over. Applications' Git deployment URLs have changed from push-n1-mtl-clevercloud-customers.services.clever-cloud.com to push-n2-mtl-clevercloud-customers.services.clever-cloud.com. The SSH identity remains the same. The old domain will keep working for backward compatibility.
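
If a Git remote in your repository pins the old push URL, you can point it at the new node. A minimal sketch, assuming a remote named "clever"; the remote name, URL scheme, and app path shown here are placeholders, so use the exact deployment URL displayed for your application:

    # List remotes and their current URLs
    git remote -v

    # Point the remote at the new MTL push node
    # (remote name "clever" and the <your_app_id> path are assumptions)
    git remote set-url clever git+ssh://git@push-n2-mtl-clevercloud-customers.services.clever-cloud.com/<your_app_id>.git

Since the old domain keeps working, this step is optional; it only avoids relying on the backward-compatibility alias.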

Thursday 29th August 2024

No incidents reported

Wednesday 28th August 2024

No incidents reported

Tuesday 27th August 2024

No incidents reported

Monday 26th August 2024

Infrastructure: A hypervisor on gra1hds is not responding properly

A hypervisor is not responding. A VM seems to be monopolizing the CPU.

We are force-rebooting this hypervisor.

21:09 status: the server refuses to reboot. We have asked OVHCloud support for help.

A technician is having a look at that server. We are waiting for the result of their analysis.

21:33 status: the technician got back to us and reported a hardware issue. We are waiting for further updates and actions.

2024-08-27 07:15 status: OVHCloud support finished replacing the motherboard and gave the server back to us. It fails to boot outside of rescue mode. While some of us are working on getting the kernel to boot, others are moving all the data off the server to restore the impacted services for our customers.

09:50 status: all services are back up and running for our customers.

Reverse Proxies / Add-ons: Reverse proxies partially down

(Times are in UTC)

  • At 14:24, two of the add-on reverse proxies of the PAR region stopped responding. After investigation, we found that both had failed to reconfigure correctly due to a "stuck" port: the port was still considered in use, so the switch from the old process to the new one failed (illustrated in the sketch below).
  • At 14:34, we decided to fully reboot these two reverse proxies, which fixed the issue.

As a consequence of this incident, some applications using one of these two reverse proxies (out of 7 in total) lost their connection to their database for 10 minutes.
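
For illustration, a minimal Python sketch of this failure mode; the loopback address and port are assumptions, as the actual reverse-proxy stack and ports are not named above. A new process cannot bind a port while the old one still holds it, which is how a reconfiguration gets stuck:

    import socket

    ADDR = ("127.0.0.1", 8080)  # hypothetical proxy listen address

    # The "old" process still holds the port...
    old = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    old.bind(ADDR)
    old.listen()

    # ...so the "new" process cannot take it over and the switch fails.
    new = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        new.bind(ADDR)
    except OSError as e:
        print(f"reconfiguration stuck: {e}")  # e.g. [Errno 98] Address already in use

    old.close()
    new.close()

Zero-downtime handovers typically avoid this by setting SO_REUSEPORT on both sockets or by passing the listening file descriptor to the new process; a full reboot, as done here, clears the stale state the hard way.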