Investigating issues with Mews Services

Incident Report for Mews

Postmortem

Problem

For 11 minutes, between 12:43 and 12:54 UTC on November 3, PMS users and Open API clients observed general slowness and an elevated rate of timeouts.

Action

The investigation was prompted by an alert from our automated monitoring system; the performance degradation resolved on its own as we began investigating.

Causes

One of the database replicas became overwhelmed compiling execution plans for incoming queries. The degradation was triggered by a diagnostic query that inspected the replica’s performance data.

Solutions

As an immediate preventive measure, the database performance data catalog was made read-only.

We are designing a safer way to inspect database performance data that has no impact on the live workload.
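
As one illustration of what such an approach could look like, the sketch below copies a bounded slice of query performance statistics off the live replica on a schedule, so that ad-hoc diagnostic queries run against the snapshot rather than against the replica serving production traffic. This is a sketch under assumptions, not Mews's actual design: it assumes the replicas run SQL Server, uses the pyodbc driver, and invents the server names, the perf_snapshots database, and the dbo.query_stats_snapshot table for illustration.

# Sketch of a scheduled snapshot job (assumed names throughout; see note above).
# It copies aggregate plan statistics from the live replica into a separate
# diagnostics database, so ad-hoc analysis never touches the replica itself.
import pyodbc

# Hypothetical connection strings - the real server and database names are not
# part of this report.
LIVE_REPLICA = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=replica.example.internal;DATABASE=pms;Trusted_Connection=yes;"
)
DIAGNOSTICS_DB = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=diagnostics.example.internal;DATABASE=perf_snapshots;Trusted_Connection=yes;"
)

# A deliberately bounded, read-only query over aggregate statistics (no per-plan
# XML), which keeps the cost of taking the snapshot itself small.
SNAPSHOT_QUERY = """
SELECT TOP (500)
    qs.query_hash,
    qs.execution_count,
    qs.total_worker_time,
    qs.total_elapsed_time
FROM sys.dm_exec_query_stats AS qs
ORDER BY qs.total_worker_time DESC;
"""

def snapshot_performance_data() -> None:
    with pyodbc.connect(LIVE_REPLICA, timeout=5) as source:
        source.timeout = 5  # cap query runtime so the snapshot cannot hang on the replica
        rows = [tuple(row) for row in source.cursor().execute(SNAPSHOT_QUERY).fetchall()]

    with pyodbc.connect(DIAGNOSTICS_DB, timeout=5) as target:
        cursor = target.cursor()
        cursor.executemany(
            "INSERT INTO dbo.query_stats_snapshot "
            "(query_hash, execution_count, total_worker_time, total_elapsed_time) "
            "VALUES (?, ?, ?, ?)",
            rows,
        )
        target.commit()

if __name__ == "__main__":
    snapshot_performance_data()

Analysts would then query the snapshot database freely, keeping exploratory and diagnostic workloads entirely off the machines that serve production traffic.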

Posted Dec 01, 2025 - 16:24 CET

Resolved

Between approximately 12:43 and 12:55 UTC on 3 November, users experienced degraded performance (slow loading and API timeouts). Services have returned to normal and remained stable since.

We continue to investigate the root cause and contributing factors.
Learnings will be incorporated into preventative measures to reduce the likelihood of recurrence.
Posted Nov 04, 2025 - 10:19 CET

Update

Between approximately 12:43 and 12:55 UTC, users experienced degraded performance (slow loading and API timeouts). Systems returned to normal and have remained stable since.
We’re continuing to investigate the root cause and contributing factors.
We will share more details if a specific trigger is identified.
Posted Nov 03, 2025 - 18:47 CET

Monitoring

We've detected that something has gone wrong. We're currently investigating the issue and will provide an update soon.
Posted Nov 03, 2025 - 14:23 CET

Investigating

We are currently investigating reports of an issue affecting Mews. The Mews team is actively working to identify the cause, and we will provide updates as soon as possible.
Posted Nov 03, 2025 - 14:00 CET