Between 11:27 AM and 05:08 PM UTC-3, we experienced delays in orders indexing. The orders search service was partially unavailable after a failure in the cache layer, and some features were disabled to prevent further damage. The index slowly self-recovered, and the system was throttled until recovery completed. Once the index recovered, we began processing the backlog of queued orders, which took around 3 hours to work through. The service is now operating normally.
No components marked as affected
Resolved
Between 11:27 AM and 05:08 PM UTC-3, we experienced delays in orders indexing. The orders search service was partially unavailable after a failure in the cache layer, and some features were disabled to prevent further damage. The index slowly self-recovered, and the system was throttled until recovery completed. Once the index recovered, we began processing the backlog of queued orders, which took around 3 hours to work through. The service is now operating normally.
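For readers curious about the recovery pattern described above, here is a minimal sketch of a rate-limited backlog drain. All names (orders_queue, index_order, MAX_OPS_PER_SEC) are hypothetical illustrations, not our actual service internals; the point is only that queued orders are indexed at a capped rate so the recovering index is not overwhelmed again.

import time
from collections import deque

MAX_OPS_PER_SEC = 50  # assumed throttle applied while the index recovers

def index_order(order_id: str) -> None:
    """Placeholder for the real indexing call."""
    print(f"indexed {order_id}")

def drain_backlog(orders_queue: deque) -> None:
    # Drain the queued orders in one-second windows, processing at
    # most MAX_OPS_PER_SEC per window.
    while orders_queue:
        start = time.monotonic()
        for _ in range(min(MAX_OPS_PER_SEC, len(orders_queue))):
            index_order(orders_queue.popleft())
        # Sleep out the remainder of the one-second window.
        elapsed = time.monotonic() - start
        if elapsed < 1.0 and orders_queue:
            time.sleep(1.0 - elapsed)

if __name__ == "__main__":
    backlog = deque(f"order-{i}" for i in range(200))
    drain_backlog(backlog)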
Monitoring
We can confirm an improvement in the extended delays on our platform. We are monitoring the results of our actions.
Identified
We have recovered from the increased error rates in the OMS administrative environment, but we are still experiencing extended delays in orders indexing. We continue to work towards full recovery.
Identified
We are experiencing increased error rates in the administrative environment of the Orders Management System (OMS). We are working to resolve this issue.
Identified
We have identified the root cause of the extended delays in orders indexing. We are working to resolve this issue.
Investigating
We are experiencing delays in orders indexing. We are working to restore normal service.