
Why was Twitter failing so much?

Editorial note (2025): originally published in 2012. A structured version has been added for encyclopedic purposes. The original text is preserved in full as part of the historical archive.

The social network Twitter experienced a series of widespread outages due to the simultaneous failure of its redundant servers.

Statement from Mazen Rawashdeh (VP of Engineering)

“Our infrastructure team apologizes deeply for the interruption you had today. Now — back to making Twitter even better and more stable than ever.” – @mazenra

Frequently asked questions

What caused the Twitter service interruption?
The Vice President of Engineering, Mazen Rawashdeh, reported that two parallel systems within Twitter's data centers failed at nearly the same time.

How long did users worldwide experience downtime on Twitter?
Users around the globe were without access to the service for roughly an hour, between about 8:20am and 9:00am PT, with normal service restored by about 10:25am PT.

Did human error or external events like the Olympics cause the outage?
No. The outage was not caused by human error, a cascading bug, or the load from the London Olympics; it was an infrastructure failure unrelated to any specific event.

What steps has Twitter taken following this incident?
Twitter's engineering team published a statement acknowledging the issue, apologized for the interruption, and gave assurances of ongoing work toward improved stability. An official communication confirmed that the problem was resolved.

Has Twitter committed to preventing future incidents?
Twitter has said it is investing aggressively in its infrastructure, though no specifics or timelines were given, and has pledged continuous efforts toward improved reliability. The full statement is reproduced below.


Texto original (2012)

The social network Twitter faced a widespread outage due to the simultaneous failure of its redundant servers. In this article, we revisit the causes and consequences of the incident.

It was a widespread failure of Twitter's servers, as Mazen Rawashdeh explains:

In a post on the company's corporate blog, Vice President of Engineering Mazen Rawashdeh publicly apologized for the failures that left Twitter users without service for more than an hour yesterday.

According to Rawashdeh, the failure originated in Twitter's servers, which, as he noted, "are designed to be redundant: when one fails, a parallel system takes over its work." The problem was that both failed simultaneously, leaving the service without a backup system.
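What Rawashdeh describes is the classic primary/backup failover pattern. As a rough illustration only, here is a minimal sketch in Python; the node names and the independent-failure model are assumptions made for the example, not Twitter's actual architecture:

```python
import random

class Node:
    """A hypothetical service node that is either up or down on any given check."""

    def __init__(self, name: str, failure_rate: float):
        self.name = name
        self.failure_rate = failure_rate  # probability the node is down when checked

    def is_up(self) -> bool:
        return random.random() > self.failure_rate

def serve(primary: Node, backup: Node) -> str:
    """Classic primary/backup failover: route to the primary and, if it is down,
    let the parallel system take over. If both are down at once (the scenario
    described in the post), no system is left to take over and the request fails."""
    if primary.is_up():
        return "served by " + primary.name
    if backup.is_up():
        return "served by " + backup.name + " (failover)"
    raise RuntimeError("total outage: primary and backup failed simultaneously")

# Usage: with a 1% independent failure rate per system, both are down together
# only about 0.01% of the time (0.01 * 0.01). Rare, but not impossible.
primary = Node("dc-a", failure_rate=0.01)
backup = Node("dc-b", failure_rate=0.01)
print(serve(primary, backup))
```

Under this toy model, if each system fails independently with probability p, both fail at the same time with probability p squared, which is rare enough to be noteworthy but never zero.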

He also noted that the problem was not caused by the load generated by the London Olympics or by a programming error, but by an infrastructure issue, "in which we are investing aggressively," he concluded.

via Soy Chile

Issue resolved

Via its corporate blog, Twitter reports that the problem with the service has been resolved:


Official statement:

Our apologies for today’s outage.

We are sorry. Many of you came to Twitter earlier today expecting, well, Twitter. Instead, between around 8:20am and 9:00am PT, users around the world got zilch from us. By about 10:25am PT, people who came to Twitter finally got what they expected: Twitter.

The cause of today’s outage came from within our data centers. Data centers are designed to be redundant: when one system fails (as everything does at one time or another), a parallel system takes over. What was noteworthy about today’s outage was the coincidental failure of two parallel systems at nearly the same time.

I wish I could say that today’s outage could be explained by the Olympics or even a cascading bug. Instead, it was due to this infrastructural double-whammy. We are investing aggressively in our systems to avoid this situation in the future.

On behalf of our infrastructure team, we apologize deeply for the interruption you had today. Now — back to making the service even better and more stable than ever.

– Mazen Rawashdeh, VP, Engineering (@mazenra)

