

Security experts identified the problem as a Border Gateway Protocol (BGP) withdrawal of the IP address prefixes in which Facebook's Domain Name System (DNS) servers were hosted, making it impossible for users to resolve Facebook and related domain names and reach its services. Cloudflare reported that at 15:39 UTC, Facebook made a significant number of BGP updates, including the withdrawal of routes to the IP prefixes that included all of its authoritative nameservers, which made Facebook's DNS servers unreachable from the Internet. By 15:50 UTC, Facebook's domains had expired from the caches of all major public resolvers, and those resolvers began returning "SERVFAIL" responses for lookups of Facebook's domains. The effects were visible globally; for example, the Swiss Internet service provider Init7 recorded a massive drop in traffic to Facebook's servers after the change in BGP routing.

On October 5, Facebook's engineering team posted a blog post explaining the cause of the outage. During maintenance, a command was run to assess the capacity of the global backbone, and that command accidentally disconnected all of Facebook's data centers. Although Facebook's DNS servers ran on a separate network, they were designed to withdraw their BGP routes if they could not connect to Facebook's data centers, making it impossible for the rest of the Internet to connect to Facebook.

Facebook gradually returned after a team got access to server computers at the Santa Clara, California, data center and reset them. A little before 21:00 UTC, Facebook resumed announcing BGP updates, and its domain name became resolvable again at 21:05 UTC. By about 22:45 UTC, Facebook and related services were generally available again.
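The resolution failure described above can be illustrated with a minimal sketch. The example below assumes the third-party dnspython library (an illustrative choice, not a tool referenced in connection with the outage); it shows how a client querying a recursive resolver surfaces the kind of SERVFAIL answers that major public resolvers returned once Facebook's authoritative nameservers became unreachable:

    # Minimal sketch (assumption: dnspython >= 2.0 is installed). It queries
    # the system's configured recursive resolver for an A record and reports
    # how the lookup failed, mirroring the SERVFAIL responses public resolvers
    # returned during the outage.
    import dns.exception
    import dns.resolver

    def check_domain(name: str) -> str:
        try:
            answer = dns.resolver.resolve(name, "A")
            return ", ".join(rdata.address for rdata in answer)
        except dns.resolver.NoNameservers:
            # Raised when every queried resolver answered with SERVFAIL or
            # another unusable response.
            return "SERVFAIL (no usable nameservers)"
        except dns.resolver.NXDOMAIN:
            return "NXDOMAIN (the name does not exist)"
        except dns.exception.Timeout:
            return "timeout (no response from the resolver)"

    if __name__ == "__main__":
        print(check_domain("facebook.com"))

Under normal conditions the lookup returns one or more IP addresses; during the outage window described above, a query like this would have ended in the SERVFAIL branch, since the recursive resolvers had no reachable authoritative nameservers to consult once their cached records expired.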
