A little while ago, we told you about the schmick, fancy data centres at NEXTDC in Sydney and Melbourne that we use to keep your data and websites safe. But we know some of you have been super keen to learn more about the hardware we keep tucked away.
Strap in, friends!
Our network begins with our transit and peering providers. Using our Sydney data centre as an example, we’ve got 2 x transit providers (with 2 x redundant links each), and 3 x peering providers.
Each of those seven links runs at 10Gbps, which means we can have a total of 70Gbps coming in and out of the Sydney data centre at any given moment. To put that into context, you could download the latest iOS 13 update (~2GB) onto your iPhone in about 0.23 seconds. Pretty quick!
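If you want to sanity-check that maths yourself, here’s a quick back-of-the-envelope sketch (the 7 x 10Gbps links come from the figures above; the ~2GB update size is the same rough estimate):

```python
# Back-of-the-envelope check of the download-time claim above.
# Assumes 7 x 10Gbps links and a ~2GB (gigabyte) update.

def download_seconds(size_gigabytes: float, bandwidth_gbps: float) -> float:
    """Time to transfer `size_gigabytes` at `bandwidth_gbps` (gigabits/s)."""
    size_gigabits = size_gigabytes * 8  # 1 byte = 8 bits
    return size_gigabits / bandwidth_gbps

total_bandwidth = 7 * 10  # 70Gbps across all links
print(round(download_seconds(2, total_bandwidth), 2))  # -> 0.23
```

The factor of 8 is the usual gotcha: link speeds are quoted in giga*bits* per second, while file sizes are in giga*bytes*.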
The next step in our network is our DDoS mitigation. We use the Corero SmartWall Protection devices for this. These tiny pieces of magic protect our customers from DDoS attacks. We use Simple Network Management Protocol (SNMP) to set up specific alerts and monitoring so we can see what is happening at any time of day and take action accordingly.
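To give a feel for what that monitoring looks like, here’s a toy sketch of threshold-based alerting on a polled traffic counter. The baseline, multiplier, and sample values are all made up for illustration; the actual Corero and SNMP configuration isn’t public.

```python
# A minimal sketch of SNMP-style threshold alerting, as described above.
# The baseline and multiplier are illustrative values, not real config.

def check_traffic(current_pps: int, baseline_pps: int, multiplier: float = 5.0) -> bool:
    """Flag traffic that spikes well past the normal baseline."""
    return current_pps > baseline_pps * multiplier

# Pretend these are packets-per-second counters polled over SNMP:
baseline = 50_000
samples = [48_000, 51_000, 900_000]  # the last sample looks like an attack
alerts = [pps for pps in samples if check_traffic(pps, baseline)]
print(alerts)  # -> [900000]
```

Real DDoS mitigation gear does this in hardware across many metrics at once, but the core idea is the same: compare current traffic against a known-normal baseline and act when it spikes.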
Our Brocade routers are the next in the stack, and we’ll be honest, there’s not too much to say about them! They pretty much do what routers do – they work out the best route from our network in Sydney to each customer’s ISP, and tell other ISPs the preferred way to reach us.
Once we’ve passed through the routers, we end up at the first switch stack. These switches are provided by Extreme Networks. This stack is owned by Synergy Wholesale, our wholesale provider (and sister company). There’s a redundant set of 2 switches in this stack that have multiple links to both of the routers. This means that any of the links can fail, or one of the switches in the stack can fail entirely, without any downtime or outages affecting our customers.
The second switch stack is allocated to the VentraIP Australia network specifically. Once again, this hardware is made up of Extreme Networks switches.
We have 2 x redundant firewalls. In Sydney, there are currently 5 racks for our new hosting services, and 3 racks for our legacy hosting services.
Each new hosting rack has hardware firewalls at the same level as the top-of-rack (TOR) switches. This means that any traffic destined for any new hosting server, Fully Managed VPS, or Self Managed VPS with the security suite pack, will be inspected by the firewall.
The firewall first checks that traffic from the source IP is allowed, based on our fairly large blocklist that we maintain with information from multiple sources. Then, the firewall checks the traffic for any known malicious signatures (which are updated multiple times per day), and if a match is found, blocks that request and logs it for future reference.
This process is similar to ModSec on our shared hosting servers, but it’s done before it even reaches the hosting server/VPS.
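Here’s a toy sketch of that two-stage inspection: an IP blocklist check first, then a payload signature scan. The sample addresses and signatures are entirely made up, and real firewalls operate on packets in hardware rather than strings in Python, but the order of checks mirrors what’s described above.

```python
# A toy sketch of two-stage firewall inspection: blocklist, then signatures.
# The addresses and patterns below are illustrative examples only.

BLOCKLIST = {"203.0.113.7", "198.51.100.9"}       # example blocked sources
SIGNATURES = [b"/etc/passwd", b"<script>alert("]  # example malicious patterns

def inspect(source_ip: str, payload: bytes) -> str:
    """Return the firewall's verdict for a request."""
    if source_ip in BLOCKLIST:
        return "blocked: source on blocklist"
    for sig in SIGNATURES:
        if sig in payload:
            return "blocked: malicious signature"
    return "allowed"

print(inspect("203.0.113.7", b"GET / HTTP/1.1"))            # -> blocked: source on blocklist
print(inspect("192.0.2.1", b"GET /../etc/passwd HTTP/1.1")) # -> blocked: malicious signature
print(inspect("192.0.2.1", b"GET /index.html HTTP/1.1"))    # -> allowed
```

Doing the cheap blocklist lookup first means most malicious traffic is dropped before the more expensive signature scan even runs.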
After the firewalls, there are more switches. These ones are connected directly to the Dell servers themselves.
All of our new hosting servers and VPS servers are connected to two different TOR switches for redundancy and added performance. The connections are load balanced based on multiple factors, so both links are constantly used. This also means that if either of these TOR switches has a hardware failure, our customers can still access their websites/emails.
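The “load balanced based on multiple factors” part usually boils down to hashing: the switch hashes fields from each packet’s headers and uses the result to pick a link, so a given flow always takes the same path. Here’s a simplified sketch of the idea; real switches do this in hardware on MAC/IP/port fields, and the hash choice here is just for illustration.

```python
# A simplified sketch of hash-based load balancing across two uplinks.
# Real switches hash packet header fields in hardware; this shows the idea.

import zlib

def pick_link(src_ip: str, dst_ip: str, num_links: int = 2) -> int:
    """Deterministically map a flow to one of the uplinks."""
    flow_key = f"{src_ip}->{dst_ip}".encode()
    return zlib.crc32(flow_key) % num_links

# The same flow always lands on the same link (keeping packets in order),
# while different flows spread across both links.
print(pick_link("192.0.2.10", "203.0.113.80"))
print(pick_link("192.0.2.11", "203.0.113.80"))
```

Keeping each flow pinned to one link matters because spraying a single flow’s packets across both links could deliver them out of order.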
Both of these TOR switches (and all of our switches, in fact) have dual power supplies and uplinks to the next switch stack, so a single power or link failure won’t take one offline. In fact, a full switch failure hasn’t happened in the multiple years that we have been using these switches.
Our Dell servers use RAID (Redundant Array of Independent Disks) storage. In our case, we use multiple Samsung SAS SSDs in a RAID10 array, which gives us a performance boost as well as multi-disk redundancy.
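RAID10 pairs disks into mirrors and stripes across the pairs, so usable capacity is half the raw capacity. Here’s a quick sketch of that trade-off; the disk count and size below are illustrative examples, not the actual server spec.

```python
# A quick sketch of RAID10 capacity: every disk is mirrored, so usable
# space is half the raw total. Disk count/size here are illustrative.

def raid10_usable_tb(num_disks: int, disk_tb: float) -> float:
    """Usable capacity of a RAID10 array in TB."""
    assert num_disks % 2 == 0, "RAID10 needs an even number of disks"
    return (num_disks / 2) * disk_tb

# e.g. 8 x 1.92TB SSDs:
print(raid10_usable_tb(8, 1.92))  # -> 7.68
```

What you give up in capacity you get back in resilience and speed: the array survives a disk failure in any mirror pair, and reads can be served from either side of a mirror.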
In terms of redundant power, NEXTDC has feeds from multiple power grids, and each of these feeds goes to one of the power rails in our racks. If one of the grids goes out, only one of our power rails will go down. All of our servers are plugged into both. If both of these go down, there are battery packs within the data centre which will last until the diesel generators come online, which will continue to power the data centre until the mains power comes back.