Posts posted by Roy
-
3 hours ago, VilhjalmrF said:
Ye people were able to do it on console so they are definitely allowed on PC. And legendary is painful.
And btw you get these gametypes/maps from other players who have made them. So you have to search for a specific player that made the gametype/map you want and then save it. So just in case you don't know how or where to get them I'm gonna leave this video here:
I'll be taking a look shortly
I'd love to play Infected again. I actually remember finding some custom games earlier this year playing Infected and it brought back a lot of memories haha.
-
Hey everyone,
I decided to make this thread to show people who are interested what plans I have for our Anycast network expansion along with the paths we can go down. This post will also include the challenges we have to face with expanding the network.
You may also be interested in the plans @Dreae and I have for BiFrost (formerly known as Compressor V2), which are detailed here. This includes what we plan on doing about application layer attacks.
This thread will be more focused on the infrastructure of the network.
What Is Anycast?
Before going into our plans, I just wanted to briefly go over what an Anycast network is. Basically, it's a network we fully own that sits in front of our game servers. We have multiple POPs (Points of Presence), which are servers scattered around the world all announcing the same IPv4 block (92.119.148.0/24). All of our game servers have a public IP from that block under this network. The client routes/connects to a POP server based on the AS-path and/or BGP hop count; usually, this is the POP server physically closest to them. From there, the POP server forwards the traffic to the machines that host the game servers themselves.
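To make the routing idea above concrete, here's a minimal sketch of how BGP best-path selection tends to pick the "closest" anycast POP: every POP announces the same prefix, and a client's router prefers the announcement with the shortest AS-path. The POP names and AS-paths below are made up for illustration.

```python
# Hypothetical anycast announcements: every POP announces the same prefix,
# but the AS-path a given client sees differs per POP (made-up data).
pops = {
    "nyc": ["AS6939", "AS398129"],              # 2 AS hops away
    "ams": ["AS3257", "AS1299", "AS398129"],    # 3 AS hops away
    "sgp": ["AS2914", "AS4657", "AS398129"],    # 3 AS hops away
}

def best_pop(announcements):
    # Prefer the shortest AS-path, mirroring default BGP best-path selection.
    return min(announcements, key=lambda pop: len(announcements[pop]))

print(best_pop(pops))  # a client near NYC would route to the "nyc" POP
```

This is a simplification (real BGP selection considers local preference, MED, and more before path length), but it captures why clients usually land on the POP nearest to them.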
This network came with plenty of pros, but it's also a lot to maintain since we're fully responsible for it (e.g. forwarding traffic, upkeep, (D)DoS protection/filtering, and more). This is largely why I'm swamped all the time in GFL. Thankfully, it's something I'm very interested in, and it has taught me A LOT about programming and networking.
For a more in-depth explanation, I'd recommend reading this CloudFlare post.
BiFrost
I figured I'd start off by explaining what BiFrost is. BiFrost will be the new open-source packet processing/filtering software that @Dreae and I will create and maintain. It was formerly known as Compressor V2 since we're currently using Compressor, made by Dreae. This software will be responsible for forwarding the traffic our clients send on to the game servers themselves. Additionally, it will be responsible for the following:
- Filtering and dropping malicious traffic from (D)DoS attacks.
- Caching specific packet types to fight against low-throughput attacks.
- Sending non-client outbound traffic from our game servers.
For more information, I'd recommend taking a look at this thread which goes in-depth on what type of filtering we'll be performing with BiFrost and more.
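The three responsibilities above boil down to a per-packet decision at each POP. Here's a hypothetical sketch of that decision, purely for illustration; the packet fields, cache contents, and function names are all made up and not BiFrost's actual design.

```python
# Hypothetical per-packet decision at a POP: drop malicious traffic,
# answer cacheable queries locally to absorb floods, else forward.
CACHED_REPLIES = {"a2s_info": b"cached A2S_INFO reply"}

def handle_packet(pkt):
    if pkt.get("malicious"):            # e.g. failed a filter rule
        return ("drop", None)
    query = pkt.get("query")
    if query in CACHED_REPLIES:         # serve from cache, never hit the game server
        return ("reply", CACHED_REPLIES[query])
    return ("forward", pkt["dst"])      # pass through to the game server

print(handle_packet({"query": "a2s_info"}))
print(handle_packet({"malicious": True}))
print(handle_packet({"dst": "game-server-1"}))
```

Caching replies at the POP is what makes low-throughput attacks cheap to absorb: the flood never reaches the game server at all.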
Our Current Setup
Before going into detail on our expansion plans, I wanted to give an overview on our current setup.
Currently, we have 14 POPs scattered around the world. Each POP is a VPS from a single hosting provider. We have three POPs in Asia, four POPs in Europe, and seven POPs in North America.
Each POP runs Compressor which is packet processing software made by Dreae. I've made changes to this software and added filtering as a temporary solution to mitigate (D)DoS attacks. Currently, I am still expanding these new, temporary filters to our POP servers.
With my new filters (which, again, are a temporary solution until BiFrost is completed), most (D)DoS attacks should be dropped at the POP. Therefore, if anything, some POPs may get over-flooded during attacks, affecting a portion of our players. However, there's a high chance other POPs will stay up, leaving the players routing through those POPs unaffected. We've also mostly eliminated a single point of failure on the network by having the game server machines send traffic back to the client directly via my IPIP Direct program instead of having all outbound traffic from the game servers go back through a POP on the network.
Some Notes On Finding Hosting Providers
I just wanted to point out some things we need to look for when searching for new hosting providers for the network. I won't mention pricing since that's self-explanatory.
Firstly, we need to make sure the hosting provider supports BGP sessions (this allows them to announce our IPv4 block). Some hosting providers do BGP sessions for free while others charge either one-time or monthly fees. Obviously, I'd prefer finding providers that do these for free. However, if the fees aren't too pricey, I'd consider going with them depending on the quality of their service. Thankfully, there's a helpful Google Spreadsheet here that lists hosting providers known for offering BGP sessions. I've found some hosting providers that support BGP sessions but aren't on this list, so that's something to keep in mind. The list is maintained by the community, which is neat, though.
Secondly, I usually try to find hosting providers that support both GTT and NTT as direct peers. I've found that primarily using these peers on our current network results in better routing and makes the network easier to maintain. If we peered with a bunch of providers directly, there would be a lot of sub-optimal routes. A lot of hosting providers, including our current one, do support BGP communities (example). This allows us to tune routing more easily by setting BGP communities in our BIRD config (the BGP daemon running on our POPs). However, we still wouldn't have full control over the BGP routing, so there would likely still be sub-optimal routes. This is why I'd suggest using just one or two direct peers for the network.
Lastly, the bandwidth limits. Our game servers consume a lot of bandwidth (probably around ~40 TB a month or more), and VPSs usually have lower bandwidth limits. Therefore, something to check is the bandwidth overage fees, ensuring those aren't too high in case we do exceed the limits.
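The overage math is worth sketching out before signing up with any provider. The prices and limits below are made-up example numbers, not quotes from any actual host.

```python
# Rough sketch of monthly cost with bandwidth overage (illustrative numbers).
def monthly_cost(base_price, included_tb, used_tb, overage_per_tb):
    overage = max(0.0, used_tb - included_tb)   # only pay for what exceeds the cap
    return base_price + overage * overage_per_tb

# e.g. a $10/m VPS with 10 TB included, 40 TB actually used, $2 per extra TB:
print(monthly_cost(10.0, 10.0, 40.0, 2.0))  # 70.0
```

A cheap base price can easily be wiped out by overage fees at our traffic levels, which is why this check matters.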
Direct Transit
I also wanted to mention the possibility of us purchasing direct transit in the future. Direct transit would allow us to peer with an upstream directly instead of going through our current hosting provider. For example, here's a BGP route in our current setup:
Our network (AS398129) -> Our POP Hosting Provider Exchange (ASxxxx) -> GTT/NTT (Direct Peers) -> Other ISPs/peers to destination.
If we were to purchase direct transit from GTT or NTT, our route would look like this instead:
Our network -> GTT/NTT -> Other ISPs/peers to destination.
To my understanding, this would remove the hop in between our network and the direct peers we currently use, improving routing and potentially reducing latency.
Unfortunately, direct transit is VERY expensive and I don't believe it's something GFL will be able to afford, at least from a tier 1 ISP, especially considering bandwidth is the main factor in pricing and our game servers consume a lot of it. Though, it's something I still plan on looking into. It would be neat if some of the providers we can purchase direct transit from gave us control over blocking traffic at the upstream level; we'd then be able to leverage their network capacity to block high-volume attacks.
What Approaches We Can Take
I wanted to go into detail on the two likely approaches we can take with this project. As of right now, I'm honestly not sure which approach we'll be taking since it depends on many factors that we won't know until we ask the hosting providers we want to go with.
Quantity Over Quality
The quantity over quality approach is where we would be expanding with a high amount of cheaper POP servers. We would likely be renting cheaper VPSs from multiple hosting providers around the world and using them as our POP servers on the network. We'd likely get above 100 POP servers in total if we went with this approach.
One of the key reasons this approach is considered beneficial is that we wouldn't be relying as much on the hosting provider's built-in (D)DoS protection against high-volume attacks. Currently, most VPSs include a one-gigabit link. Let's say we had 10 POP servers with one-gigabit links; if the hosting provider didn't do anything about high-volume attacks, we'd basically have 10 gigabits of total network capacity, and any filtering we'd apply with BiFrost wouldn't be able to block attacks exceeding one gigabit at a single POP. Therefore, if we decided to buy 100+ cheaper VPSs with one-gigabit links, this would potentially give us over 100 gigabits of total network capacity (there are obviously a lot of other factors here, such as the actual network capacity at each data center, multiple POP servers being on the same node, and so on).
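The capacity arithmetic above can be written down as a quick sketch. Each POP can only absorb up to its own link speed, but anycast spreads an attack across POPs, so the total absorbable volume scales roughly with the POP count (ignoring the caveats just mentioned, like shared nodes and per-DC capacity).

```python
# Aggregate absorbable attack volume across POPs (illustrative model only).
def total_capacity_gbps(pop_count, link_gbps=1):
    return pop_count * link_gbps

print(total_capacity_gbps(10))    # 10 Gbps with 10 one-gigabit POPs
print(total_capacity_gbps(100))   # 100 Gbps with 100 one-gigabit POPs
```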
The other pro to this approach is that fewer clients would be routing to each POP. Therefore, if a POP is over-flooded by an attack, fewer players would be affected.
The one potential con to this approach is the amount of maintenance required; managing 100+ servers is quite a task. We're planning to automate as much as possible with BiFrost, so if we do a good job there, I believe we could maintain it without too much work. The other thing is, since these POPs would be cheaper VPSs with less power, we'd need to ensure they have enough power to run BiFrost, since it won't be as lightweight as our current software.
Quality Over Quantity
The next approach I want to talk about is quality over quantity. This is self-explanatory for the most part. Basically, we'd likely be renting dedicated machines as our POP servers. Instead of having many cheaper POP servers, we'd have fewer, very powerful POP servers.
The one pro here is that it'd be easier to maintain. However, there is a very important factor: most dedicated machines still include a one-gigabit NIC and link, and machines with 10-gigabit NICs or higher are typically expensive. Therefore, we would NEED the hosting provider to filter high-volume (D)DoS attacks for us. If an attack exceeds the POP's NIC/link, there is absolutely nothing our packet processing/filtering software (BiFrost) can do to mitigate it unless we have access to place filtering rules at the upstream level.
With that said, since there would be fewer POPs, more players would be routing through each POP. Therefore, if a POP was over-saturated or went down, more players would be impacted.
One other thing I may consider in the future, if this all works out, is colocation. This means we'd rent racks at multiple data centers, build our own machines, ship them to the DCs, and use them as POP servers. Before doing this, I'd need to learn how to build the servers themselves (the hardware), which is on my to-do list anyway since I want to build two personal home servers to do pen-testing with via XDP, DPDK, etc. There would be a higher up-front cost with this since we'd have to purchase the hardware, but the monthly cost should be a lot cheaper, so we'd save money in the long run. We might also be able to find a data center that sells 10-gigabit links for somewhat cheap, and we'd be able to build a machine with a 10+ Gbps NIC, giving us the power to mitigate larger attacks on that specific POP.
A Mix
The last approach would be mixing our POPs with quality and quantity. Therefore, for example, we'd have powerful POPs in locations we see a lot more traffic/attacks in. However, we'd have cheaper POPs in locations that don't have as much traffic.
Since BiFrost will be including POP monitoring and statistics in the future, one thing I considered is having all of our base POPs pretty powerful. However, if BiFrost detects a (D)DoS attack causing a specific POP to go over 70% CPU or network usage, we could automatically spin up cheaper POPs (VPSs) in that location to absorb the attack and bring the load down temporarily.
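The auto-scaling idea above can be sketched in a few lines. This is purely hypothetical: `pops_needing_help` and the stats format are made up, and the actual provisioning call would be whatever API the VPS provider exposes.

```python
# Hypothetical auto-scaling check: flag any POP above 70% CPU or network
# load so extra cheap VPSs could be spun up in that location (made-up data).
LOAD_THRESHOLD = 70.0

def pops_needing_help(stats):
    return [name for name, s in stats.items()
            if max(s["cpu_pct"], s["net_pct"]) > LOAD_THRESHOLD]

stats = {
    "dallas": {"cpu_pct": 85.0, "net_pct": 40.0},   # over-loaded
    "london": {"cpu_pct": 20.0, "net_pct": 15.0},   # healthy
}

for pop in pops_needing_help(stats):
    print(f"would spin up extra VPSs near {pop}")
```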
The Plans
I'd like to now go over the possible plans for the first two approaches mentioned above.
Quantity Over Quality
As of right now, if we went with a quantity over quality approach, we'd be looking for three to four solid hosting providers for our POP servers, for redundancy and coverage reasons. We would be purchasing cheaper VPSs, each hopefully less than $10/m. Each VPS would need 2 GB of RAM to support BiFrost, but they can get by with low storage space and one CPU core.
I was hoping to aim for 100+ POP servers at $700 - $900/m or so which isn't too bad considering the protection we'd gain from it.
As stated above, this would definitely be more maintenance. However, we'd have better damage control since fewer clients would be routing through each POP (so if one goes down, fewer players would be impacted). One thing we'd need to make sure of is having BiFrost handle everything on each POP automatically, which would result in less maintenance on our part.
Quality Over Quantity
This is the approach I have a bit more planned out since I've been looking into it the past couple weeks.
Recently, Hivelocity introduced more locations for their bare metal machines (AKA dedicated servers); you can find a list here. We did use Hivelocity as a POP hosting provider in NYC at one point at the beginning of the year. However, we needed to cancel the machine for reasons not related to them. When we did have the POP with them, it worked very well, though. The only complaint I had was the machine's NIC driver not supporting XDP-native (the supported driver list can be found here). In fact, the NIC driver didn't even have enough headroom for XDP-generic, causing the XDP program to process packets slower than IPTables; this was eventually patched thanks to the XDP maintainer (more information can be found in this thread). I would need to discuss this with them and see if we can get a NIC that does support XDP-native, or else work to get XDP-native support added to the NIC drivers they currently use, which is a more complicated process (I may be able to implement support, but that's pretty complex and requires understanding the Linux kernel code).
With that said, we would NEED to have Hivelocity filter high-volume attacks for us since these machines only include one gigabit NICs/links.
If everything went well with Hivelocity, I'd like to purchase their $50/m dedicated machines and use them as POP servers in most locations other than our prime locations.
As for our prime locations, I've been in talks with another hosting provider who has a very nice setup. They claim they can easily mitigate high-volume attacks, and they also offer some neat tools such as BGP Flowspec, which would allow us to add filters to our upstreams via BGP. This means we could potentially block high-volume attacks ourselves by passing filters to the upstreams. This would be pretty advanced, but it's something I'm definitely interested in doing with BiFrost if possible. Unfortunately, they don't have many locations. However, they do have locations in Dallas and London at the moment, along with plans to expand into New York City and Chicago. I consider these cities our prime locations, and if we went with a quality over quantity approach, I'd like to use this hosting provider for them. They also offer colocation and custom builds if necessary.
We'd most likely have 10 or so $50.00/m POPs from Hivelocity and 3 - 4 $75.00/m POPs from the hosting provider mentioned above. This should range from $750 - $850/m or so which is my goal for this Anycast expansion.
Conclusion
I hope this thread interests some people and helps you understand the thought process I have going into expanding the network. There are obviously a lot of factors that go into all of this and, to be honest, I'm still not sure which plan we'll go with. It's pretty exciting planning all of this out, but I will admit it's a huge project and consumes a lot of time, especially when you need to write many emails to hosting providers discussing our options, etc.
These expansion plans won't be executed until Dreae and I have completed BiFrost.
If you have any questions, please feel free to ask!
Thank you for your time.
-
14 hours ago, VilhjalmrF said:
A shit ton: Fat kid, 5 Levels, GET OUT OF MY HOUSE!!!, Drag Race, Cops n' Robbers, Duck Hunt, Jenga, Clogged, Michael Myers, etc.
Sounds neat, I'll definitely have to check it out. I'm trying to finish the Halo 2 campaign on heroic, which feels more like expert (I can't imagine legendary)

P.S. I didn't notice you can literally hit TAB in these campaigns to switch between the classic and anniversary graphics. I think that's an awesome feature they added.
-
What type of custom games can you play? I used to find Infected very fun back in the Xbox 360 days on Halo 3 while I was in elementary school lmao.
-
I've also removed the following maps due to them causing server crashes:
- bhop_abyss
- bhop_limbo
- bhop_lost_temple
- bhop_affliction
If we find the cause to these maps crashing the server, we'll add them back.
Also, thank you to @Enrico for the reports!
Thanks.
-
Hey everybody,
I just wanted to let you know I've updated our CS:S BHop server. The following changes have been made:
- Changed the timer's MySQL database and table character sets and collations to utf8mb4 and utf8mb4_general_ci. This resolves players' ranks/points not updating after completing maps. After moving CS:S BHop's database over to the new web machine, it used a different collation, which was resulting in errors when the map points were being recalculated.
- Updated Shavit's timer to the latest version to hopefully correct other issues.
- Removed some conflicting plugins related to SourceBans.
While testing, I wasn't able to beat a map in a way that awarded me additional points. However, after I recalculated all the map points, my points changed, and this didn't result in an error after changing the collation (unlike before). Since Shavit's timer has more of a skill-based ranking system:
Quote: This system doesn't allow "rank grinding" by beating all of the easy maps on the server but instead, awards the players that get the best times on the hardest maps and styles.
I wasn't able to test getting points on map completions directly. However, this should be working now regardless as stated above.
With the new updated timer, some HUD changes were made that I've noticed. I didn't see these changes as negative so I've kept them.
If you notice any bugs or have suggestions, feel free to reply here.
Tagging @Reeve and @Khel for visibility.
Thank you.
-
Hey everyone,
I just wanted to give everyone a heads up that I will be expanding the new filters mentioned here to the rest of our POPs in Europe this weekend. I've already expanded the new filters to our Amsterdam POP; I forgot to announce this here earlier in the week. There were some issues with some of our game servers when expanding the new filters to this POP because some services weren't on the whitelist (e.g. the CS:GO KZ server broke along with one other small service on another server), but this was corrected later on.
There is a chance this will break some services on our game servers. However, @Dreae, @Xy, and I all have access to whitelist specific outbound services. Xy is also working to give others more restricted access to do this as well. If something breaks, we will correct it ASAP and if it ends up taking too long, we'll add the older filters back and come up with a game plan later.
If you have any questions or concerns, please let me know!
Thank you.
-
Hey everyone,
I decided to make a guide on running an MTR or trace route to troubleshoot general networking issues. This guide is mostly focused on running these tools against GFL's infrastructure (e.g. our CS:GO ZE server). However, these tips can easily translate to the Internet as a whole.
These tools are needed when a player is having network-related issues on our servers and website.
In this guide, we're primarily going to focus on Windows since that's what most of our users use. However, in the video and at the bottom of the written guide, I also provide some information for Linux (which also applies to Mac users, since macOS is Unix-based).
Video
Here's a video I made attempting to explain an MTR and trace route. I made it quickly since I was in a rush, but I believe it covers a lot of valuable information.
Since some users don't want to follow along, don't like long videos, or generally understand written guides better than videos (like myself), I made a written guide below as well.
What Is A Trace Route?
A trace route is a tool/command you run against a specific host name or IP address to see what route you take to reach the destination. One thing to note is that when you connect to anything on the Internet, you're going through "hops" to get there. These hops are routers that forward each packet/frame to the next hop until you've reached your destination.
What Is An MTR?
MTR stands for "My Trace Route" (it was formerly known as "Matt's Trace Route"). An MTR is basically a trace route tool, but offers the following changes compared to a trace route:
- Continuously sends requests until the tool is stopped.
- Shows packet loss at each hop.
- Doesn't wait for three replies from each hop. Therefore, the route populates nearly immediately after executing the command.
I consider all three of these changes pros. Therefore, I highly recommend using an MTR instead of a trace route when troubleshooting networking issues.
Running A Trace Route
Running a trace route on Windows is fairly simple. To start, I'd recommend searching for "Command Prompt" in Windows and opening a cmd.exe window (Command Prompt). Alternatively, you may hit Windows Key + R to display the "Run" box and enter "cmd" to open a Command Prompt window. From here, you'll want to run tracert <Hostname/IP>, where <Hostname/IP> is either the host name or IP address of the destination.
In this guide, we're going to be using CS:GO ZE's host name which is goze.gflclan.com. You may also use the IP address which is 216.52.148.47. Here's an example:
tracert goze.gflclan.com
One thing to note is that you should not provide a port with the IP/host name (e.g. tracert goze.gflclan.com:27015). By default, trace routes use the ICMP protocol, which doesn't include a port, unlike the UDP/TCP protocols. There are trace route/MTR tools that let you use the TCP/UDP protocols with a source/destination port (the mtr command on Linux comes with these options built-in, which is discussed a bit at the bottom of this post). However, that is outside the scope of this guide.
After running the command, it will take some time to complete depending on the route you take to the destination. It takes time because it waits for three ICMP replies from each hop (unlike an MTR, as listed in the features above). If a hop times out, it's going to take even longer since there's a specific timeout for each reply.
Here are the results from my trace route:

Now the information here may be overwhelming to some users. Therefore, I will try to break things down the best I can.
Quote: Tracing route to goze.gflclan.com [216.52.148.47] over a maximum of 30 hops:
This line is basically for visibility/debugging. If a host name is specified, the tool attempts to resolve it to the associated IP address and outputs it in brackets after the host name, as seen above. This line also tells us the maximum hop count for this trace route (30 by default). This means that if the route needs more than 30 hops, any hops past the 30 mark will not be included in our results. Typically, routes should never exceed 30 hops; in most cases where they do, it's due to routing loops, etc.
Note - Usually the more hops you take, the further away your destination is from you. However, there are many sub-optimal routes on the Internet. Therefore, this isn't always true.
Now, let's go over the results themselves which are the following:
Quote:
 1    <1 ms    <1 ms    <1 ms  10.1.0.1
 2    22 ms    18 ms    14 ms  cpe-173-174-128-1.satx.res.rr.com [173.174.128.1]
 3    32 ms    33 ms    37 ms  tge0-0-4.lvoktxad02h.texas.rr.com [24.28.133.245]
 4    20 ms     9 ms    18 ms  agg20.lvoktxad02r.texas.rr.com [24.175.33.28]
 5    12 ms    16 ms    12 ms  agg21.snantxvy01r.texas.rr.com [24.175.32.152]
 6    35 ms    28 ms    31 ms  agg23.dllatxl301r.texas.rr.com [24.175.32.146]
 7    25 ms    27 ms    21 ms  66.109.1.216
 8    42 ms    23 ms    24 ms  66.109.5.121
 9    25 ms    27 ms    26 ms  dls-b21-link.telia.net [62.115.156.208]
10   166 ms    39 ms    32 ms  kanc-b1-link.telia.net [213.155.130.179]
11   146 ms    83 ms    51 ms  chi-b2-link.telia.net [213.155.130.176]
12    39 ms   193 ms   128 ms  chi-b2-link.telia.net [62.115.122.195]
13    43 ms    43 ms    48 ms  telia-2.e10.router2.chicago.nfoservers.com [64.74.97.253]
14    49 ms    45 ms    46 ms  c-216-52-148-47.managed-ded.premium-chicago.nfoservers.com [216.52.148.47]

As you can see, we have five columns here. I will explain each column below:
- The first column indicates the hop number. This is incremented at each hop, starting from one. I'll use this number to refer to hops below (e.g. hop #x).
- The second column is the latency (in milliseconds) of the first ICMP response received back; basically, the time from when you sent the packet to when you received the reply. Typically, lower latency is better. If there was no response (e.g. a timeout), it will output an asterisk (*) instead of the latency.
- The third column is the latency of the second ICMP response received back.
- The fourth column is the latency of the third ICMP response received back.
- The fifth column shows the IP address and/or the host name of the hop. I believe the trace route tool performs an rDNS lookup on the IP address to see if there's a host name on record; if there is, it displays the host name and then the IP address in brackets.
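To make the five-column layout concrete, here's a small sketch that parses one line of Windows tracert output into those columns. The regex is my own illustration of the format, not anything from the tracert tool itself.

```python
import re

# Parse one tracert output line: hop number, three latencies (or "*"), host.
LINE = r"^\s*(\d+)\s+(<?\d+ ms|\*)\s+(<?\d+ ms|\*)\s+(<?\d+ ms|\*)\s+(.+)$"

def parse_hop(line):
    m = re.match(LINE, line)
    if not m:
        return None
    hop, l1, l2, l3, host = m.groups()
    return {"hop": int(hop), "latencies": [l1, l2, l3], "host": host.strip()}

print(parse_hop("  4    20 ms     9 ms    18 ms  agg20.lvoktxad02r.texas.rr.com [24.175.33.28]"))
```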
One thing to note is when you look at the host names, usually they give an indication of where the hop is located. For example, in my route we see hop #4 that has the host name agg20.lvoktxad02r.texas.rr.com. I believe this host name indicates the hop is located in Live Oak, TX, which is the current city I'm living in (which is technically inside of San Antonio, TX). The next hop (#5) has a host name of agg21.snantxvy01r.texas.rr.com and I believe this indicates the hop is in San Antonio, TX. I know for sure hops #6 to #9 are all located in Dallas, TX. You can usually confirm by looking at the latency you get to the hop as well (e.g. I'm only getting 12ms latency to the San Antonio hop which would make sense since I'm located in San Antonio, TX).
Anyways, we can see towards the end that we start routing to Telia in Chicago based on the host name (e.g. telia-2.e10.router2.chicago.nfoservers.com). NFO has a router with Telia, which is hop #13, and hop #14 is our actual destination (the NFO machine that hosts our CS:GO ZE server).
If NFO or Internap (the data center NFO hosts their machines in) blocks your IP, you will more than likely start seeing timeouts after hop #12, since you get blocked when trying to route into the NFO network. An asterisk (*) is output in place of the latency when there's a timeout. This is why we ask users who aren't able to connect to our CS:GO ZE server for trace routes to the server. In cases like these, the block is usually due to the player's network performing port scans against NFO's network. Unfortunately, this usually indicates that the player's computer or another device on the network is compromised (most likely) or that the router is configured to perform port scans against networks (least likely).
I believe that's all you need to know for a trace route. Our Technical Team will be able to assist you with the results if you need any clarification or help on them.
Running An MTR
Now it's time to learn about running an MTR. As mentioned before, an MTR is similar to a trace route, but includes some pros in my opinion. To my knowledge, Windows doesn't include an MTR tool by default. Therefore, I'd recommend using a third-party tool named WinMTR. This tool comes with a GUI making it more user-friendly.
After installing, you may run either the 32-bit or 64-bit versions (I'd suggest just using 64-bit since you're most likely running 64-bit nowadays). Just like a trace route, you may specify a host name or IP address under the "Host" field. Afterwards, feel free to hit "Start".
Here are my results:

You'll notice that the route populates nearly instantly, unlike a trace route. This is because it's not waiting for three replies from each hop. With that said, it also continuously sends ICMP requests until you hit "Stop".
There are a few new columns when using this tool compared to a trace route. The most important column is "Lost %" which indicates how much packet loss we're getting to that specific hop. Now, the closer we are to 0%, the better. The percentage indicates how many requests we've sent that didn't get a reply back.
One big thing to note about packet loss at each hop is that some hops rate-limit ICMP responses or have them turned off entirely (meaning you'll see 100% loss to that specific hop). So if you see packet loss on a hop, but all hops after it still display 0% packet loss, this is most likely the reason and nothing to worry about. However, if you see a hop with packet loss and every hop after it, all the way down to the destination, also has packet loss, that's when you know you're experiencing actual packet loss; the first hop showing the loss is probably the one dropping the packets (though this isn't always true).
In the above example, I have packet loss on some hops due to rate-limiting. However, you can see hop #14 (the destination), doesn't have any packet loss indicating there aren't any dropped packets to the destination itself. This was actually the results from what I did in the video above where I sent requests every 0.2 seconds instead of every second (I didn't have any packet loss when sending a request each second, but due to rate-limiting, I did have packet loss on some hops when sending a request every 0.2 seconds).
Other than that, the rest of the new columns are related to latency to each hop. Since we are continuously sending requests and measuring the latency for each response, this allowed for more columns such as "Best" (which is the lowest latency to the hop), "Avrg" (which is the average latency to the hop), "Worst" (which is the worst latency to the hop), and "Last" (which is the latency of the latest request sent out and received to the hop).
The "Sent" and "Recv" columns indicate how many ICMP requests we've sent to the hop and how many replies we've received. The packet loss percentage is (Packets Sent - Packets Received) / Packets Sent.
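The loss ratio described above, as a one-liner sketch:

```python
# Packet loss percentage from the "Sent" and "Recv" counters.
def loss_pct(sent, received):
    return 100.0 * (sent - received) / sent if sent else 0.0

print(loss_pct(100, 97))  # 3.0 (% loss to that hop)
```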
For outputting MTR results, you can take a screenshot, or use the "Copy Text to clipboard" or "Copy HTML to clipboard" buttons. You may also use the export buttons to output to a file. Typically, I'd suggest just using "Copy Text to clipboard" and pasting the results.
Additionally, you may also hit the "Options" button and it'll show a box like this:

The interval indicates the time in-between sending ICMP requests. The lower this number is, the more requests you'll send obviously. However, as explained above, lower values will lead to more rate-limiting on certain hops, etc. Usually there's no reason to change this from one second.
The ping size is the ICMP packet length in bytes. You won't typically need to change this, but if you want to send bigger packets, you may set it to anything under your MTU limit (I don't believe this supports fragmentation, so you'll need to keep it under your MTU).
I believe LRU (least-recently-used) indicates how many hosts it can store in one route (similar to the max hop count in trace routes); I'd suggest leaving this at the default (128). The "Resolve names" box indicates whether to do rDNS lookups on each IP address to get a host name for that hop. In WinMTR, when a host name is found via rDNS lookup and "Resolve names" is checked, it does not display the IP like a trace route does. Since rDNS lookups are sometimes inaccurate, that can be a good reason to disable resolving host names in certain cases.
I believe that's about all you need to know for an MTR.
Linux/Mac
You can execute a trace route or MTR on Linux/Unix-based systems (e.g. Mac), usually by installing the correct packages. On Debian-based systems such as Ubuntu, you can install them using apt (e.g. apt install mtr). On most distros, these tools are included by default; for minimal installations, you may need to install the packages manually.
I personally like using MTR on Linux because of all the options it comes with, and it's just a lot better than WinMTR. I'd suggest executing man mtr on your Linux OS to see what options you have. For example, take a look at this for Ubuntu (or run man mtr in your Linux terminal). You'll see many more options such as being able to perform MTRs over the UDP/TCP protocols (e.g. mtr --udp <host> for UDP-based MTRs or mtr --tcp <host> for TCP-based MTRs). This is sometimes useful when a hop completely disables ICMP replies and you want to see if you can get a response from the hop using UDP or TCP instead.
Conclusion
I understand this is a pretty long and in-depth post, but I hope it educates some of you on how to use a trace route or MTR along with what they're good for.
As always, if you need any help with running these tools or inspecting the results, you may reach out to anybody on our Technical Team including myself.
If you see anything inaccurate in this post, please let me know! I'm always willing to learn new things, and while I'm fairly sure everything is accurate, there's a chance I missed something.
Thank you for reading!
-
1 hour ago, mbs said:
Or just use an auto refresher

You're just lucky PHP + IPS 4 + the old web machine without rate limiting was easy to take down
-
Hey all,
I just figured I'd share a video I made that includes me pen-testing my home server using my Packet Flooding tool to generate a DoS attack against my Barricade Firewall project.
The Packet Flooding tool sends packets to the VM running the Barricade FW directly by using its MAC address. This way, traffic doesn't have to go through my router/gateway (which can barely handle ~50K PPS). The tool is able to generate 3.4 - 4.0 Gbps running on an older Intel Xeon clocked at 2.2 GHz (12 cores and 24 threads) when sending packets with a 1400-byte payload. For an older Xeon CPU, I feel being able to push 3.5 - 4 Gbps is pretty impressive. When sending packets with no payload, I'm able to send over 500K PPS on my home server (also demonstrated in the video).
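As a quick sanity check on those numbers, throughput and PPS relate through the packet size. A rough back-of-the-envelope calculation (this ignores Ethernet/IP/UDP header overhead, which adds 40+ bytes per packet):

```python
def gbps(pps, payload_bytes):
    # Throughput (Gbps) = packets/sec * bytes/packet * 8 bits/byte / 1e9
    return pps * payload_bytes * 8 / 1e9

# ~357K packets/sec with 1400-byte payloads works out to about 4 Gbps.
print(round(gbps(357_000, 1400), 2))
```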
Here's the video with the results:
Before running the Barricade Firewall tool on the victim VM, the TCP SYN flood was able to cause a lot of packet loss on my VM (~50%). However, after enabling the firewall, I saw no packet loss whatsoever and everything worked fine.
It's pretty neat doing this pen-testing knowing I've made both the tool that generates the DoS attack and the tool that blocks it
Doing things like this helps me understand (D)DoS attacks and how to block them. This is pretty important because we're responsible for filtering on our Anycast network, and I'm currently rolling out filters that should drop most malicious traffic unless the attacker knows exactly what they're doing. After BiFrost is released, I'm confident we'll be blocking all malicious traffic since we'll be accepting legitimate traffic only and dropping the rest.
If you have any questions, feel free to respond
I made this thread for those who are interested in the networking and programming I'm into.
Thanks!
-
I've added the net_splitpacket_maxrate CVar to the config since it's very useful for reducing choke on non-CS:GO game servers (mostly during bandwidth spikes such as going through an area portal).
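For anyone curious what that looks like, here's a hedged example of the CVar in a Source server config (the value below is illustrative only, not necessarily what our servers use):

```
// server.cfg: raise the max bandwidth allowed for split (fragmented) packets.
// Helps reduce choke during bandwidth spikes. Value is illustrative.
net_splitpacket_maxrate 1048576
```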
Thanks.
-
Also wanted to say thank you to the players who provided MTRs and trace routes to @Skittlez and @Frozen! It didn't take long to figure out the issue after discovering two users having timeouts/packet loss when routing to the Dallas POP (that POP doesn't forward ICMP replies, which made it easy to tell that the POP itself was dropping the traffic).
-
Hi,
I'm making this thread for documentation purposes.
Last night when setting up the new game server machine, I decided to add new IP allocations and sync the POP configs (which included the latest config from POPs running the new filters). The new config would still work on POPs running the old filters. However, the one thing I forgot is that POPs running the old version didn't have the Rust rate limit multiplier, which was needed (our relatively low standard rate limit worked fine for Source Engine servers, but not for Rust servers).
This resulted in players easily being rate limited and causing timeouts when routing to POPs not running the new filters.
I've pushed a new update to all POPs raising the standard rate limit for the time being. Once all POPs are running the new filters, I will lower the standard rate limit again, and Rust servers will have a multiplier of four, which will be fine.
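Conceptually, the per-game multiplier works something like this (a simplified Python model; the real filters run on the POPs themselves, and all names and values here are hypothetical):

```python
BASE_LIMIT_PPS = 1000  # hypothetical standard per-client rate limit

# Games whose clients legitimately send more packets get a multiplier.
GAME_MULTIPLIER = {
    "source": 1,  # Source Engine servers fit the standard limit
    "rust": 4,    # Rust needs roughly four times the headroom
}

def allowed_pps(game):
    # Effective limit = standard limit * per-game multiplier (default 1).
    return BASE_LIMIT_PPS * GAME_MULTIPLIER.get(game, 1)

print(allowed_pps("source"))  # standard limit
print(allowed_pps("rust"))    # four times the standard limit
```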
This issue should be resolved. I haven't received any complaints since.
I just wanted to apologize for the inconvenience. It was something I overlooked; since I was in a rush to get the new machine running and tested last night (along with Xy), I didn't catch it while updating the POPs.
Thank you for understanding.
-
Hey everyone,
I just wanted to see if there were any web designers who could give input or contribute to an open-source project I'm working on called Barricade Firewall. This project can be located on GitHub here.
This is a neat personal project based on the XDP Firewall I made here. The Barricade Firewall offers a performance improvement to the XDP program itself (located in this commit). With that said, the firewall will be able to connect to a backbone to sync configs/filters and report stats to (this isn't finished yet). As of right now, the XDP firewall itself works fine without the backbone, and you can set filters and config options in a JSON config file. An example is included in the README on the firewall's GitHub repo here.
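To give a rough idea of the format, a filter config for a tool like this might look something like the snippet below. The keys and values here are made up for illustration; see the README linked above for the real format:

```json
{
    "interface": "ens18",
    "filters": [
        {
            "enabled": true,
            "action": "drop",
            "udp_enabled": true,
            "dst_port": 27015,
            "pps_limit": 1000
        }
    ]
}
```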
I'm currently in the process of creating the web design, but I'm honestly not a web designer (I prefer back-end programming). So far, here's what I have:
https://g.gflclan.com/A917JW63Hw.mp4
I haven't messed with the content on the page which is why it looks bad at the moment. However, I do believe the nav-bar looks somewhat decent at least.
I just wanted to see if there was anybody who was interested. You can view the CSS/SCSS code here and the HTML through the Elixir templates here (I use the Bootstrap CSS framework).
This project also has the potential to turn into a full-fledged firewall. However, I'm going to push out a basic version and switch focus to BiFrost with Dreae before implementing more complex features (e.g. forwarding rules synced across firewalls via the NFTables API). Dropping traffic with XDP-native is one of the fastest methods in the Linux networking path right now, which is why I feel the firewall has a lot of potential.
Thanks!
-
1 minute ago, Skittlez said:
I'm sorry 😥
IT'S ALL YOUR FAULT! HOW DARE YOU MANAGE SERVERS THAT HAVE A BIG SPIKE IN POPULATION!
-
2 minutes ago, Xy said:
I wonder why servers on that machine are suffering from performance issues
Also, I'm going to talk to our staff to ensure machine load is monitored before putting servers on future machines. We've had a big spike in Rust population recently which is why we're seeing such high load.
Thanks.
-
I've ordered the machine from the Texas hosting provider after reviewing the benchmark results. We're going to offload servers from GS12 (probably all Rust servers) and keep the machine at <50% load so we get the best clock speeds possible.
It is likely we'll have this machine set up by tonight. ACLs are already removed for this hosting provider since we have GS08 and GS09 with them.
Thanks!
-
One other thing I wanted to add is we are also looking into using OVH. However, I'd like to know if we can:
- Overclock their servers with the KVM access given. OVH uses water-cooling to my understanding, so temperatures shouldn't be a big issue.
- Remove the ACLs preventing us from using the IPIP program I made, which sends traffic back to the client directly by spoofing it as coming from our Anycast network.
My biggest concern is #2. I know OVH can do it since they own all of their infrastructure; however, I'm not sure if they'd be willing to, given they're a very big company. We would need the ACLs removed since we currently don't have a POP server in any of the locations OVH supports. Otherwise, we'd be looking at an additional 10 - 20ms of latency on the network depending on the POP the OVH machine routes to. If we could get an OVH machine, that'd be awesome though, since OVH is one of the most stable hosting providers I've ever used.
It would be nice if we could colocate somewhere too. However, I don't trust myself to build and overclock a production server since I haven't done that before.
Thanks!
-
Hey everyone,
I just wanted to briefly address recent performance issues with our GS12 machine. Unfortunately, the machine's processor down-clocked a lot more than I expected at higher load. As of right now, it clocks to 4.3 GHz on all cores at ~50 - 60% load. This is because the processor gets too hot running all cores above 4.3 GHz and needs to down-clock. At 20 - 30% load, we see 4.6 - 4.9 GHz on all cores, which is what I was hoping we'd get at 50 - 60%+ load. This is resulting in bad performance for servers on GS12, which includes all of our Rust servers.
@Xy and I have spent the last few hours in voice talking to our current hosting provider in Texas. We believe we've found a solid solution and will be looking to order a new machine in the next day or so, depending on the benchmarks we get back from the hosting provider (which are likely going to suit our needs). They also have better cooling solutions coming at the end of the month, so we'll be able to get higher clock speeds out of the same processor. We will be using the same processor as the GS12 machine (the Intel i9-9900K), but with higher clock speeds at the load GS12 is running at (likely > 4.6 GHz).
Once I have another update, I will let you know. I apologize for the poor performance as well. I wasn't expecting the machine to down-clock this much at 50 - 60% load.
Thank you for understanding.
-
-
A few pics I took today after attempting to push a ~271-pound treadmill up the stairs by myself (that didn't work out well). I also gave myself a haircut last week for the first time and I think it's looking okay lmao.



-
8 minutes ago, Domps VERO.co said:
He has returned https://youtu.be/IuysY1BekOE
The start to the best video on the Internet:
-
8 minutes ago, williampickthall12 said:
+1 i got no idea who they are but their pfp is cool and they made the title a lot better than what it sounds normally!
They were a Council member (back when Council was nearly equal to what Director is nowadays) from 2012 to 2016 or something like that. I don't remember the exact time frame of Council, but he joined GFL in 2011.
P.S. I think @Floopyhiggle is the only person who got to see my bunny back in 2011 or early 2012.
-
+1 has classic profile pic








Anycast Expansion Plans/Approaches
in GFL's Network
I agree and am aware we would need to face application-layer attacks on the network. That's why BiFrost is being created to fight against these application-layer attacks (which will include in-depth filtering for SRCDS as well). The attacks I'm more concerned about are high-volume ones, since many of the hosting providers I've found that support BGP sessions don't handle larger volumetric (D)DoS attacks well. This is why I'm leaning towards a quantity-over-quality approach to the network, but I'm still not sure.
In regards to the A2S_INFO caching, I made a Google Doc here that addresses my PoV on it and why we have it enabled to begin with. I shared that document with Admins+ (it was a thread), but I don't mind it being public at this point. Of course, I'm not expecting everyone to agree with it and I understand the reasons why, but it's the decision I went with.
We were already blacklisted by Facepunch late last year, and after we got blacklisted, we turned the caching off for our GMod servers. However, we were the only servers blacklisted at the time. A majority of servers using GMCHosting do the same thing, and they eventually got blacklisted too, but after an outrage their bans were lifted.
If there are any statements made regarding the caching from Valve or Facepunch, we'll be complying. I've also heard there are plans to revamp the server browser in GMod, so I'm hoping that goes well. I've been trying to think of something to propose to Valve for the server browser, but it's something I need to think more about.
In terms of the Asia POPs, I'm not expecting to have any with the new expansion. However, having them now is actually beneficial for global network capacity, since we only use one hosting provider at the moment and I don't believe the network capacity of each of their DCs is much. It's also good for damage control: I've noticed a lot of malicious traffic coming through these POPs, at times taking them down. Therefore, I'd prefer those POPs experience potential issues over the other POPs that receive a majority of our traffic for the time being (until this expansion rolls out, of course). With that said, apparently our current hosting provider doesn't do load balancing, according to some people who have come to me regarding it (e.g. you can't spin up more than one POP in the same location to extend capacity). Something I've considered is turning off the caching on these specific POPs, but I was initially waiting for BiFrost since updating the config on each POP is a big pain right now.