Posts posted by Roy

  1. 14 hours ago, Tunks37 said:

    Oh no I meant I don't have packet loss in other games. The only community servers I play on CS go is the GFL ZE one. On official servers I get 0% loss. Do you want me to join another server and run the MTR on that if I don't experience any packet loss?

    Yeah, try playing official servers and run an MTR to both Google and the CS:GO ZE IP if you can.

  2. On 12/4/2020 at 7:09 PM, Tunks37 said:

    It's all good I'm just glad you're still willing to help. Also what do you mean when you say run the MTR against the IP of the server?

    This would be the server's IP address you weren't receiving packet loss on according to your earlier posts. I'd like to see if you can run an MTR to that server's IP while experiencing packet loss on CS:GO ZE. And then after that, join the server that doesn't have any issues and run the same MTR to see if it shows any packet loss.

     

     I'd like to see whether traffic to this IP is somehow taking a different route after hop two, one that may not have packet loss.

  3. Friday night and yesterday morning, I made many changes to my Packet Sequence program.

     

    I moved as many static instructions outside of the while loop as possible and also added a static payload feature. Moving those instructions out of the loop heavily improved performance. The static payload feature generates a payload between the minimum and maximum lengths once, before the while loop, and reuses that same payload for every packet. If it isn't set, the program generates random payload bytes between the min and max lengths for every packet, which can consume a lot of resources depending on how big you want the packets to be, especially if you're looking to do 64 KB packets. When using a static payload of 61 KB, I went from 10 - 20 Gbps to 50 - 60 Gbps. That's a major difference, and I find it cool that my super old Intel Xeon CPU can push 50 - 60 Gbps!

     

    I made a video here demonstrating this as well for those interested.

     

     

    I also tried messing with some of the read and write limits for sockets in the video to see if they would have any effect. Increasing the write limit causes the program to use more CPU, but I didn't see any major difference in how much bandwidth I'm able to send. Setting the default write limit to a very high value resulted in the program using 100% of each thread, yet the amount of bandwidth I could send out was heavily reduced because the CPU became the bottleneck.

     

    I also made a hard-coded program for trying to push as much data as possible with Linux sockets over UDP, which can be found here. I used this program to compare performance against my Packet Sequence program, and they both produced similar results (when I had the static payload set, of course). Therefore, I believe it's safe to say the packet submission process in the program performs very well!

     

    Additionally, I've now documented everything; the configuration reference for the YAML config files may be found here.

     

    I'm pretty happy with what I was able to accomplish with this program, and I have a feeling once people start finding it, it'll get some traction, since I'm not able to find any other tools on GitHub similar to it. Of course, as I've stated many times before, I do not support using this tool maliciously in any way. I made it for pen-testing purposes and still have plans to turn it into a network monitoring tool once I implement sequences that can read incoming packets.

  4. On 12/1/2020 at 3:00 PM, reme leader 049 said:

    Yeah, but there’s always the chance that those servers could die out leaving us near the bottom of the browser. Like breach 

    That's true, which is why management would want to monitor the server browser frequently to ensure that isn't the case. If that does happen, we could just put the server back into the regular DarkRP category or another more popular one.

     

    In the end, I think putting it into the less saturated (but higher up in the list) categories will be useful to 'kick-start' the server. Once the server is kick-started, I believe it'd keep thriving even while in the standard DarkRP category, because the server's rating improves when there are >16 players and >32 players (two separate thresholds that each increase the server's rating).

  5. On 11/29/2020 at 9:15 PM, Tunks37 said:

    First one is in game the second is out of game. Though I don't get why I have higher loss out of game lol. Also I want to mention I only have this problem on ZE. As other games aren't effected like ZE is.

    Ingame.png

    Out of game.png

    I'm sorry for the delay. Been busy with work recently :( 

     

    Can you try connecting to one of the game servers that isn't having issues and run the MTR against that server's IP?

     

    With that said, can you run another MTR against that same server IP while you're experiencing packet loss on our CS:GO ZE server?

     

    It wouldn't make much sense to me if WinMTR was showing a good amount of packet loss at the destination while you weren't receiving any in-game. I suppose it's possible your ISP or modem rate-limits ALL ICMP requests for each specific code/type such that it won't even forward ICMP packets to the next hop if the TTL is above 1, but I've never seen that before and I'm very doubtful that's the case.

    I made many threads suggesting things here. They may be worth a read.

     

    With that said, I agree with @reme leader 049. This is why I've been suggesting to the Server Managers for us to get under the "DarkRP Revamp" category which has around 6+ servers at 100+ players constantly from what I've seen.

  7. On 11/27/2020 at 3:44 PM, Tunks37 said:

    Here is me running it in-game with %20-%40 packet loss hope this helps. Also thank you a lot of constantly trying to help me, I really appreciate it.

    Screenshot (32).png

    Thank you for the extended MTR and you're welcome.

     

    I still believe the issue is something on Spectrum's side here (or possibly the modem, if you're running a router/modem combo). The reason I say this is that in the above MTR, you're seeing packet loss throughout the entire route (each hop), starting from the first hop after your gateway that sends any replies. It's still unfortunate the second hop has ICMP responses disabled entirely, because that could be the hop experiencing issues. I also doubt every hop in your route would have rate limiting applied to ICMP packets (the default protocol used by WinMTR/MTRs). I use Spectrum myself as an ISP, and I personally haven't seen any of their hops rate-limited.

     

    The reason you're seeing 20% - 40% packet loss in-game is how many more packets per second you're sending/receiving compared to the MTR, which only sends one packet per second per hop by default (this is configurable within WinMTR, but the chance of rate limiting being applied at lower intervals is much higher). Generally, the more packets you send, the higher the measured packet loss rate will be, especially if something is bottlenecking here.

     

    One thing I'd suggest trying here: instead of using CS:GO ZE's IP, use Google's IP as the "Host" in WinMTR (8.8.8.8, or you can type google.com for DNS to resolve). Do this while experiencing packet loss on the CS:GO ZE server. Afterwards, stop the MTR (screenshot the results) and run another MTR after disconnecting to see if you're still receiving packet loss. This could be a way for us to see if there's something bottlenecking your network here.
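
    For anyone on Linux/macOS following along, the rough CLI equivalent of what I'm asking for in WinMTR would be something like this with the standard mtr tool:

```shell
# -r report mode, -w wide hostname output, -c 300 sends 300 probes per hop.
# Run once against Google while lagging in-game, then again after disconnecting.
mtr -rw -c 300 8.8.8.8
```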

     

    I hope the above helps!

  8. @Tunks37 I'm not sure if you're still having this issue, but if you are, can you try running another MTR while experiencing packet loss in-game, but run it until at least 300 or so packets are sent (the "Sent" column)?

     

    One thing I overlooked was how many packets were actually sent and received (37 or so is a very low number for MTRs). There's a high possibility this could be an issue with hop #11, but I'd need some more evidence (e.g. MTRs with much higher packet counts). I apologize for overlooking this as well, not sure how I missed it.

     

    Thank you.

  9. 51 minutes ago, Tunks37 said:

    Alright so here is me running the MTR while in-game. Also I didn't really notice any difference when I put in the console command.

    Screenshot (30).png

    In that MTR, you are definitely experiencing a good amount of packet loss to the destination (12% at least and probably more in-game since you're sending/receiving a lot more packets per second). Given your router/gateway isn't receiving any packet loss (the first hop) and the rest of the hops have packet loss, what I'm thinking is:

     

    • One of the hubs/routers/hops within Spectrum is over-saturated.
    • I'm not entirely sure how the MTR interacts with this, but if you have a router bridged to your modem (a very common setup), the first hop (most likely the router) can show no packet loss while the modem is dropping packets before they ever reach your router/gateway.

     

    Unfortunately, the second hop appears to have ICMP replies completely disabled (hence the 100% packet loss), so we're not able to see whether this is the hop dropping packets. You could try running a tool that sends TCP/UDP packets instead to see if that hop will give a response, but I'm not sure what tools exist for Windows (WinMTR doesn't support this, IIRC), and there's a high chance it won't respond to those anyway, since you'd need to specify an open port on that hop that'll reply.

     

    I would expect this to also impact all services/servers you connect to. If this only happens on CS:GO ZE, it's probably due to the high amounts of bandwidth you're receiving/sending compared to other servers (the rate limiting should help with this, but as I said before, we may enforce a high minimum rate limit which is still too much for your network to handle). I suppose you could also forcefully limit your speed via Windows or a third-party program (which will result in higher latency, but it'll allow us to see if packet loss improves).

     

    Just to confirm, you're connected to your router/gateway via Ethernet, correct? I'd assume so, since the max latency you had to your router is only 1 - 2 ms (most wireless setups I've seen typically have higher average latency unless they're right next to the router/gateway).

     

    My suggestions would be:

     

    • Try ensuring the coax cable from your modem to the next splitter is secure and tight. Maybe try unscrewing it, cleaning anything off of it, and then screwing it back in again.
    • Try rebooting your modem to see if the packet loss goes away temporarily (if the packet loss goes away temporarily, it could indicate there was an issue with the modem).
    • Contact your ISP (Spectrum), schedule an appointment, and have them come out to check the signal strength, cabling, and so on.

     

    I understand this is a pretty complex explanation, but I hope it helps a bit. I really think your best bet is contacting your ISP as stated above, getting an appointment scheduled, and having them check everything. Make sure to provide them with that MTR as well, because it should be pretty eye-opening that all the hops after the second have packet loss while there is no packet loss to the router/gateway itself.

     

    I hope the above helps!

  10. 12 hours ago, Tunks37 said:

    Okie here is what I screenshotted. I would like to say that your video explanation was very easy to follow.

    Screenshot (28).png

    I'm glad it was easy to follow! Did you perform this MTR while having packet loss in-game?

     

    The destination doesn't have any packet loss in the MTR you provided. I've seen it before where MTRs via ICMP don't show any packet loss, but players still receive heavy packet loss in-game. In these cases, the only things I can think of are:

     

    1. Something is overloading your CPU or NIC to the point where it cannot process packets, so it drops them.
    2. There's a program on your computer or something ISP-related that is rate-limiting UDP traffic.

     

    It's hard to pinpoint these issues usually if you aren't getting any packet loss via MTRs, but are getting them in-game.

     

    Another question: do you also have heavy packet loss when there are fewer players on the server? CS:GO ZE is a very hectic game mode and some maps push a lot of bandwidth to our clients. You could try setting rate 30000 in the in-game console to see if this improves the packet loss at all (this will limit you to 30000 bytes per second). Though, I need to check what the minimum rate is set to on the server.

  11. Hey everyone,

     

    This thread serves as a low- and high-level overview of Bifrost. The reason the thread carries the 'unapproved' tag is that I haven't yet talked through every aspect of the plan with @Dreae. I hope to talk to him today, and once everything is finalized, I'll be changing the title of the thread. After everything is approved, I plan to document our plans in either a separate repository here or as a "Wiki" in an existing repository (using Markdown).

     

    These plans may be slightly altered as well, but I believe this is very close to being completely finalized.

     

    What Is Bifrost?

    Bifrost is a project @Dreae and I are creating that will act as a firewall and will also be used on GFL's Anycast network. This project will be responsible for forwarding legitimate traffic to our game server machines and dropping malicious traffic as fast as possible (originating from (D)DoS attacks for example). Bifrost was formerly known as CompressorV2 which was supposed to be a revamped version of Compressor created by Dreae. Compressor is currently utilized on our Anycast network and POPs in Europe/Asia have filters I've implemented into Compressor outlined here and here.

     

    Why Has Development Taken So Long?

    Development of Bifrost started around a year ago. However, it has taken a very long time to come up with a game plan, and a lot of that has been my fault, along with time constraints on both Dreae's side and mine (I still have my full-time job, GFL itself, and the Anycast network's infrastructure to maintain). With that said, when development started a year ago, I was not experienced with low-level C programming or network programming. In fact, I didn't start digging deeply into this until March of this year, and I've made most of my progress open-source via my GitHub profile here.

     

    With that said, over the last few months I've been looking into the fastest possible ways to drop packets, and to be honest, this has overcomplicated the project since then.

     

    Thank You For The Support!

    Before going into my plans for Bifrost, I just wanted to thank everybody, including those on LinkedIn, for the amount of support they've shown for this project. Over the last few months I've been posting updates on Bifrost and networking/programming in general, which started receiving more attention and support than I had imagined. This has given me a huge motivation boost, and I've been learning a lot from others as well (I still feel I have so much to learn, and I'm sure I'll never stop feeling that way, which is good and bad sometimes haha, but I feel I've made great progress since March). I wanted to give shout-outs to the following individuals for being especially helpful to Bifrost and GFL's Anycast network development:

     

    • @Dreae - Dreae introduced me to network programming and is somebody I highly look up to. If it weren't for him, I likely wouldn't have built the Anycast network or have such a big passion for mixing networking and programming today.
    • Renual - Renual is the owner of Gameserverkings (GSK). We started using GSK in the middle of this year for our game server infrastructure along with some of our Anycast POPs. Renual has also been extremely helpful with advice on Bifrost, BGP, (D)DoS mitigation/prevention, networking in general, and so much more (he is very busy in general and still takes the time to go into in-depth detail on so many things I ask him about). He is one of the smartest people I've talked to, and I've been learning so much from him. I'd suggest giving this thread a read to see how beneficial GSK has been to us.
    • Pavel Odintsov - Pavel is a Cloudflare engineer who connected with me after discovering one of my posts on LinkedIn and has been very supportive of my projects since. It meant a lot to see him support my projects, and I knew of him beforehand from his (D)DoS analyzer software, FastNetMon. He also recently suggested FreeBSD's netgraph for the project, which I go into below.

     

    Choosing What To Drop Packets With

    The most complicated part when coming up with a game plan for Bifrost was choosing what networking path/hook/library to use to drop packets as fast as possible. I made a thread here that went into detail on XDP vs the DPDK from my point-of-view. However, recently, after talking to Pavel as mentioned above, he introduced me to FreeBSD's netgraph (this gives an easier overview of what netgraph is in my opinion).

     

    To be honest, all three of these are really great options, and for GFL, all perhaps overkill for what we have right now (we're not making this firewall just for GFL, though). The reason Pavel suggested netgraph specifically is that Dreae and I had plans to make Bifrost's back-end essentially a packet engine able to use ingress and egress hooks from XDP, the DPDK, TC (Traffic Control), netfilter, and more. We had also planned to make our own programming language for creating modules within Bifrost (these modules would utilize the networking hooks mentioned earlier). It seems netgraph would have been great for this, though I'm unsure how it would integrate with the DPDK, for example.

     

    Unfortunately, due to FreeBSD's netgraph being very new to me and its complexity, I've decided not to use this for Bifrost. It also didn't have much documentation, but I was able to find many open-source modules made for it on Google here. However, I do plan on using it in a future firewall I plan to build after Bifrost.

     

    This narrowed the choices down to the DPDK or XDP. They both have their pros and cons. When it comes to performance specifically, the DPDK would be the better choice due to busy-polling, etc. However, it requires dedicated cores, and since GFL's Anycast network has many POPs with only two cores, to my understanding we'd only be able to dedicate one to the DPDK application. With that said, we will likely be getting NICs for our GSK POPs that support XDP offload (which, to my understanding, compiles the program into the NIC's hardware), and therefore I believe this could be even faster than the DPDK (though I haven't dug deeply into this yet).

     

    With the above points stated and also the fact that I still haven't dug deeply into making applications for the DPDK, I decided it'd be best if we used a combination of XDP, TC/BPF (Traffic Control), netfilter, and NFTables for Bifrost. The DPDK is still something I will be learning in the future and making firewalls with.

     

    What Will Make Bifrost Stand Out?

    I personally haven't seen many open-source firewalls that aim to do what Bifrost intends to. However, a few key things I believe will make Bifrost stand out include:

     

    • Using XDP (in offload, native, or generic mode, depending on what the NIC/NIC driver supports) to drop malicious traffic via firewall rules matching on most packet characteristics, except dynamic payload matching, which was attempted here but failed. The XDP program will also include rate limiting.
    • BGP Flowspec support for attacks detected with source/destination IPs, source/destination ports, and more. This will result in upstreams dropping the specific traffic instead of the server itself which is better.
    • TC/BPF/netfilter modules for whitelisting traffic via handshake sequences (for game servers) and more. When a packet characteristic that the XDP program supports blocking on isn't changing, these TC BPF programs will add it to the XDP BPF map so it's blocked for a certain amount of time. These modules can be enabled on a per-destination-IP basis (or based on other characteristics within the packet) within our back-end portal for servers that are supported.
    • Hoping to make a netfilter kernel module that inspects all traffic and looks for anomalies that can be caused by a (D)DoS attack. These inspections can happen for all traffic or traffic containing certain characteristics.
    • Creating a netfilter kernel module to inspect certain packets depending on the configuration via payload and pushing firewall rules if we find a common characteristic so XDP can drop the packets. Otherwise, netfilter will be dropping the packets.
    • An IP/port forwarding module via NFTables that can be used to forward traffic to services that cannot be completely Anycasted (such as game servers for example) or for load-balancing purposes.
    • The ability to cache certain packets for a certain amount of time in seconds which should help limit damage done by layer-7 attacks on certain applications.
    • Being able to manage global/server-specific configuration in a back-end web portal that the servers communicate with to retrieve information. I want as many things being configurable from this back-end portal as possible.
    • Server-specific statistics that will be reported to and graphed in Bifrost's back-end, including network statistics such as packets per second and bytes per second (conversions to Kbps, Mbps, and Gbps will be supported within the back-end portal).
    • A very flexible REST API that can be used to change any global or server-specific configuration in a matter of seconds.

     

    With the above points outlined, I feel this will be a very unique open-source firewall.

     

    Communication Between The Server And Bifrost's Back-End

    I'd like to write the main application that runs on the servers themselves in C, because this will make reading/writing the BPF maps easier. In addition, I'd like to use an HTTP/HTTPS library for talking to Bifrost's back-end/API. I found a C library named libhttp, and I think it'd be beneficial to use it for sending regular HTTP/HTTPS requests or setting up a WebSocket.

     

    I believe using a WebSocket would be better because, to my understanding, it keeps the connection alive and in return results in (on a very micro level) less bandwidth/resource consumption, since we aren't establishing a three-way TCP handshake for each HTTP/HTTPS request we send out.

     

    Since the three-way TCP handshake cannot be spoofed, I do believe just having an IP/port validation on the Bifrost back-end will be good enough for validation from the server to Bifrost's back-end. However, we'll also look into implementing more security checks if we can including a token, certificates, and more.

     

    The XDP Program

    The XDP program will contain two BPF maps that hold all the firewall rules for passing and dropping traffic, except payload matching (as explained above, I wasn't able to get dynamic payload matching working in XDP here). The passing BPF map will be checked first, to ensure attackers can't spoof a service we need and get that source IP rate-limited, which would cause service interruption. From there, the blocking BPF map and rate limits will be checked. Afterwards, the BPF map containing all drop rules from the firewall will be inspected.

     

    This XDP program will also include a rate-limit feature so we can block source IPs or other characteristics based on rate limits (per second, per minute, and more).

     

    Two other BPF maps will need to be created. One BPF map will use an unsigned 32-bit integer as the source IP lookup key and an unsigned 64-bit integer as the value to indicate when the source IP should be unblocked. This will operate as the source IP blacklist map. We may also add additional blacklist maps for specific characteristics so we don't have to inspect deeper into the packet each time (which allows us to drop packets even faster).

     

    The last BPF map will again use an unsigned 32-bit integer (the source IP) as the key, with the value tracking how many packets that IP has sent in the last second (PPS); this will be used for rate limiting. With some extra math, we can also block based on packets per minute or over a custom timeframe. I'm also going to add rate limiting on the number of bytes per second a source IP sends (or per minute, or a custom timeframe as well).

     

    Other than dropping specific traffic within our upstreams by BGP Flowspec policies, we would want to drop as much malicious traffic as possible within the XDP program since that's faster than anything else we utilize within Bifrost at this moment.

     

    BGP Flowspec

    BGP Flowspec is a feature only certain hosting providers, such as GSK, will allow us (GFL) to benefit from. I'm hoping to make the BGP Flowspec part of the C program parse all firewall rules and push BGP Flowspec policies for any rules that have the "Push BGP Flowspec Policy" option enabled.

     

    BGP Flowspec will allow us to push policies to our upstreams and have them filter certain traffic instead. This is truthfully the fastest way to drop malicious traffic and since the upstreams are blocking them, this would result in no resource consumption from the attack on our POP servers.

     

    While BGP Flowspec is a very powerful tool, it can also do a lot of harm if used incorrectly. For example, we need to ensure we aren't pushing any policies that could take down our BGP sessions (e.g. blocking TCP/179). There have been many outages in the past (most recently, an outage at CenturyLink) caused by BGP Flowspec policies that impacted a good part of the Internet. Thankfully, we won't have to worry about taking down a good percentage of the Internet as a whole, but we could take down our entire Anycast network if it's used incorrectly.

     

    I'll be making sure we take into account the whitelisting aspect of Bifrost when pushing out BGP Flowspec policies.

     

    Firewall Rules

    Bifrost will include a module that supports setting up firewall rules. Firewall rules can be applied globally or on a per-server basis. Each firewall rule will also include an option to push via BGP Flowspec if available (most characteristics should work besides payload since that isn't supported within BGP Flowspec at the moment).

     

    Firewall rules will support matching on the following options. Please note that all of these besides "Enabled" and "Action" are optional.

     

    • Enabled - Whether to enable the rule or not.
    • Action - Whether to pass or drop the packet matching.
    • Source IP addresses (or CIDR ranges).
    • Destination IP addresses (or CIDR ranges).
    • Source ASN (this will support ASN prefix lookup so you can match on all prefixes from a specific ASN).
    • Minimum/Maximum IHL - IP header's min/max IHL to determine the IP header's length (IHL * 4).
    • TOS - Type of Service to match on within IP header.
    • Minimum/Maximum ID - IP header's min/max ID to match on.
    • Minimum/Maximum fragment offset - IP header's min/max fragment offset.
    • Minimum/Maximum TTL (Time To Live).
    • Minimum/Maximum packet length.
    • IP protocol - E.g. TCP, UDP, and ICMP.
    • IP flags - IP header's flags (a 3-bit integer).
    • Additional IP header options.
    • UDP source ports.
    • UDP destination ports.
    • TCP source ports.
    • TCP destination ports.
    • TCP Flags (SYN, ACK, PSH, FIN, RST, and URG).
    • Minimum/Maximum TCP Sequence and Acknowledgement numbers.
    • Minimum/Maximum TCP Window size.
    • Additional TCP header options.
    • ICMP code/type.
    • Minimum/Maximum PPS (Packets Per Second).
    • Minimum/Maximum BPS (Bytes Per Second).
    • Minimum/Maximum payload length.
    • Payload exact/partial match (a string in hexadecimal indicating bytes to match on).
    • Possibly more.

     

    For more in-depth packet inspection, creating modules using TC/BPF or netfilter is encouraged. Payload inspection will occur inside a Linux kernel module utilizing the netfilter networking hook (examples I made may be found here). The main thing I need to figure out is how to interact with the XDP BPF maps from within the Linux kernel module itself.

     

    Initially, I had planned to add these firewall rules to NFTables or IPTables and use IPTables' NFQueue to add the source IP, etc., to the XDP BPF block map. However, this wouldn't support zero-copy, so there's a high chance an attack would consume all of the server's resources just from these actions. Therefore, I decided we should do everything we can within the XDP program or inside the kernel.

     

    Cached Packets

    Bifrost will have the ability to cache specific packets. A background process in the back-end will send the request for a certain packet type on a certain interval and store the response somewhere for the servers themselves to retrieve. From there, when an incoming packet matches the request, the server will respond with the cached response instead and drop the initial incoming packet. This will help mitigate damage from layer-7 attacks.

     

    These packet types will be defined in the back-end portal, using either exact or partial matches within the packet/payload, expressed as a hexadecimal string representing each byte.

     

    IP/Port Forwarding

    For services that cannot be Anycasted, or for load-balancing purposes, Bifrost will be able to forward traffic on a destination IP/port match via NFTables. We're going to call these rules "connectors"; each connector will start with a destination IP to match on and another IP to forward the traffic to. From there, you'll be able to add port mappings inside the connector to map the bind port to the port we're forwarding the traffic to. You'll also be able to use wildcards/port ranges in these mappings if you want to forward all traffic at the IP level or cover a range of ports.
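
    As a rough nftables sketch of one "connector" (the addresses, ports, and table/chain names here are placeholders, not our actual ruleset), a single port mapping plus a port-range mapping might look like:

```
table ip bifrost {
    chain prerouting {
        type nat hook prerouting priority dstnat;

        # Connector: match destination 203.0.113.10, forward to 10.0.0.5
        ip daddr 203.0.113.10 udp dport 27015 dnat to 10.0.0.5:27015

        # Port-range mapping on the same connector (ports preserved)
        ip daddr 203.0.113.10 udp dport 27020-27030 dnat to 10.0.0.5
    }
}
```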

     

    Additionally, we'll also have some modules to handle other traffic such as the FOU Wrapper/Unwrapper module which can be found here (it needs some changes, but should give an overview of what we're planning to achieve and will be documented later on).

     

    Game Server Handshake Validating

    We're going to be making TC BPF programs (both ingress and egress) to validate handshake sequences our clients make to our game servers. I won't go into much detail here since I already did on this thread here for those interested. However, this will help us a lot with preventing malicious traffic from being forwarded to our endpoint game server machines.

     

    Additionally, I also plan on pushing any validated source IPs to a global list provided by the back-end that the TC BPF maps will check against (to ensure the client is on the handshake-validated map). That way, if somebody's route starts going through another POP server while they're connected to our game servers, they won't time out and need to reconnect to reinitiate the handshake sequence (this is a problem at the moment with our temporary filters for Compressor).

     

    Conclusion

    This is a general overview (both high and low level) of our Bifrost project. I'm pretty excited for this and believe this is really close to what we're rolling with. There may be some things that need altering depending on how we approach certain problems. However, this is the closest we've gotten to finalizing our game plan.

     

    As stated at the beginning of this post, once everything is finalized, we will be storing our game plan inside either a separate repository under the Bifrost organization here or under "Wiki" within a repository (all plans will be written in Markdown).

     

    If you have any questions, suggestions, or want to be involved with the creation of Bifrost, please reach out to @Dreae or me.

     

    Thank you for your time!

  12. Hey everyone,

     

    I just wanted to post an update regarding our Anycast network.

     

    Paris POP Fixed

    Our Paris POP server hasn't been announcing any of our IP blocks for a while now. When modifying our BIRD configuration to announce the two new IPv4 blocks tonight, I found out why our Paris POP wasn't announcing any blocks beforehand. I was missing this part of the BIRD config which is needed for scanning:

     

    protocol device
    {
            scan time 5;
    }

     

    One thing to note here is that because our Paris POP has my filters deployed within Compressor, anybody who was rerouted to the Paris POP while on our game servers would have lost connection until they reconnected (since the filters are based on the handshake sequence between the client and game servers). I didn't expect my fix to work since I thought I'd attempted this months ago. However, it did, and I apologize for the inconvenience to those players impacted.

     

    3514-11-21-2020-oW6O0K3j.png

     

    It is now receiving a good amount of traffic :)

     

    New /24 IPv4 Blocks Announced

    I modified our BIRD configuration tonight on all of our Anycast POP servers and started announcing HG's /24 IPv4 block (185.141.204.0/24) and our new /24 IPv4 block (185.240.217.0/24) being used for our Europe expansion. This went smoothly because I used birdc configure to reload the BIRD configuration, which checks for syntax errors and any other issues before applying the new config and doesn't restart the BGP sessions themselves, which could cause BGP session flaps (that could happen if we restarted via the systemd service itself).

     

    I've confirmed both IPv4 blocks are being announced on all our POP servers/hosting providers by executing birdc show route on the POPs themselves and also using this website to perform MTRs to IPs on these blocks from many locations around the world.

     

    3507-11-21-2020-VOwOu51H.png

     

    3506-11-21-2020-8DjagU2W.png

     

    There does seem to be some sort of strange routing loop going on for these new IP blocks on some providers:

     

    3510-11-21-2020-WDfC8ZT3.png

     

    3512-11-21-2020-X2qeiw0K.png

     

    I'm still investigating this, but there's a good chance this will correct itself. I've also asked the GSK owner, Renual, how they'd go about troubleshooting this issue. If it continues, I'll be using popular Looking Glass websites to perform MTRs from major ISPs and see if I can narrow it down to the specific POP or upstream causing this issue.

     

    Note - Some locations time out because ICMP replies are disabled on those specific POPs. This is because beforehand, we were forwarding ICMP requests to our game server machines, and attackers could use this vector to over-saturate the 1 gbps NICs our game server machines have (a single point of failure). The locations responding to ICMP have the POP server itself respond to these requests, so it isn't as big of a deal (though it could still consume more resources on the POP than dropping the request entirely and not sending back a response; on the other hand, it allows us to check for packet loss on the POP itself, which is very helpful for debugging performance issues with the network).

     

    If you have any questions, please let me know!

     

    Thank you for your time.

  13. On 11/20/2020 at 1:49 PM, Pachimo said:

    LOL, i love how many people fell for it...

    i was joking around.

    A few thing here:

    - i spent money on gfl so why would i leave?

    - i like the players here well most of them well...

    - i had a fun time on the forum..

     

    Sorry if i got people so surprised XD

    I just did it to see how many people would miss me.. Not that much i guess would miss me if i left ;(

     

    PS: sorry for the people that wanted me gone  I am guessing alot of people wanted me gone but i am still back ;)

    I honestly forgot I posted that and had to look it back up. Damn lmao

     

    Oh, and I figured I'd let everybody know since I'm sure there are people confused still. She's mocking a thread I posted in the first gaming community I joined, SteamGamers. The thread was from 2011 and can be found here (quite a fun read).

     

    How it went was:

     

    1. I got mad at something, wanted attention, and made a thread stating I'm leaving.
    2. Got extremely butthurt about the negative comments.
    3. Took back everything I said and tried turning it into an "I was just joking" thread, which resulted in even more negative comments, which was to be expected.

     

    Good timesssss. I was 12 - 13 years old at this time by the way.

  14. @Caution Thank you for this :) I really appreciate it!

     

    SG (SteamGamers) was the first gaming community I was ever a part of, back in 2010 (I played on the CS:S Zombie Escape server a lot at the time), and it inspired me a lot with projects like GFL (along with other communities like HellsGamers). I know back then I was childish in the community and apologize for that (I'm not sure if you read the threads I posted there back then, like this and this, but I was 12 or 13 years old or so at the time and always have a laugh looking back at it now personally). However, I've always respected SG and am glad to see it's still doing well!

     

    I would also be in full support of doing collaborations in the future!

  15. Creating this topic to provide useful links/guides to our players and server admins. Will update as time goes on.

     

    Administration

     

    Players

    Discord

  16. On 11/11/2020 at 11:37 PM, Alexis said:

    Yea. The FPS drops were insane, I would get stuck at 30 FPS and it would make shooting sometimes unbearable when it would drop below 30 FPS. This would help dramatically and would probably make the player base rise up.

    Sadly I don't think this would improve your client-side FPS in-game :( 

     

    Client-side optimization needs to go into the server for that (e.g. map optimizations such as using area portals correctly if they aren't already, optimizing GUIs, etc.). Though, I also need to know your computer's specs.

     

    This is just a bug that makes movements in-game feel really jittery, etc. 
