
Roy

Administrator
  • Content Count

    2,287
  • Joined

  • Last visited

  • Days Won

    320

Roy last won the day on March 28

Roy had the most liked content!

Community Reputation

9,064 Rainbow

About Roy

Personal Information

  • Location
    San Antonio, TX

Computer Information

  • Operating System
    NoobOS
  • CPU
    Intel i7-8700K @ 3.7 GHz
  • GPU
    RTX 2070
  • RAM
    16 GB DDR4
  • Power Supply
    750W
  • Monitor(s)
    1 x 2K 144 Hz, 1 x 1080p 60 Hz
  • Hard Drive(s)
    1 x 512 GB SSD, 1 x 3 TB HDD

Recent Profile Visitors

158,977 profile views
  1. Hey everyone, I just wanted to share a big project I've been working hard on over the last few days that will highly benefit GFL's Anycast network once fully completed. You can read about the benefits of this program for GFL in this post.

    Description

    A program made to attach to the TC hook using the egress filter. This program makes it so any outgoing IPIP packets are sent directly back to the client instead of back through the IPIP tunnel. In cases where you don't need the end application's replies to go back through the forwarding server/IPIP tunnel, this is very useful and results in less load on the forwarding server. In other cases, it can also reduce latency.

    Usage

    Usage is as follows:

    ./IPIPDirect_Loader <Interface> [<Interface IP>]

    You shouldn't need the second argument (Interface IP), since the program is supposed to get the interface's IP automatically (a rough sketch of how such a lookup can work is included after this post). Example:

    ./IPIPDirect_Loader ens18

    Installation

    Use the Makefile to install the program. These commands should do:

    make
    make install

    You may also clean the installation by executing:

    make clean

    Systemd File

    A systemd file is located in the other/ directory and is installed via make install. You will need to edit the systemd file if you are using an interface other than ens18. You may enable the service so it starts on bootup by executing:

    systemctl enable IPIPDirect

    You may start/stop/restart the service by executing:

    systemctl restart IPIPDirect # Restart service.
    systemctl stop IPIPDirect # Stop service.
    systemctl start IPIPDirect # Start service.

    Kernel Requirements

    Kernel >= 5.3 is required for this. Newer kernels add the BPF_ADJ_ROOM_MAC mode to the bpf_skb_adjust_room() function, which is needed for this program to work correctly.

    Notes

    When compiling, you may need to copy /usr/src/linux-headers-xxx/include/uapi/linux/bpf.h to /usr/include/linux/bpf.h. For some reason, newer kernels don't have an up-to-date /usr/include/linux/bpf.h file. I'm unsure whether this is intentional or a bug; however, I got the program to compile properly by copying that file.

    Credits

    @Roy - Creator.

    GitHub Link/Download

    Thank you!
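    As referenced in the usage section above, here's a minimal sketch of how an interface's IPv4 address can be looked up automatically in C with getifaddrs(). This is illustrative only and isn't necessarily how the actual loader does it:

    // Minimal sketch (illustrative): resolve an interface's IPv4 address
    // with getifaddrs(). The real loader may do this differently.
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <ifaddrs.h>
    #include <netinet/in.h>

    static int get_iface_ipv4(const char *ifname, char *buf, size_t len)
    {
        struct ifaddrs *ifap, *ifa;
        int ret = -1;

        if (getifaddrs(&ifap) != 0)
            return -1;

        for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next)
        {
            // Skip entries without an address or that aren't IPv4.
            if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET)
                continue;

            if (strcmp(ifa->ifa_name, ifname) != 0)
                continue;

            struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;

            if (inet_ntop(AF_INET, &sin->sin_addr, buf, len) != NULL)
                ret = 0;

            break;
        }

        freeifaddrs(ifap);

        return ret;
    }

    int main(int argc, char **argv)
    {
        char ip[INET_ADDRSTRLEN];

        if (argc < 2 || get_iface_ipv4(argv[1], ip, sizeof(ip)) != 0)
        {
            fprintf(stderr, "Usage: %s <interface>\n", argv[0]);

            return 1;
        }

        printf("%s => %s\n", argv[1], ip);

        return 0;
    }

    Running something like this against ens18 would print that interface's IPv4 address, which is all the loader needs when the second argument is omitted.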
  2. Update

    I am happy to announce that I've created a project that achieves what I've wanted! This project uses the TC egress hook/filter and searches for outgoing IPIP packets. Once an outgoing IPIP packet is found, it strips the outer IP header, changes the inner IP header's source address to the forwarding server's address (the game server IPs in our case), recalculates all of the headers' checksums, and sends the packet to the upper layers. This results in the packet being sent back directly to the client instead of going back through the forwarding server (in our case, the game server's closest POP). A rough sketch of the core logic is included after this post.

    I've been working very hard on this over the last couple of days, and while I've run into multiple headaches, I'm very happy I was able to get everything pretty much working. I still need to do more testing, specifically with TCP traffic, but so far everything is good! I tested this in my local environment and everything worked. I was able to play on the game server fine and there were no performance issues. The TC hook is pretty fast as well, so this program should achieve high performance.

    One thing I want to note is that a newer kernel version is required to run this program on our game server machines. Therefore, once I'm done testing everything and know for a fact that it'll run well, I will need to schedule maintenance for each game server machine and upgrade the kernel.

    Advantages To Running This Program

    Here are the benefits of running this program on our game server machines:

    • Less load on our POP servers, since game server traffic won't be routed back through the server's closest POP server on our network.
    • Less latency, since packets won't be routed back through the closest POP. Instead, they'll be sent directly back to the client. Our game servers have good routing, so I'm not concerned there.
    • Eliminates a single point of failure.
    • Due to less load on our POP servers, performance should increase a ton, and the lag issues we've been experiencing in NYC should come to an end.
    • Since game server traffic won't be going through the closest POP, we won't need a beefy POP server in each location our game servers reside in.
    • Less overage fees (this is a killer for us right now and costing us $300 - $400/m).

    There aren't really any cons if this program works properly and TC does a good job at modifying the packets. I've released the project here on GitHub.

    Once I have another update, I will let you all know. Thank you!
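    As referenced above, here's a rough, untested sketch of the core TC egress logic described in this update. It is illustrative only (not the actual IPIPDirect source), leaves out the L4 checksum handling, and assumes kernel >= 5.3 for BPF_ADJ_ROOM_MAC:

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/in.h>
    #include <stddef.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    SEC("egress")
    int tc_egress(struct __sk_buff *skb)
    {
        void *data = (void *)(long)skb->data;
        void *data_end = (void *)(long)skb->data_end;
        struct ethhdr *eth = data;

        // Bounds checks keep the BPF verifier happy.
        if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
            return TC_ACT_OK;

        struct iphdr *outer = (void *)(eth + 1);

        if ((void *)(outer + 1) > data_end)
            return TC_ACT_OK;

        // Only touch outgoing IPIP packets.
        if (outer->protocol != IPPROTO_IPIP)
            return TC_ACT_OK;

        // The outer destination is the forwarding (game server) IP; the
        // inner source must be rewritten to it after stripping.
        __u32 new_src = outer->daddr;

        // Strip the outer IP header (requires kernel >= 5.3).
        if (bpf_skb_adjust_room(skb, -(int)sizeof(struct iphdr),
                                BPF_ADJ_ROOM_MAC, 0) != 0)
            return TC_ACT_SHOT;

        // adjust_room invalidates packet pointers; re-validate them.
        data = (void *)(long)skb->data;
        data_end = (void *)(long)skb->data_end;
        eth = data;

        if ((void *)(eth + 1) > data_end)
            return TC_ACT_SHOT;

        struct iphdr *inner = (void *)(eth + 1);

        if ((void *)(inner + 1) > data_end)
            return TC_ACT_SHOT;

        __u32 old_src = inner->saddr;
        __u32 saddr_off = sizeof(*eth) + offsetof(struct iphdr, saddr);
        __u32 check_off = sizeof(*eth) + offsetof(struct iphdr, check);

        // Rewrite the inner source address and incrementally fix the IP
        // checksum. (The real program must also update the TCP/UDP
        // checksums, e.g. with bpf_l4_csum_replace(), since those cover
        // the pseudo-header.)
        bpf_skb_store_bytes(skb, saddr_off, &new_src, sizeof(new_src), 0);
        bpf_l3_csum_replace(skb, check_off, old_src, new_src, sizeof(new_src));

        return TC_ACT_OK;
    }

    char _license[] SEC("license") = "GPL";

    Again, this is just the general flow; the released source handles more edge cases.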
  3. who said you could visit my profile

  4. Hey everyone, I just wanted to provide an update on resolving performance issues with our NYC servers. You can read more about the issue itself at the end of this post.

    Recently, I've been trying to figure out how to get the game server machines to send traffic back to the clients directly instead of back through the IPIP tunnel (and closest POP server). This would result in less load on our POP servers, which appears to be the main issue right now. With that said, it would also mean less latency and overall better performance/consistency, plus fewer bandwidth overage fees than we're paying at the moment.

    Unfortunately, SRCDS doesn't appear to support binding to two separate interfaces (one for sending and one for receiving). Therefore, the easier solution is sadly not possible: adding a veth pair inside the network namespaces that the IPIP tunnel endpoint and game servers reside in, making the veth peer the default route (along with a next-hop IP to the bridge on the main host), and adding an SNAT rule to the POSTROUTING IPTables chain under the NAT table.

    Instead, I am trying to create a program that modifies outgoing IPIP packets. Since the IPIP tunnel sends traffic back out the default interface on the server, I can just attach to that interface. Originally, I was trying to make a program that uses AF_PACKET sockets (a bare-bones sketch of that socket setup is included after this post). However, the initial approach was somewhat incorrect, since I wasn't binding to the default interface but instead to the IPIP tunnel itself inside the namespace. For some reason, the program wasn't able to send packets back out through the network namespace. I made a Stack Overflow thread here, but didn't receive any response (probably because not many people do this, and I'm sure I was missing some sort of IPTables rule, lol). The other problem with this approach is that standard AF_PACKET sockets result in the kernel copying each packet to user space (not zero-copy), which is A LOT slower. However, I just wanted to see if my theory would work.

    A couple of hours after trying to create the program above, I found out that outgoing IPIP packets go back out the default interface, and I also thought of a better idea: what if I made an XDP program that modified these outgoing packets?! XDP would be a lot faster than standard AF_PACKET sockets and does its work inside the kernel. Last Sunday I spent most of the day making this program. However, after debugging and research, I found out that XDP does NOT support the TX path at the moment, which is needed for outgoing packets. This was a huge upset for me since I had the program pretty much made and good to go. I even released the source code on GitHub for when XDP does support the TX path (read below). Some changes will probably need to be made once support is added, but there shouldn't be many.

    After this upset, I decided to make a thread on the XDP mailing list here. I just wanted to know what others thought and whether AF_XDP sockets (XDP userspace/ex-AF_PACKETv4 sockets) would support modifying outgoing packets. Unfortunately, but as expected, AF_XDP sockets only support receiving traffic from the XDP program via the redirect function. However, XDP TX path support will be added in the future, and it's currently being worked on by David Ahern here. I really appreciate all the work and help these people put into this project! Anyways, there is no ETA on when XDP TX path/egress support will be added. Therefore, I continued asking for more help here and clarified our situation.
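    (As an aside, here's the bare-bones AF_PACKET receive loop mentioned above, bound to a single interface. It's illustrative only — "ens18" is just an example name — and it shows the slow path: every frame is copied from the kernel into user space.)

    // Illustrative AF_PACKET capture bound to one interface (needs root).
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <arpa/inet.h>
    #include <net/if.h>
    #include <linux/if_ether.h>
    #include <linux/if_packet.h>
    #include <sys/socket.h>

    int main(void)
    {
        // Raw socket that sees every protocol on the interface.
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

        if (fd < 0)
        {
            perror("socket");

            return 1;
        }

        // Bind to a single interface ("ens18" as an example).
        struct sockaddr_ll sll;
        memset(&sll, 0, sizeof(sll));

        sll.sll_family = AF_PACKET;
        sll.sll_protocol = htons(ETH_P_ALL);
        sll.sll_ifindex = if_nametoindex("ens18");

        if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) != 0)
        {
            perror("bind");

            return 1;
        }

        unsigned char buf[2048];

        for (;;)
        {
            // recvfrom() copies the whole frame into user space; this
            // per-packet copy is what makes this approach slow.
            ssize_t len = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);

            if (len > 0)
                printf("got frame of %zd bytes\n", len);
        }

        close(fd);

        return 0;
    }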
    I received the following response this morning from Toke Høiland-Jørgensen:

    While I've heard of the TC hook before, I haven't actually made any programs that use it. This will be my first attempt, and since there is no ETA on the XDP TX path support, I'm going for it. I found this useful guide on making TC programs (along with XDP programs, but I already know how to make those). I'll have to make the BPF program and load it onto the interface. This shouldn't be too hard once I learn how to load TC BPF programs onto the interface using a loader C program, along with learning how SKBs work (something I've seen before, but have wanted to learn properly for quite a while now). I will read this article after work today or during my lunch break.

    I have a skeleton of the TC BPF program made now:

    #include <linux/bpf.h>
    #include <linux/pkt_cls.h>
    #include <iproute2/include/bpf_elf.h>
    #include <linux/if_ether.h>
    #include <linux/udp.h>
    #include <linux/tcp.h>
    #include <linux/icmp.h>
    #include <linux/ip.h>
    #include <linux/in.h>
    #include <inttypes.h>
    #include <bpf/bpf_helpers.h>

    struct bpf_map_def SEC("maps") interface_map =
    {
        .type = BPF_MAP_TYPE_ARRAY,
        .key_size = sizeof(uint32_t),
        .value_size = sizeof(uint32_t),
        .max_entries = 1
    };

    SEC("egress")
    int tc_egress(struct __sk_buff *skb)
    {
        // Packet processing will go here; for now, pass everything through.
        return TC_ACT_OK;
    }

    In conclusion, I understand this issue is impacting our NYC servers quite a bit (especially Rust Modded). I hope you can understand that I am trying my best to get this issue resolved in a timely manner; unfortunately, the issue itself is very complex. In the meantime, I might try something else: booting up more POP servers in NYC specifically. Technically, each game server would then route to a different POP server in that location, handled by a load balancer (round robin). This may resolve the performance issues as well, but we'll have to see.

    Thank you for reading.
  5. roys boobies 🤤

    1. Roy

      my milkshake brings all the boys to the yard ;) 

  6. happy birthday :) 

  7. You didn't wish me happy birthday... 😔

  8. To be honest, nothing has changed for me, since I'm already working from home and spend most of my time at home doing technical work anyway. Though, I have noticed my job has been a lot busier and more stressful recently, since everybody is moving to WFH and the technical admins need help setting things up.
  9. Hey everyone, Since we've migrated to our new web machine, we've been having issues sending outbound mail to clients. This should be resolved as of an hour ago, after some changes to our mail server on the web machine. If you continue to experience issues, please let me know. I've confirmed I am able to receive emails from GFL on my Gmail account now, the SPF record and so on pass, and the mail didn't arrive in the spam/junk folder. Thank you!
  10. We beat our record since mid-2016 again :)

     

    (screenshot: player count record, 03-28-2020)

     

    Hopefully soon we can get to 1000. Keep up the great work everyone 😄

     

    Also, another update on the performance issues with the IPIP tunnels: I thought of a great idea while in the shower. I plan to make an XDP program that'll make outbound IPIP tunnel traffic get sent to the clients directly instead of back through the POP servers. XDP is the fastest thing we can use for this right now.

    Basically, the XDP program will capture all packets going in and out of the game server machine's main interface (the IPIP outbound traffic goes back out this interface). The program will check the outer IP header's protocol, and if it matches IPPROTO_IPIP, it will continue. It will then check the outer IP header's source address, and if that matches the IP of the machine, this tells us it's an outgoing IPIP packet. From there, it'll save the outer IP header's destination address, which should be the game server IP. We will then strip the outer IP header and replace the inner IP header's source address with the saved outer IP destination address. From there, we'll recalculate the IP and transport protocol headers' checksums and do return XDP_TX; within the XDP program, which will retransmit the packet with our changes. Any other traffic will hit return XDP_PASS; and be passed down the network stack. (A rough sketch of the match logic is included after this post.) I plan to get this completed by mid-next week.

    @Dreae also had another idea for the forwarding program itself: doing plain NAT rather than IPIP. This would result in more consistency and better performance, but I'm unsure how long it would take to implement. Therefore, I will be making the XDP program as a temporary solution.
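    As referenced above, here's a rough sketch of just the match logic for outgoing IPIP packets in XDP. It's illustrative only; machine_ip is a hypothetical constant the loader would fill in, and (as the later updates cover) XDP turned out not to support this TX/egress path yet, though the classification itself carries over to TC:

    #include <linux/bpf.h>
    #include <linux/if_ether.h>
    #include <linux/ip.h>
    #include <linux/in.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_endian.h>

    // Hypothetical: the machine's public IP, filled in by the loader.
    volatile const __u32 machine_ip = 0;

    SEC("xdp")
    int xdp_ipip_match(struct xdp_md *ctx)
    {
        void *data = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        struct ethhdr *eth = data;

        if ((void *)(eth + 1) > data_end || eth->h_proto != bpf_htons(ETH_P_IP))
            return XDP_PASS;

        struct iphdr *outer = (void *)(eth + 1);

        if ((void *)(outer + 1) > data_end)
            return XDP_PASS;

        // Outer protocol must be IPIP and the outer source must be this
        // machine; that combination identifies an *outgoing* tunnel packet.
        if (outer->protocol != IPPROTO_IPIP || outer->saddr != machine_ip)
            return XDP_PASS;

        // Here the outer header would be stripped (bpf_xdp_adjust_head()),
        // the inner source rewritten to the saved outer destination, and
        // the checksums recalculated, before retransmitting the modified
        // packet with XDP_TX. Everything else falls through to XDP_PASS.

        return XDP_TX;
    }

    char _license[] SEC("license") = "GPL";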

     

    Looking pretty good :) 

    1. Suicide

      Good stuff man, great progress since 2011.

    2. k2nod

      omg I just checked 953 that's insane almost a thousand people playing on our server at the same time. Congrats!!!

  11. Hey everyone, I've been seeing a high number of reports recently regarding users experiencing lag on our game servers. If you see that your ping is higher than normal or you're receiving packet loss, this thread is for you.

    If you're seeing these symptoms, please perform a trace route or an MTR (recommended) to the IP of the server you're playing on. On Windows, you can perform a trace route by executing the following command in Command Prompt:

    tracert <server IP>

    You will need to replace <server IP> with the IP address of the server you're playing on. For an MTR, you may download a tool such as WinMTR. You will have to fill out the text field at the top of the program with the server's IP address and click start. Please allow the program to send 100 - 200 requests, and try to do this while you're experiencing high latency/packet loss on the server. An MTR is A LOT better than a trace route in this case, since it shows packet loss at each hop and continuously sends requests to the destination. Therefore, I HIGHLY recommend using MTR over trace route for this. For Linux, you can just install the packages needed for the traceroute and mtr commands (probably net-tools).

    Once you have these results, please PM them to me. I've gotten results from one individual (Lurn) from CS:GO Arena so far. According to their results, one of our direct peers (NTT) is experiencing issues, and I suspect this is due to the recent traffic spike caused by the coronavirus lockdown. I've been seeing similar issues at my job recently, especially with Microsoft, etc. I suspect other users are experiencing this same issue and I just want to confirm it. If I am able to retrieve a direct peers list from our POP hosting provider, I may be able to take NTT off our direct peers list temporarily. However, our POP hosting provider still hasn't given me that list after I've asked for it three or four times now.

    If you're experiencing performance issues on our Rust servers, this may be caused by the issue I explain at the end of this thread. I am working to find a fix for this; feel free to read my latest status comment for an update on that.

    Thank you for your time.
  12. I'm so close to getting my theory to work, which would improve the Anycast network and eliminate the performance issues discussed at the end of this thread. I'm having issues getting my test program to send outbound IPIP tunnel packets through a veth pair. I made a Stack Overflow thread here and hope to receive a reply on what I'm doing wrong. If I can get this theory working properly, the next step is finding a faster solution than AF_PACKET sockets. Once that's figured out, I should be able to make a program that sends outbound packets from our game servers to the clients directly instead of through the POP servers. This will result in better performance on our network and lower latency 😄 

  13. high ping

    You will want to perform a trace route or an MTR (recommended) to the server's IP (92.119.148.20). For a trace route, you may open Windows Command Prompt and execute the following command:

    tracert 92.119.148.20

    For an MTR, I'd suggest installing a tool such as WinMTR. After opening the program, you'll want to enter the server's IP (92.119.148.20) in the text field at the top and click start. I'd suggest letting it send 100 - 200 packets. I would prefer an MTR over a trace route, since MTRs include packet loss at each hop and continuously send requests to the destination. The above is for Windows; if you're using another operating system, please let me know which one and I'll give you instructions for that.

    I've noticed one of our direct peers (NTT) has been overloaded recently. Some users are experiencing packet loss and high ping at those hops as a result. If I had to guess, it's probably from the large spike in traffic recently due to the coronavirus, but I'm not entirely sure. Once you have this information, feel free to PM me and I'll look into it further. Thank you.
  14. Miboi still making donations to the community ❤️ 

    (screenshot of the donation, 2020-03-25)

  15. Hey there old pal.

    1. Roy

      Wadduppp, it has been a while!

    2. Suicide

      It def has! I'm gonna add you on Steam, we'll catch up.

    3. Roy

      Do you have a Discord? If so, add me through there instead - cdeacon#6401

       

      I'm not very active on Steam tbh lol
