<?xml version="1.0"?>
<rss version="2.0"><channel><title>Updates Latest Topics</title><link>https://gflclan.com/forum/204-updates/</link><description>Updates Latest Topics</description><language>en</language><item><title>Update: Introducing Our Newest Public Server Machine!</title><link>https://gflclan.com/topic/83648-update-introducing-our-newest-public-server-machine/</link><description><![CDATA[<p>
	Hi everyone!
</p>

<p>
	 
</p>

<p>
	Just a quick update on something cool - we have recently taken delivery of a brand new dedicated machine from GSK:
</p>

<ul>
	<li>
		Ryzen 9 7900X CPU
	</li>
	<li>
		128GB DDR5 RAM
	</li>
	<li>
		2TB NVMe SSD
	</li>
</ul>

<p>
	 
</p>

<p>
	This machine will replace two of our existing public server machines in Dallas, GS15 and GS18, and will be labelled GS-US3 (following the new machine naming convention). This is quite the upgrade from GS15/GS18, both running an Intel i9-10900K CPU!
</p>

<p>
	 
</p>

<p>
	A selection of servers will be transferred to this new machine and will benefit greatly from the performance improvement:
</p>

<ul>
	<li>
		CS:GO Surf #1 Beginner, #2 Novice, and 24/7 Utopia
	</li>
	<li>
		CS:GO KZ #1 Easy &amp; #2 Expert
	</li>
	<li>
		CS:GO Bhop #1 &amp; #2
	</li>
	<li>
		GMod CWRP
	</li>
	<li>
		GMod TTT Anarchy
	</li>
	<li>
		GMod TTT Rotation
	</li>
	<li>
		GMod Prop Hunt
	</li>
	<li>
		TF2 2Fort Vanilla
	</li>
	<li>
		TF2 2Fort Improved #1 &amp; #2
	</li>
	<li>
		TF2 Hightower and Hightower #2
	</li>
	<li>
		CS:S Zombie Escape
	</li>
</ul>

<p>
	 
</p>

<p>
	This performance upgrade is much needed across quite a few parts of GFL; for instance, it should allow CS:GO Surf to increase the total player capacity on some of its bigger servers.
</p>

<p>
	 
</p>

<p>
	We will be migrating these servers over time as we can. Most migrations only result in a total downtime of 10-15 minutes, so they can be done pretty quickly! This new machine is spec'd with quite a lot of memory and disk, so we should be able to use it for future expansions and will have quite a bit of capacity remaining once the initial migrations are completed. (We will have ~8 CPU threads unallocated once these migrations happen!)
</p>

<p>
	 
</p>

<p>
	 
</p>
]]></description><guid isPermaLink="false">83648</guid><pubDate>Wed, 19 Apr 2023 03:42:21 +0000</pubDate></item><item><title>12 Years of GFL</title><link>https://gflclan.com/topic/82844-12-years-of-gfl/</link><description><![CDATA[
<p>
	Hi everyone!
</p>

<p>
	 
</p>

<p>
	The 25th was GFL's 12-year anniversary! It is almost surreal to think about how far GFL has come over the years, through its various ups and downs (of course, it wouldn't be the GFL we know and love without all the drama), and just how many generations of people have come and gone through all this time. GFL continuing to thrive after all these years wouldn't be possible without you all - thank you all so much for being here and being supportive!
</p>

<p>
	 
</p>

<p>
	Last year, we celebrated GFL's anniversary by having <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1623-annoying-furry/?do=hovercard" data-mentionid="1623" href="https://gflclan.com/profile/1623-annoying-furry/" id="ips_uid_7524_6" rel="">@<span style="color:rgb(178,34,34)">annoying furry</span></a> resign from her Director spot, fun! This time, I would like to do something a little less entertaining - reflect on some of our achievements over the past year, and talk about a few plans for the coming year!
</p>

<p>
	 
</p>

<p>
	 
</p>

<p style="text-align: center;">
	<span style="font-size:36px;">-  2 0 2 2  -</span>
</p>

<p>
	 
</p>

<p>
	Of course, I need to start this section with xQc's visit to GFL a few months back during one of his streams:
</p>

<div class="ipsSpoiler" data-ipsspoiler="">
	<div class="ipsSpoiler_header">
		<span>Spoiler</span>
	</div>

	<div class="ipsSpoiler_contents ipsClearfix" data-gramm="false">
		<div class="ipsEmbeddedVideo" contenteditable="false">
			<div>
				<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="" frameborder="0" height="113" src="https://www.youtube-nocookie.com/embed/q5gScrph0pA?start=760&amp;feature=oembed" title="xQc Surfing on GFL Highlights" width="200"></iframe>
			</div>
		</div>
	</div>
</div>

<p>
	 
</p>

<p>
	On that note, we've managed to achieve quite a few things this past year! This won't be an exhaustive reflection on the whole year by any means, but just a short list of things we did:
</p>

<ul>
	<li>
		We hosted our first ever major event, the GFL CS:GO Surf Tournament, in June of 2022! We had a prize pool of $2606.69 USD that was distributed to the Top 3 players in the event, making it the biggest tournament Surf has ever seen:
		<div class="ipsSpoiler" data-ipsspoiler="">
			<div class="ipsSpoiler_header">
				<span>Spoiler</span>
			</div>

			<div class="ipsSpoiler_contents ipsClearfix" data-gramm="false">
				<div class="ipsEmbeddedVideo" contenteditable="false">
					<div>
						<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen="" frameborder="0" height="113" src="https://www.youtube-nocookie.com/embed/uPOA19ywN-A?start=887&amp;feature=oembed" title="GFL Surf Tournament 2022 - Grand Final: bzukey VS. Liquidator" width="200"></iframe>
					</div>
				</div>

				<p>
					 
				</p>
			</div>
		</div>
	</li>
</ul>

<ul>
	<li>
		Speaking of events - our Christmas Santa event, where we gave out a wide variety of gifts, had over 200 participants! We gave out about 95 Steam keys, 31 months of VIP, 10 months of Supporter, and many more in-game rewards such as 160,000 GMod CWRP credits and 80+ CS:GO Surf player models!
		<ul>
			<li>
				We also hosted our first Minecraft event in a very long time, <a href="https://gflclan.com/topic/79150-event-minecraft-factions-april1st-april30th/" rel="">the Factions event!</a>
			</li>
		</ul>
	</li>
	<li>
		We sponsored two CS:GO Mapping Contests, the <a href="https://gflclan.com/topic/80596-announcement-mapeadores-zombie-apocalypse-ze-map-contest/" rel="">Mapeadores CS:GO Zombie Escape</a> and the <a href="https://gflclan.com/topic/81005-announcement-gamebanana-csgo-jailbreak-mapping-contest-2022/" rel="">GameBanana CS:GO Jailbreak</a> mapping contests!
	</li>
	<li>
		The forums have received a ton of much needed love from <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1028-liloz01/?do=hovercard" data-mentionid="1028" href="https://gflclan.com/profile/1028-liloz01/" id="ips_uid_6059_14" rel="">@<span style="color:#ff0000">Liloz01</span></a> throughout this year! Here are some statistics on the forums for 2022:<iframe allowfullscreen="" class="ipsEmbed_finishedLoading" data-embedauthorid="1028" data-embedcontent="" data-embedid="embed9618935243" id="ips_uid_5163_4" src="https://gflclan.com/applications/core/interface/index.html" style="overflow: hidden; height: 210px; max-width: 502px;" data-embed-src="https://gflclan.com/topic/82678-gfl-forums-2022-in-numbers/?do=embed"></iframe>
	</li>
	<li>
		<p>
			Our infrastructure has evolved quite a bit over the past year, both in terms of cost saving and in capacity. We significantly <a href="https://gflclan.com/topic/82253-update-machine-upgrade-for-web-4/" rel="">upgraded our web infrastructure</a> and <a href="https://gflclan.com/topic/81822-update-2-new-machines-1-current-machine-and-1-new-region/" rel="">have grown our Private Server infrastructure</a> due to the program's growth! On the other hand, we have saved the community quite a bit on monthly expenses by <a href="https://gflclan.com/topic/81115-physion-migration-72322-through-12722/" rel="">consolidating Physion's infrastructure with GFL's</a> core existing hardware.
		</p>
	</li>
</ul>

<p>
	 
</p>

<p style="text-align: center;">
	<span style="font-size:36px;">-  2 0 2 3  -</span>
</p>

<p>
	 
</p>

<p>
	2022 was a good year for us in terms of overall stability and growth! We were able to try a few things we had never done before, and we intend on pushing forward with the many things we've learned. Here are a few of our rough plans that we intend on pursuing over the course of the year:
</p>

<p>
	 
</p>

<ul>
	<li>
		To keep up with the growth of the Surf Private Servers program, we're currently awaiting delivery of a dedicated machine in London! This won't be a traditional expansion as we're starting out only with Surf private servers, but this may change down the line.
	</li>
	<li>
		We aim to release GFLStore within the first half of this year, if all goes well! This new store will allow players to donate for specific packages and premium perks on many of our servers.
	</li>
	<li>
		Physion is planning on major plugin upgrades as well, mostly pertaining to quality of life fixes as well as some new features for players of all ranks.
	</li>
	<li>
		The Events Team is hard at work planning and preparing for more large scale events, similar to the aforementioned Minecraft event!
	</li>
	<li>
		Last, but certainly not least, we will be working on our next iteration of the <strong>CS:GO Surf Tournament</strong>, with the goal of it being even bigger than the last one! Keep your eyes peeled, we will have more information later in the year. <span class="ipsEmoji">😄</span>
	</li>
</ul>

<p>
	 
</p>

<p>
	We'll be sure to post regular updates as we progress through our plans for the year! Once again, a huge thanks to everyone that has supported us all this time, we wouldn't be here without you all. <span class="ipsEmoji">💖</span>
</p>

<p>
	 
</p>

<p>
	Oh, also, we're running a small giveaway for 3-months of VIP and some Discord Nitro in our main Discord server to celebrate: <a href="https://discord.gg/njWJkR55Z3" ipsnoembed="true" rel="external nofollow" target="_blank">https://discord.gg/njWJkR55Z3</a>
</p>

]]></description><guid isPermaLink="false">82844</guid><pubDate>Thu, 26 Jan 2023 07:34:49 +0000</pubDate></item><item><title>Update: Machine Upgrade for WEB-4</title><link>https://gflclan.com/topic/82253-update-machine-upgrade-for-web-4/</link><description><![CDATA[
<p>
	Hi everyone!
</p>

<p>
	 
</p>

<p>
	Just wanted to share that we recently picked up a new dedicated machine to replace one of our existing web machines - the new machine is spec'd with a Ryzen 9 5950X (32 threads), 128 GB of DDR4 RAM, and 2 x 2TB NVMe SSDs! Compared to the previous machine from OVH, which was equipped with a Ryzen 5 5600X (12 threads) and 64GB DDR4 RAM, this machine is several times more capable.
</p>

<p>
	 
</p>

<p>
	WEB-4 is home to a lot of our key web services, such as our MariaDB database server dedicated to game servers and other vital services like the SurfAPI and our Discord bots. This machine upgrade should allow us to allocate far more resources to services than before, which means we should see significant performance improvements across the board along with being able to handle far more load.
</p>

<p>
	 
</p>

<p>
	We've acquired this machine in Los Angeles from <a href="https://tempest.net/dedicated-servers" rel="external nofollow">Tempest Hosting</a>, who were kind enough to introduce us to their network with a very good deal. Unfortunately, I'm unable to share the details on pricing of this machine, so all I can really say is that it is a very good price that should fit right in with our current monthly expenses on infrastructure. They've been very good to us throughout this and we're excited to see where this takes us!
</p>

<p>
	 
</p>

<p>
	<em>Also, seeing 32 available CPU threads in htop never gets old:</em>
</p>

<p>
	<img alt="2dt71.png" class="ipsImage" data-ratio="9.40" height="51" style="height: auto;" width="1000" data-src="https://i.rebooti.ng/f/2dt71.png" src="https://gflclan.com/applications/core/interface/js/spacer.png">
</p>

<p>
	 
</p>

<p>
	Yesterday, we scheduled a maintenance period to begin migrating services over to the new machine. <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1623-annoying-furry/?do=hovercard" data-mentionid="1623" href="https://gflclan.com/profile/1623-annoying-furry/" id="ips_uid_6399_14" rel="">@<span style="color:rgb(178,34,34)">annoying furry</span></a> was able to get quite a few things across; however, we will need more time in the near future to move the remaining services - announcements will be made in Discord when this is scheduled to happen. Once we have everything migrated, we'll be canceling the old machine from OVH.
</p>

<p>
	 
</p>

<p>
	 
</p>

<p>
	<span style="font-size:24px;"><strong>An Update on Migrating PHYSION-1</strong></span>
</p>

<p>
	 
</p>

<p>
	<a href="https://gflclan.com/topic/81822-update-2-new-machines-1-current-machine-and-1-new-region/" rel="">In a past update,</a> we mentioned our plans to merge Physion's Unturned servers with GFL's main game server infrastructure. Our goal at the time was to have <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/6-nick/?do=hovercard" data-mentionid="6" href="https://gflclan.com/profile/6-nick/" id="ips_uid_6399_5" rel="">@<span style="color:rgb(178,34,34)">Nick</span></a> custom build a bunch of required plugins so we didn't need to depend on third-parties and any potential DRM issues when moving, however, after speaking to GSK (our main hosting provider), we have figured out a way to instead move the IP address from the Physion machine directly to one of our primary game server machines in Dallas. This means we no longer need to wait for every plugin to be remade and can instead proceed with migrating as soon as possible!
</p>

<p>
	 
</p>

<p>
	<a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/421-salad/?do=hovercard" data-mentionid="421" href="https://gflclan.com/profile/421-salad/" rel="">@<span style="color:rgb(255,0,0)">Salad</span></a> and I will soon be working together on planning out the migration. Once completed, we should see significant performance improvements on all the Physion Unturned servers along with cutting $150 USD from monthly expenses.
</p>

<p>
	 
</p>

<p>
	<em>---</em>
</p>

<p>
	 
</p>

<p>
	Short update compared to the previous one, but that's mostly because many of us have been super busy with university and other IRL obligations. Now that we're heading into the holiday season, everyone should have far more time, and hopefully we can get some cool things going! The stuff mentioned in this update is part of our preparations for the previously mentioned region expansion and a few other things - hopefully I'll have more exciting stuff to share down the line. <span class="ipsEmoji">😄</span>
</p>

<p>
	 
</p>

<p>
	As always, feel free to send me a message on Discord if you have any questions or ideas for things we should do!
</p>

<p>
	 
</p>

]]></description><guid isPermaLink="false">82253</guid><pubDate>Sun, 06 Nov 2022 01:15:02 +0000</pubDate></item><item><title>Update: 2 New Machines, 1 Current Machine, and 1 New Region</title><link>https://gflclan.com/topic/81822-update-2-new-machines-1-current-machine-and-1-new-region/</link><description><![CDATA[
<p>
	Hi everyone,
</p>

<p>
	 
</p>

<p>
	I have a few cool updates I'd like to share with you all today. These updates involve two new machines we have acquired, our planned obsolescence of PHYSION-1 (Physion's dedicated machine for game servers), and progress on a potential region expansion!
</p>

<p>
	 
</p>

<p>
	It's been a while since one of these has been done. I will be working on changing that, so you should be hearing from me a lot more with giant (and probably boring) walls of text summarizing what GFL is up to! As always, if you have any questions, please don't hesitate to leave a comment or message me on Discord.
</p>

<p>
	 
</p>

<p>
	 
</p>

<p>
	<strong><span style="font-size:36px;">A New Machine Naming Convention</span></strong>
</p>

<p>
	 
</p>

<p>
	Before we get into the new dedicated machines: Aurora and I sat down and discussed revamping our current naming scheme for dedicated machines. We felt that it was misleading (with machine numbers starting from a weird index instead of reflecting the actual count of machines) and that the convention needed to be far more consistent across the board while also conveying useful regional information. Thus, I'm covering a brief overview of the proposed changes, as it'll give good context for why some references later in this thread use different names than the ones we have usually used.
</p>

<p>
	 
</p>

<p>
	With that, we have decided on a simple naming convention that addresses the aforementioned issues. I will spare you all the exact schema and instead show some of the potential machine name changes:
</p>

<p>
	 
</p>

<p>
	- GS15 -&gt; GS-US1 (GS = Game server, US = United States, 1 = Machine count based on identifier.)
</p>

<p>
	- WEB1 -&gt; WS-CA1 (WS = Web server, CA = Canada, 1 = Machine count based on identifier.)
</p>

<p>
	 
</p>

<p>
	The machine counts will not be incremented when a machine is replaced, rather, machine replacements will count as replacements for the same count identifier. For example, a replacement of GS-US1 will have the identifier remain as GS-US1 instead of changing to GS-US2. This should resolve an inconsistency we have with our current naming scheme, such as there being a gap between GS16 and GS18 (fun fact: there is no machine in between!)
</p>
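<p>
	To make the convention above concrete, here is a small illustrative parser for these names. This is purely a sketch of the scheme described in this post - the helper function and role table are hypothetical, not tooling GFL actually uses:
</p>

```python
import re

# Hypothetical helper illustrating the naming convention described above:
# <role>-<region><count>, e.g. "GS-US1" (game server, United States, machine #1).
NAME_RE = re.compile(r"^(GS|WS)-([A-Z]{2})(\d+)$")

ROLE_NAMES = {"GS": "Game server", "WS": "Web server"}

def parse_machine_name(name: str) -> dict:
    """Split a machine name like 'GS-US1' into its role, region, and count."""
    match = NAME_RE.match(name)
    if not match:
        raise ValueError(f"{name!r} does not follow the <role>-<region><count> scheme")
    role, region, count = match.groups()
    # The count is a stable identifier: a replacement machine keeps the same
    # number (GS-US1 stays GS-US1), so gaps like GS16 -> GS18 can't appear.
    return {"role": ROLE_NAMES[role], "region": region, "count": int(count)}

print(parse_machine_name("GS-US1"))
print(parse_machine_name("WS-CA1"))
```

<p>
	Old-style names like "GS15" intentionally fail to parse, which is the kind of consistency the new scheme is after.
</p>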

<p>
	 
</p>

<p>
	This naming schema is still a work-in-progress as we iron out some larger details to ensure we won't need to rework it in the future. That said, it will take a while before we fully ease our infrastructure into the new scheme, and we're not too worried about rushing it - however, we will be using it immediately for any new machines we get.
</p>

<p>
	<br>
	 
</p>

<p>
	<strong><span style="font-size:36px;">Upgraded Machine for GS-US4</span></strong>
</p>

<p>
	 
</p>

<p>
	<em>(In this context, GS-US4 is our fourth game server dedicated machine in the US, provided by NFO)</em>
</p>

<p>
	 
</p>

<p>
	We have recently acquired a new dedicated machine to replace our current NFO machine, swapping the Xeon E-2288G CPU (5 GHz turbo) for the newer Xeon E-2388G (5.1 GHz turbo) and upgrading to faster DDR4 RAM, up from 2666 MHz to 3200 MHz!
</p>

<p>
	 
</p>

<p>
	This should be a decent upgrade for our CS:GO Zombie Escape server, which is our primary resident on this machine. We're still in the process of setting it up and ensuring a smooth transition, with Vauff and Snowy hard at work preparing the move for the ZE server, but we should hopefully have everything ready to go soon! 
</p>

<p>
	 
</p>

<p>
	The new machine will cost us $279.99 USD/mo, which is an increase from our previous machine's rate of $249.99 USD/mo.
</p>

<p>
	<br>
	 
</p>

<p>
	<strong><span style="font-size:36px;">Our Very First Dedicated Machine for Private Servers!</span></strong>
</p>

<p>
	 
</p>

<p>
	For context, GFL began offering CS:GO Surf Private Servers on April 3rd this year after quite a lot of demand from the community. The private servers aim to provide players and their friends an opportunity to privately grind out world records on our collection of maps without needing to worry about other players, while simultaneously supporting GFL.
</p>

<p>
	 
</p>

<p>
	Since we started the project, we have been using third-party hosting providers to host our game servers in our supported locations. While this worked great for us in the short term as we could support more locations without needing a significant presence around the world, we very quickly ran into issues keeping up with the demand and automating required processes. It was clear that we needed to ramp up our efforts to ensure everyone gets a chance at this, so we decided to set some goals and try pushing onward so we could start stocking locations with our own hardware.
</p>

<p>
	 
</p>

<p>
	I'm happy to announce that we've finally hit those goals and have placed the order for our very first machine dedicated solely to private servers! We are getting this machine from GSK, and it will be set up alongside the rest of our current infrastructure in the Dallas data center. It's a moderate machine designed to keep costs low while still supporting a fair number of servers, with a 12-thread AMD Ryzen 5 3600X that should comfortably run 10-12 low-pop CS:GO servers without any issues. We will be paying very close attention to how we allocate CPU resources/threads to each server, just like we do for our public game servers, to ensure that we make the most of the CPU without going overboard.
</p>

<p>
	 
</p>

<p>
	As a neat bit of insight, here is a snippet of our spreadsheet that shows how we allocate resources on our GS machines at the moment, with GS-US4 as an example (it's a new machine, so it's very empty!):
</p>

<div class="ipsSpoiler" data-ipsspoiler="">

	<div class="ipsSpoiler_header">
		<span>Spoiler</span>
	</div>

	<div class="ipsSpoiler_contents ipsClearfix" spellcheck="false">
		<p>
			<img alt="yznoo.png" class="ipsImage" data-ratio="40.20" height="277" style="height: auto;" width="1000" data-src="https://i.rebooti.ng/f/yznoo.png" src="https://gflclan.com/applications/core/interface/js/spacer.png">
		</p>
	</div>
</div>
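<p>
	The thread-allocation bookkeeping described above can be sketched in a few lines. This is only an illustration of the idea behind the spreadsheet - the server names and thread counts below are made-up examples, not GFL's real allocations:
</p>

```python
# Illustrative sketch of per-server CPU thread bookkeeping, similar in spirit
# to the allocation spreadsheet shown above. Server names and thread counts
# here are hypothetical examples.
TOTAL_THREADS = 12  # e.g. a 12-thread Ryzen 5 3600X

allocations = {
    "surf-private-1": 1,
    "surf-private-2": 1,
    "surf-private-3": 1,
}

def remaining_threads(allocs: dict, total: int = TOTAL_THREADS) -> int:
    """Return the number of threads still unallocated on the machine."""
    used = sum(allocs.values())
    if used > total:
        raise ValueError(f"over-allocated: {used} > {total} threads")
    return total - used

print(remaining_threads(allocations))  # 9
```

<p>
	Tracking it this way makes "going overboard" an explicit, checkable condition rather than something discovered when servers start starving each other for CPU.
</p>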

<p>
	 
</p>

<p>
	Another great benefit of having our own hardware for this is that we can work on automating more of the process. This takes a lot of stress off of everyone responsible for it; our "ideal" situation would be one where the system basically maintains itself. As of right now, our automation consists of a lot of GitLab CI/CD pipelines that allow <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/45745-dini/?do=hovercard" data-mentionid="45745" href="https://gflclan.com/profile/45745-dini/" id="ips_uid_5222_4" rel="">@<span style="color:rgb(30,233,227)">Dini</span></a> to make mass changes and updates to the servers with one single move, as well as a Discord bot to help with automatically setting up and deploying servers with all needed files:
</p>

<div class="ipsSpoiler" data-ipsspoiler="">

	<div class="ipsSpoiler_header">
		<span>Spoiler</span>
	</div>

	<div class="ipsSpoiler_contents ipsClearfix" spellcheck="false">
		<p>
			<img alt="xmgiu.png" class="ipsImage" data-ratio="101.32" height="461" style="height: auto;" width="455" data-src="https://i.rebooti.ng/f/xmgiu.png" src="https://gflclan.com/applications/core/interface/js/spacer.png">
		</p>
	</div>
</div>

<p>
	 
</p>

<p>
	GSK has been brilliant in ensuring that we can get future machines with exact/similar specs activated in Dallas within short time-frames, in case we ever need to deal with any future surges in demand. <span class="ipsEmoji">😄</span>
</p>

<p>
	<br>
	 
</p>

<p>
	<span style="font-size:36px;"><strong>Planned Removal of the PHYSION-1 Machine</strong></span>
</p>

<p>
	 
</p>

<p>
	We mentioned this in a Council meeting previously, and I'm happy to say that we are getting much closer to our goal of eventually dropping the PHYSION-1 machine from our line-up. Part of this involves migrating all of our Unturned servers to our existing game server machines, which will result in a significant performance upgrade and save us approximately $150 USD/mo in expenses!
</p>

<p>
	 
</p>

<p>
	As of right now, <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/6-nick/?do=hovercard" data-mentionid="6" href="https://gflclan.com/profile/6-nick/" rel="">@<span style="color:rgb(178,34,34)">Nick</span></a> is hard at work re-creating our premium plugins from the ground up to replace the ones we currently have. This will give us full control over the quality of experience we offer along with freeing us from DRM restrictions currently in place with a lot of these plugins. Once this is complete, we should be able to begin the process of slowly migrating servers over to the new machine.
</p>

<p>
	 
</p>

<p>
	This move is a win-win for us, as our Unturned servers receive a huge performance boost (moving from an AMD Ryzen 5 PRO 3600 @ 4.1 GHz to an Intel i9-10900K @ 5.3 GHz) along with saving GFL at least $150 USD/mo in expenses. You can read more about this in a document we shared during a past Council meeting here: <a href="https://docs.google.com/document/d/1cg2GpqOpxK4nN367tTmZVU-_JDiMVeXfDbVEoip5CIk/edit?usp=sharing" ipsnoembed="true" rel="external nofollow">https://docs.google.com/document/d/1cg2GpqOpxK4nN367tTmZVU-_JDiMVeXfDbVEoip5CIk/edit?usp=sharing</a>
</p>

<p>
	<br>
	 
</p>

<p>
	<strong><span style="font-size:36px;">Our Potential Region Expansion - Australia!</span></strong>
</p>

<p>
	 
</p>

<p>
	I am beyond excited to announce that we will officially be looking into expanding into Australia! We're working on planning out our approach over the long term, and we are looking to commit to the expansion for at least 6 months to 1 year (we'll confirm all of this as we plan things out over time).
</p>

<p>
	 
</p>

<p>
	For our initial approach, we're probably going to be entering the region with a line-up of CS:GO servers, as the game is very popular in the region. CS:GO Surf will likely be our flagship going into the region! That said, our intention here is to start small and grow as we build up in the region, so we likely will be starting with only a few specific servers that have higher chances of succeeding.
</p>

<p>
	 
</p>

<p>
	We're already working on setting up our foundations here, with work being done to set up new web infrastructure to host an extended wing of the SurfAPI network that currently powers our many CS:GO Surf servers around the world. We will likely be implementing Cloudflare's Geo-Steering load balancing features to ensure that the network performs to high standards and operates with a good level of redundancy (i.e., if one region's web services go down, traffic will be automatically routed to the other regions and operations will not halt).
</p>
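<p>
	The redundancy behaviour described above boils down to health-based failover: prefer the closest region, but automatically fall back to the next healthy one. In production this is handled by the load balancer (e.g. Cloudflare); the sketch below only illustrates the selection logic, and the region names are made up:
</p>

```python
# Minimal sketch of health-based regional failover: requests go to the first
# healthy region in preference order, so one region going down doesn't halt
# operations. Region names here are illustrative.
def pick_region(preference: list[str], healthy: set[str]) -> str:
    """Return the first healthy region in preference order."""
    for region in preference:
        if region in healthy:
            return region
    raise RuntimeError("no healthy regions available")

# A Sydney client prefers Sydney, then falls back to other regions.
sydney_preference = ["sydney", "us-east", "eu-west"]

print(pick_region(sydney_preference, {"sydney", "us-east", "eu-west"}))  # sydney
print(pick_region(sydney_preference, {"us-east", "eu-west"}))            # us-east
```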

<p>
	 
</p>

<p>
	To start, we have set up a CS:GO Surf invite-only server in Sydney, hosted on our Private Server infrastructure. For about a month now, we have been inviting players to come try out our Surf experience and begin competing in our diverse community, with over 35 people on the list at the moment and many more to come! This was done to help promote ourselves in the region as well as gauge interest from players - things seem to be going rather well, as we already have a few private server orders lined up in Sydney while players wait for us to plan out our full expansion with public servers.
</p>

<p>
	 
</p>

<p>
	The promotional surf server is also a great way for us to benchmark performance with our accessory services and the SurfAPI network as we plan out our expansion.
</p>

<p>
	 
</p>

<p>
	We are still evaluating potential hosting providers for dedicated machines in the region. That said, our base of operations will likely be in Melbourne and/or Sydney - this is where our infrastructure is likely to be located. This should cover a good part of Oceania, with great coverage of both Australia and New Zealand.  
</p>

<p>
	 
</p>

<p>
	This is still a work-in-progress, and we're hoping to have many more updates to come! We would like to take it slow this time instead of rushing it; this way, we'll have more time to spend on ensuring that everything goes well. Our next steps will be to continue evaluating what we will need to make this happen, along with which servers we should try our hand at in the region.
</p>

<p>
	 
</p>

<p>
	<em>P.S: If you're from the region and want to help out/be a part of this, feel free to shoot me a message! </em>
</p>

<p>
	 
</p>

<p>
	---
</p>

<p>
	 
</p>

<p>
	As mentioned near the start of this post and in the previous Council meeting, I will be doing more of these! I haven't been doing these at all when I really should have been, so please let me know if anyone has any ideas to make these better and/or more interesting for everyone.
</p>

<p>
	 
</p>

<p>
	One thing I would like to note is that these won't take away from any updates provided in the Council meetings, so we will continue to host updates on everything GFL at those meetings so everyone can ask questions live.
</p>

<p>
	 
</p>

<p>
	 
</p>

]]></description><guid isPermaLink="false">81822</guid><pubDate>Sat, 10 Sep 2022 12:04:07 +0000</pubDate></item><item><title>Website Updates</title><link>https://gflclan.com/topic/10908-website-updates/</link><description><![CDATA[<p>
	Every update to the website such as the addition of emotes or changes to groups will be documented here.
</p>
]]></description><guid isPermaLink="false">10908</guid><pubDate>Sun, 29 Jan 2017 05:50:13 +0000</pubDate></item><item><title>A2S_INFO Responses Being Malformed Should Be Fixed</title><link>https://gflclan.com/topic/75153-a2s_info-responses-being-malformed-should-be-fixed/</link><description><![CDATA[
<p>
	Hi everyone,
</p>

<p>
	 
</p>

<p>
	For a while now, people may have noticed servers randomly showing as password-protected, reporting a 255/255 player count, and having their host names cut off. We had this issue last year as well, but it had to do with our Sydney edge server and was corrected <a href="https://gflclan.com/forums/topic/55971-invalid-a2s_info-responses-bug-fixed/" rel="">here</a>.
</p>

<p>
	 
</p>

<p>
	I wasn't sure what was wrong this time, so I started a packet capture on our Redis server that handles distributing cached responses to all of our edge servers to send to our clients. I stopped the packet capture as soon as the issue was reported again; the capture file contained 2 million+ packets. I then filtered out all packets besides those containing the reported server's IP (in this case CS:GO Surf Timer #1, 92.119.148.18) and found this:<br>
	<br>
	<img alt="linux-laptop-bigmode-11-12-17.png" class="ipsImage" data-ratio="20.67" height="161" style="height: auto;" width="779" data-src="https://g.gflclan.com/linux-laptop-bigmode-11-12-17.png" src="https://gflclan.com/applications/core/interface/js/spacer.png">
</p>

<p>
	 
</p>

<p>
	These were the packets being distributed from Redis to the other edge servers, meaning one of our edge servers reported this response to our Redis server. It didn't take me long to figure out that our Chicago edge server was the one (and only) edge server reporting these malformed responses. I then ran <strong>uname -r</strong> on all of our edge servers; every edge server besides Chicago reported Linux kernel 5.4.0+, whereas Chicago was still running 4.18. The server was running such an outdated kernel because it is our oldest edge server to date and was still on Ubuntu 18.04. I decided to create a new edge server in Chicago, set it up, stop announcing/destroy the old server, and start announcing the new one. The new server runs a kernel similar to the others and Ubuntu 20.04. Things appear to be running okay so far, and the malformed A2S_INFO responses issue should be fixed.
</p>

<p>
	 
</p>

<p>
	This wouldn't have impacted only Surf Timer; it would have impacted all of our servers. Basically, when a server's A2S_INFO cache time expired and the next A2S_INFO request went through our old Chicago edge server, there was a high chance the response would be malformed. It didn't happen on every single request, but I'd say a good chunk of them.
</p>
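<p>
	For reference, a well-formed single-packet A2S_INFO response per the Valve server-query format starts with the 0xFFFFFFFF packet header followed by the 'I' (0x49) response byte, so a quick sanity check looks like this (the sample payloads are made up):
</p>

```python
# Sketch: sanity-check the start of an A2S_INFO response payload.
# A valid single-packet response begins 0xFF 0xFF 0xFF 0xFF 0x49 ('I').
def looks_like_a2s_info(payload: bytes) -> bool:
    return payload.startswith(b"\xff\xff\xff\xffI")

good = b"\xff\xff\xff\xffI\x11Some Server\x00"  # illustrative payload
print(looks_like_a2s_info(good))            # True
print(looks_like_a2s_info(b"\x00\x01junk"))  # False: malformed header
```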

<p>
	 
</p>

<p>
	If you start seeing malformed responses again, please let me know!
</p>

]]></description><guid isPermaLink="false">75153</guid><pubDate>Sat, 26 Jun 2021 16:24:08 +0000</pubDate></item><item><title>Public Discord Change Log</title><link>https://gflclan.com/topic/27495-public-discord-change-log/</link><description><![CDATA[<p>
	This will serve as a change log for features that were either added or removed from Discord.
</p>

<p>
	 
</p>

<p>
	13/05 - Removed several emotes that weren't being used, added MonkaS. Changed where Dyno sends his welcoming message, added some custom commands, and removed a way Manager+ could get away with breaking the rules.
</p>]]></description><guid isPermaLink="false">27495</guid><pubDate>Mon, 14 May 2018 01:43:17 +0000</pubDate></item><item><title>Downtime April 25th, 2021 (Yes It Was My Fault)</title><link>https://gflclan.com/topic/73062-downtime-april-25th-2021-yes-it-was-my-fault/</link><description><![CDATA[
<p>
	Hi everyone,
</p>

<p>
	 
</p>

<p>
	Around two to three hours ago, we started seeing many networking issues; players were slowly timing out, along with services our servers connect to.
</p>

<p>
	 
</p>

<p>
	When I first started investigating this, it appeared completely random packet types were being dropped somewhere. When I say random, I mean completely random. For example, I was able to connect to our game servers fine, but my <a href="https://developer.valvesoftware.com/wiki/Server_queries#A2S_PLAYER" rel="external nofollow">A2S_PLAYER</a> challenges/requests were being dropped somewhere since that query never responded for me. However, this query worked on some of our other servers in Dallas, TX. With that said, I performed packet captures for the failing services our game servers connect to, and we were seeing packets in both directions. This further proved it was random packet types being dropped somewhere.
</p>
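<p>
	For context, the A2S_PLAYER query mentioned above is a two-step handshake per the Valve server-query docs: the client first sends the 'U' request with a challenge of -1, the server answers with a challenge number, and the client repeats the request echoing it. Building that request is a one-liner:
</p>

```python
import struct

# Sketch: construct an A2S_PLAYER request packet.
# First request uses challenge -1; the real challenge from the server's
# reply is echoed in a second request.
def a2s_player_request(challenge: int = -1) -> bytes:
    # 0xFFFFFFFF packet header + 'U' (0x55) + little-endian int32 challenge
    return b"\xff\xff\xff\xffU" + struct.pack("<i", challenge)

initial = a2s_player_request()
print(initial.hex())  # ffffffff55ffffffff
```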

<p>
	 
</p>

<p>
	I then started talking to our hosting provider in Dallas, TX because I wasn't seeing any issues with our NYC servers. From what I saw, certain packets weren't making it back from our hosting provider that are sent directly from our Anycast ranges on our machines (<strong>92.119.148.0/24</strong> and <strong>185.240.217.0/24</strong>). We worked together on this for thirty minutes to an hour and then discovered the BGP session on our GSK POP server wasn't active. That session must be active in order to send traffic out as our Anycast ranges due to uRPF policies (basically, we have a switch that our GSK POP server and game server machines plug into; since the POP server announces the IP ranges via BGP, our game server machines are able to send traffic out directly).
</p>

<p>
	 
</p>

<p>
	Now, an hour or so before all of this happened, I fixed an issue with our Squad server <a href="https://gflclan.com/forums/topic/72981-lets-make-squad-great-again/?tab=comments#comment-341009" rel="">here</a>, and this was related to the Anycast network as well. At first, I suspected the changes I made to somehow be breaking stuff (even though they had nothing to do with the type of random packets being dropped). However, I enabled debugging within the XDP/BPF program, which prints to the Linux kernel trace pipe when a packet is dropped (along with the unsigned 32-bit integer in network byte order, port, etc.). The packets that weren't making it back weren't being filtered by the XDP/BPF program. Therefore, I figured things breaking soon after was just a coincidence. I also tried reverting the POP servers I was routing to back to their previous versions to see if anything changed. Unfortunately, I wasn't routing through the GSK POP server, so this didn't help me find the issue quicker.
</p>
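<p>
	Decoding the trace-pipe output described above (an unsigned 32-bit address in network byte order) back into a readable dotted-quad is one pack/unpack round trip; a quick sketch:
</p>

```python
import socket
import struct

# Sketch: convert the u32 network-byte-order address that the debug
# output logs back into dotted-quad form.
def u32_to_ip(n: int) -> str:
    return socket.inet_ntoa(struct.pack("!I", n))

# 0x5C = 92, 0x77 = 119, 0x94 = 148, 0x12 = 18
print(u32_to_ip(0x5C779412))  # 92.119.148.18
```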

<p>
	 
</p>

<p>
	Anyways, the packet processing/filtering software running on our POP servers is managed in a private GitHub repository; this is how I update everything. The reason the BGP session broke is that I forgot to whitelist the BGP neighbor IP address from GSK. The whitelist code wasn't pushed to the GitHub repository, which is why, when I reset the repo and updated it with the Squad fix, that code was lost. The reason I didn't push that code is that the repository is shared with 8 - 9 other server owners I've tried helping mitigate (D)DoS attacks for and gave them the filters I made for <abbr title="Games For Life">GFL</abbr>. I felt pushing code for the neighbor address like that wouldn't be secure. We had to do the same thing on our Vultr POP servers as well, but they use a link-local IP address (to save IP addresses, I guess), so pushing the Vultr neighbor whitelist code to the GitHub repository wasn't a big deal in my opinion. After I whitelisted the GSK neighbor address on the GSK POP server, the BGP session became active again.
</p>
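<p>
	The missing whitelist logic amounts to something like the following (purely illustrative; the addresses below are documentation placeholders, not our real neighbors, and the actual filter runs in XDP/BPF rather than Python):
</p>

```python
# Sketch: only allow BGP (TCP/179) traffic from configured neighbor IPs,
# letting everything else fall through to the normal filters.
BGP_PORT = 179
ALLOWED_NEIGHBORS = {"203.0.113.1", "169.254.0.2"}  # placeholder addresses

def allow_packet(src_ip: str, dst_port: int) -> bool:
    if dst_port != BGP_PORT:
        return True  # not BGP; other filters decide
    return src_ip in ALLOWED_NEIGHBORS

print(allow_packet("203.0.113.1", 179))   # True: whitelisted neighbor
print(allow_packet("198.51.100.9", 179))  # False: session never establishes
```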

<p>
	 
</p>

<p>
	When the BGP session goes down, all traffic out of our IP ranges is supposed to be dropped (I know this is also a single point of failure; I'm working on a solution for that as well, where we'd have fallback BGP sessions). The reason not all traffic was dropped is that, months ago, we had a lot of issues sending traffic out as our Anycast ranges because we weren't plugged into the same switch as our GSK POP server (some of you probably remember), and GSK tried manually accepting certain traffic from us. Therefore, some traffic was still allowed through even while the BGP session was down. Honestly, if all traffic had been cut off, I would probably have found the issue quicker since we could have easily narrowed it down. Instead, 10 - 20% of the traffic was coming through and random packets were being dropped on all connections.
</p>

<p>
	 
</p>

<p>
	Unfortunately, we don't have a dev environment for our Anycast network besides running a single forwarding server on my home server and testing out filters, etc. Something like this, we wouldn't have been able to test. This is something I will be considering in the future, but it'd cost us a bit of money, which is why I haven't done so yet.
</p>

<p>
	 
</p>

<p>
	In the end, this was my mistake and I do apologize (don't blame anybody besides me for this). I hope the above detailed my thought process regarding it. I'm going to look into having fallback BGP sessions to eliminate this single point of failure.
</p>

<p>
	 
</p>

<p>
	All servers have been up for around an hour or so now.
</p>

<p>
	 
</p>

<p>
	Thank you for your time and understanding.
</p>
]]></description><guid isPermaLink="false">73062</guid><pubDate>Mon, 26 Apr 2021 04:05:01 +0000</pubDate></item><item><title>Europe/UK Expansion Update (It's Here!) + New Machine To Replace GS18</title><link>https://gflclan.com/topic/72545-europeuk-expansion-update-its-here-new-machine-to-replace-gs18/</link><description><![CDATA[
<p>
	Hey everyone,
</p>

<p>
	 
</p>

<p>
	I just wanted to provide two big updates.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>Europe Update</strong></span>
</p>

<p>
	I know many of you have seen updates on this subject for months now, but I finally have something exciting to announce. Please read the last update <a href="https://gflclan.com/forums/topic/65069-europe-and-squad-expansion-plans-and-more/?tab=comments#comment-333193" rel="">here</a> if you haven't already.
</p>

<p>
	 
</p>

<p>
	The timeout issues we've been running into have been resolved by our hosting provider replacing a carrier in this location. I've confirmed this by staying on the server for 3+ hours, and I had others play for a while as well without any issues (it was normally timing me out for an hour after 5 - 10 minutes of playing time before). However, while this main issue is resolved, our hosting provider has stated they are making changes in the next two weeks. Therefore, I don't want to fully launch the Europe/UK division until this is sorted out. Things should work regardless, though, and I'm still getting details on what changes they'll be making to see if they would impact us.
</p>

<p>
	 
</p>

<p>
	At the very least, we will be able to set up/test our Europe/UK game servers in the next two weeks, and once things are sorted completely, we can launch (this will be server/division-specific, of course).
</p>

<p>
	 
</p>

<p>
	Anyways, as of right now, <strong>~4 - 6ms</strong> additional latency overhead is added to connections/packets to our UK servers because outgoing traffic is going through the closest POP to the UK game server machine instead of out directly. To resolve this issue, we'll need to fix the UK POP and the BGP session. Since our hosting provider has routers in the UK now, we will be able to do this easily and could possibly get this done by tonight (if not, definitely by the end of next week).
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>New Machine</strong></span>
</p>

<p>
	We've been having issues with our GS18 machine in regards to CPU clock speeds and whatnot, so I decided the best option to avoid unnecessary downtime (from trying to fix it within the BIOS) was to just get a replacement machine. With this, we were offered a machine with different specs, but at the same price. The processor in the new machine is stronger than the old one's and the RAM is also clocked faster.
</p>

<p>
	 
</p>

<p>
	The old machine (GS18) had an Intel i9-10900K processor with 10 cores and 20 threads. The new machine's processor is the AMD Ryzen 7 5800X with 8 cores and 16 threads. While we won't have as many cores to play with, the single-threaded performance on the AMD processor is faster, which matters most for most game servers. With that said, the old machine had 64 GB of RAM clocked at 2133 MHz, whereas the new machine has 32 GB of RAM clocked at 3600 MHz. Initially, we were hoping the RAM could be clocked at 4000 MHz, but even though the RAM states it's supported (and the manufacturer confirmed so), we ran into stability issues, and our hosting provider had reports of the same issue running 4000 MHz RAM before. Therefore, we've downclocked the RAM to 3600 MHz.
</p>

<p>
	 
</p>

<p>
	I am currently running a stress test that'll go on for 24+ hours to make sure we aren't running into stability issues. With that said, I've requested a KVM to be attached to the machine as well because I want to see if there's anything I can tune in the BIOS in regards to the CPU and memory performance before using the machine publicly. 
</p>

<p>
	 
</p>

<p>
	I set up a CS:S DM server with 32vs32 bots (close quarters), similar to what I did <a href="https://www.youtube.com/watch?v=jYLCmcVGqEw" rel="external nofollow">here</a> with GS14 (which ran the Intel i9-9900K), and GS19 (the new machine) handled it beautifully at 70 - 80% CPU.
</p>

<p>
	 
</p>

<div class="ipsEmbeddedVideo" contenteditable="false">
	<div>
		<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="113" id="ips_uid_7223_4" src="https://gflclan.com/applications/core/interface/index.html" width="200" data-embed-src="https://www.youtube.com/embed/H6sL5y1H97I?feature=oembed"></iframe>
	</div>
</div>

<p>
	 
</p>

<p>
	 
</p>

<p>
	It never deviated from <strong>66.(6)7</strong> FPS!
</p>

<p>
	 
</p>

<p>
	To conclude, that's it for now. I'm pretty excited about this, and once I'm done testing everything, I'll let everybody know.
</p>

<p>
	 
</p>

<p>
	Now it's time to start developing (D)DoS mitigation/prevention systems <span><span class="ipsEmoji">🙂</span> </span>
</p>
]]></description><guid isPermaLink="false">72545</guid><pubDate>Mon, 12 Apr 2021 00:02:24 +0000</pubDate></item><item><title>Europe And Squad Expansion Plans And More!</title><link>https://gflclan.com/topic/65069-europe-and-squad-expansion-plans-and-more/</link><description><![CDATA[
<p>
	Hey everyone,
</p>

<p>
	 
</p>

<p>
	I am making this thread to announce our plans to expand into Europe along with a game called Squad and some other small things. I've talked this over with <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/120-frenzy/?do=hovercard" data-mentionid="120" href="https://gflclan.com/profile/120-frenzy/" id="ips_uid_2130_3" rel="">@FrenZy</a> and they were in full support of this.
</p>

<p>
	 
</p>

<p>
	Before continuing, I just wanted to briefly go into why I'd like to oversee these expansions. As some of you know, a year and a half ago I took a complete back-end role in the community due to the amount of energy required for maintaining the Anycast network. While maintaining the network has been great, I feel I haven't been able to have much fun in <abbr title="Games For Life">GFL</abbr> since then because everything I do has been technical, and most people don't understand 99% of the things I'm doing.
</p>

<p>
	 
</p>

<p>
	When I saw the opportunities to expand into Europe and Squad, I felt it was a chance to do something fun again. While this won't decrease the amount of stress I have (in fact, it could increase it a tad), I do think it'll be fun, and I don't plan to play a hectic role. I simply want to oversee these expansions and ensure they're heading in the right direction. I will do my best to guide the people we assign to the servers and train them as well. I think this gives us a good opportunity to improve our internal training program within <abbr title="Games For Life">GFL</abbr> as well.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>Europe Expansion</strong></span>
</p>

<p>
	The first expansion I want to talk about is Europe. For those that do not know, we used to have many Europe servers back years ago and they all did very well for the most part. In fact, many game modes we've tried in the past that didn't succeed in the US did succeed in Europe. These include Call of Duty 4, CS:GO Zombie Mod, Killing Floor 2, and more. We had to cancel our Europe machine at the time due to financial problems we were having in 2016. <a href="https://oldsite.browser.tf/topic/6746-articlegfls-eu-expansion/" rel="external nofollow">Here's</a> the old thread I made regarding <abbr title="Games For Life">GFL</abbr>'s Europe expansion back in 2014.
</p>

<p>
	 
</p>

<p>
	Anyways, Europe arguably has better success rates for game servers than the US because, in my opinion, the US is oversaturated with game servers. Therefore, I feel this Europe expansion could potentially double <abbr title="Games For Life">GFL</abbr>'s population if it succeeds. With that said, a lot of our successful staff (present and past) have come from Europe, thanks to our Europe servers at the time and in general.
</p>

<p>
	 
</p>

<p>
	To start off, I'd like to set up servers for game modes that don't require many admins, because we will likely be short on staff at the start. However, with the success rates I'm expecting, I don't think they will be short-staffed for long. Obviously, we may go through a couple of rough patches with admins at first since we'll likely be looking for admins ASAP (e.g. we may pick admins that aren't great quality), but this will balance itself out after a month with good management (these are risks you have to take when expanding, and we've done this in the past).
</p>

<p>
	 
</p>

<p>
	Some servers I was thinking about expanding into that didn't require many admins were CS:GO Surf Timer, CS:GO Bunny Hop, GMod Prop Hunt, GMod Murder, and a few others. However, that list isn't certain since I haven't talked to our Division Leaders and Server Managers yet. If you have any game mode suggestions that won't require many admins, please feel free to reply with your suggestion(s) <span><img alt=":)" data-emoticon="true" height="20" src="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile.png" srcset="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile@2x.png 2x" title=":)" width="20"></span>
</p>

<p>
	 
</p>

<p>
	In regards to the technical aspect, I ordered a new /24 IP block today for our Europe expansion (this will cost us <strong>$50.00/m</strong> and will give us another 256 IPs to play with). We'll be using this with the Anycast network and at the moment, this block's geo location on <a href="http://ip2location.com/" rel="external nofollow">IP2Location</a> (what the Steam Master Server uses) is set to California, US. I will be putting in a request to have this changed to London, UK since that's where our physical servers will be hosted at in Europe. This change can take up to 30 days since <span ipsnoautolink="true">IP2Location</span> updates these on the first of every month.
</p>

<p>
	 
</p>

<p>
	I still think we'll be able to get decent population with the IP block's geo location set to California, US. However, we'll see more population when it's properly set to London, UK.
</p>

<p>
	 
</p>

<p>
	The physical game server machine we'll be ordering to start from GSK in London, UK will include the AMD Ryzen 5 3600 @ 3.6 GHz (4.2 GHz turbo), 32 GB of DDR4 RAM, and a 500 GB NVMe drive. This machine will be temporary and will cost us <strong>$84.99/m</strong>. If the servers we place on the machine work out, we'll be looking to get a permanent machine that'll include either the Intel i9-10900K or the AMD Ryzen 5 5600X when it releases (the <a href="https://www.tomshardware.com/news/amd-5600x-passmark-singlethread?fbclid=IwAR1HM3hlS04hLAtyGU4mqhb1_nIe5nKEY8YYX99XYz11D9i75JnXRdvbHjM" rel="external nofollow">fastest</a> processor in regards to single-threaded performance).
</p>

<p>
	 
</p>

<p>
	The machines should be ready in the next one - two weeks. With that said, I'll need to setup BGP sessions and provide LOAs for the new IP block to GSK and Vultr (our POP hosting providers) which should take around a week to implement (probably sooner, lol).
</p>

<p>
	 
</p>

<p>
	As of right now, I'd say the timeline for when the Europe expansion will be ready to go will be in the next <strong>3 - 4</strong> weeks or sooner. I'll probably throw some unofficial game servers on the machine to start to see if they receive any population and release them officially if they succeed. With that said, <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/120-frenzy/?do=hovercard" data-mentionid="120" href="https://gflclan.com/profile/120-frenzy/" rel="">@FrenZy</a> and the Division Leaders + Server Managers will be working with us to see which game modes we can try out in Europe.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>Squad Expansion</strong></span>
</p>

<p>
	The next expansion I want to talk about is Squad. We actually <a href="https://gflclan.com/work/projects/divisions/squad/start-the-squad-division-r33/" rel="">tried expanding</a> into Squad back in the Summer of 2018. While the server did get full a couple of times (80/80) which can be seen below, unfortunately the server didn't work out long-term.
</p>

<p>
	 
</p>

<p>
	<img alt="583D8D251ED54DB415E1DB21FBE860D02BDFAE63" class="ipsImage" data-ratio="56.25" height="562" style="height: auto;" width="1000" data-src="https://steamuserimages-a.akamaihd.net/ugc/941708627444244018/583D8D251ED54DB415E1DB21FBE860D02BDFAE63/" src="https://gflclan.com/applications/core/interface/js/spacer.png"></p>

<p>
	 
</p>

<p>
	I feel there wasn't enough time spent on populating the server, though, and we never ended up keeping the players we had around. This is easily fixable if we have dedicated players/management, which I'm expecting to have this time around.
</p>

<p>
	 
</p>

<p>
	Squad is a fairly easy game to manage since the servers are mostly vanilla, and the game itself recently had a full release (it was in early access beforehand). According to my community server population tracker <a href="https://pop.browser.tf/index.php" rel="external nofollow">website</a>, Squad has the best player-to-server ratio of any game tracked. Please take a look <a href="https://pop.browser.tf/index.php?appid=393380&amp;mode=1" rel="external nofollow">here</a> for example. Every Squad server with more than 0 players averaged <strong>51 </strong>players this past month. This is a great average and, compared to other games like CS:GO and TF2, very high! I believe we can offer perks as well, such as reserved slots for Supporters/VIPs and more.
</p>
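<p>
	For clarity, the "average of 51" figure is computed over populated servers only; a minimal sketch of that calculation (the sample player counts below are made up, not real tracker data):
</p>

```python
# Sketch: average player count across servers with more than 0 players.
def avg_populated(player_counts):
    populated = [c for c in player_counts if c > 0]
    return sum(populated) / len(populated) if populated else 0.0

print(avg_populated([80, 64, 0, 40, 0, 20]))  # 51.0
```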

<p>
	 
</p>

<p>
	The first thing I need to check is whether our license from 2018 is still valid for game servers in Squad; Squad requires a license for all servers. If it is not valid, I will email their team regarding this. The next thing I need to do is implement filters into Compressor so players routing through our Europe/Asia POPs will be able to connect to our Squad servers. I'm hoping the game supports a handshake sequence; if not, I can always make exceptions for Squad servers in our filters. I'm not expecting this to take long.
</p>

<p>
	 
</p>

<p>
	With that said, we will <strong>not </strong>be doing A2S_INFO <a href="https://docs.google.com/document/d/1-5f_TlJEki_qJYvZvlZ37QdsDambYIpEgPOUMP2RK9A/" rel="external nofollow">caching</a> with our Squad servers.
</p>

<p>
	 
</p>

<p>
	At the moment, I have an individual who I believe will be helping and heavily involved with this expansion, but since it isn't 100% confirmed yet (<a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/120-frenzy/?do=hovercard" data-mentionid="120" href="https://gflclan.com/profile/120-frenzy/" id="ips_uid_2130_9" rel="">@FrenZy</a> and this individual need to talk), I will not announce them until the next update.
</p>

<p>
	 
</p>

<p>
	The ETA on this expansion is probably within the next two - three weeks or sooner.
</p>

<p>
	 
</p>

<p>
	If you have any interest in helping with our Squad expansion, please let me know! We could use all the help we can get and considering we're a lot more popular now than mid-2018, I do believe we'll have a better chance at prepopulating our Squad servers.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>London POP Replacement</strong></span>
</p>

<p>
	Since we're getting a machine in London from GSK for our game servers, I figured this'll also be a good time to replace our Vultr London POP with a GSK machine. This was on the to-do list to begin with and will offer us better protection from (D)DoS attacks.
</p>

<p>
	 
</p>

<p>
	This process should be simple and straightforward, but when I'm implementing the change, I'll let everybody know more details.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>Conclusion</strong></span>
</p>

<p>
	That's really all for now. It's actually nice making a public-facing post again, I've missed it (I used to post threads like these weekly lol) <span><span><img alt=":)" data-emoticon="true" height="20" src="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile.png" srcset="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile@2x.png 2x" title=":)" width="20"></span></span>
</p>

<p>
	 
</p>

<p>
	I just want to make clear that I'll only be overseeing these two expansions and ensuring they're moving in the right direction. If you have any suggestions for other servers in <abbr title="Games For Life">GFL</abbr> or the community as a whole, I'd recommend reaching out to our Server Managers, Division Leaders, and Directors.
</p>

<p>
	 
</p>

<p>
	If you have any questions, please feel free to reply.
</p>

<p>
	 
</p>

<p>
	Thank you for reading!
</p>
]]></description><guid isPermaLink="false">65069</guid><pubDate>Fri, 30 Oct 2020 01:31:48 +0000</pubDate></item><item><title>Recent Performance Issues + Maintenance This Upcoming Tuesday</title><link>https://gflclan.com/topic/68884-recent-performance-issues-maintenance-this-upcoming-tuesday/</link><description><![CDATA[
<p>
	Hi everyone,
</p>

<p>
	 
</p>

<p>
	I just wanted to provide another update to the recent performance issues we've been suffering from. This is more of a follow-up to <a href="https://gflclan.com/forums/topic/68045-recent-surge-in-players-new-machine-and-eu-expansion-update/" rel="">this</a> update, but I wanted to make it a separate thread since we will be performing maintenance this upcoming <strong>Tuesday (January 19th)</strong>.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:16px;"><strong>Machines Overloaded + New Machine Issues</strong></span>
</p>

<p>
	Our game server machines have been overloaded recently, especially GS16, which hosts our Rust servers. We purchased another machine with the Intel i9-10900K and tried moving some servers to it for load-balancing last week. However, we ran into even more complicated issues regarding RPF policies (filters put in place to prevent spoofed traffic originating from our hosting provider's network). We ran into these issues because, with our current setup, we're technically "spoofing" our Anycast network when sending traffic from our game server machines directly (we aren't announcing the Anycast IP ranges via BGP on the game server machines themselves, and there's no real way to do that right now that doesn't conflict with our current Anycast setup). Resolving the issue is very tricky, and it has become a huge pain to deal with without purchasing a switch for <abbr title="Games For Life">GFL</abbr> (read below). Unfortunately, we had to revert the move last week after finding it too complicated to get things working with the new machine.
</p>

<p>
	 
</p>

<p>
	At first, we were hoping we'd be able to set up a BGP session on our game server machines and announce the specific Anycast IPs allocated to each machine without traffic from the Internet also routing to them. This would allow us to send out as the Anycast IPs since they're being announced, and RPF would succeed. Unfortunately, there's no way to prevent the Internet from routing to the game server machines in this case, and it wouldn't work anyway because we have an IPIP setup, meaning we expect incoming packets to the game server machine to be in IPIP format. If we were to go with a solution like this, we'd literally need something to encapsulate each incoming packet into IPIP, and even then, we'd also need to keep track of the internal IPs, which would just be a mess (so more overhead since we'd need to encapsulate incoming packets, bigger packet sizes by 20 bytes each, and a solution to map internal IPs properly). This isn't ideal in my opinion.
</p>
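<p>
	To illustrate where those 20 bytes per packet come from: IPIP (IP protocol 4, per RFC 2003) prepends a second, minimal IPv4 header to every packet. A sketch of that outer header (field values are illustrative, and the checksum is left at 0 for simplicity):
</p>

```python
import struct

# Sketch: build a minimal 20-byte outer IPv4 header for IPIP encapsulation.
def ipip_outer_header(src: bytes, dst: bytes, inner_len: int) -> bytes:
    version_ihl = (4 << 4) | 5      # IPv4, 5 x 32-bit words (no options)
    total_len = 20 + inner_len      # outer header + encapsulated packet
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, 0, total_len,
                       0, 0,        # identification, flags/fragment offset
                       64, 4, 0,    # TTL, protocol 4 = IPIP, checksum (0 here)
                       src, dst)

hdr = ipip_outer_header(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", 1400)
print(len(hdr))  # 20 -- the per-packet overhead mentioned above
```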

<p>
	 
</p>

<p>
	A solution our hosting provider (<a href="https://www.gameserverkings.com/" rel="external nofollow">GSK</a>) offered was purchasing a switch for <abbr title="Games For Life">GFL</abbr>; we'd then plug all of our game server machines into this switch along with our Dallas POP (which is with GSK). Since the Dallas POP is already announcing our Anycast range and is plugged into the same switch as our game server machines, this would allow us to send traffic out as our Anycast network directly without RPF policies dropping it. I believe this makes the most sense and is the best option for us. This way, we wouldn't have to worry about being unable to spoof as our Anycast network from our game server machines.
</p>

<p>
	 
</p>

<p>
	I've decided to accept this solution. With that being said, <strong>30 or so seconds</strong> of downtime will be required for each game server machine and our Dallas POP. The downtime would be network-related, assuming the move is smooth. I'd like to move our least popular machine to this switch first and ensure everything is working properly; afterwards, we will move the others. We're likely going to move the new game server machine and Dallas POP first to ensure things are running okay on them and that we're able to send traffic out as the Anycast IPs not associated with existing game servers (since those would fail anyway due to being directed towards a different switch). For game server machines in production use, I'm planning for the maintenance to start after my current job finishes on <strong>Tuesday (January 19th)</strong>, which is 4 PM CST. Therefore, the timeframe will most likely be <strong>4 PM - 7 PM CST</strong> on Tuesday. There's a high chance we'll finish sooner, but I wanted to provide a longer time period just in case. We'll likely try getting the Dallas POP and the new game server machine moved earlier on Tuesday if possible, so I can do testing on those while on breaks or at lunch at my current job.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:16px;"><strong>GS16 Heating Issues + Poor Performance</strong></span>
</p>

<p>
	Last night, I was informed that many of our modified Rust servers (running additional addons and whatnot) were experiencing poor performance, along with Vanilla from time to time. I started looking into this and immediately saw an issue: the processor was downclocking from <span style="color:#2ecc71;"><strong>4.9 GHz</strong></span> to <span style="color:#c0392b;"><strong>~4.4 - 4.6 GHz</strong></span>. This was a huge issue, and after trying to retrieve the CPU temperatures via the <strong>sensors</strong> command, the temperatures seemed mostly okay (~85 - 90C). However, our hosting provider confirmed this was due to thermal throttling, so those temperatures were most likely inaccurate.
</p>
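<p>
	Spotting this kind of downclock programmatically boils down to comparing per-core clock readings against the expected all-core clock; a minimal sketch (the sample readings below are made up to mirror the 4.9 GHz to ~4.4 - 4.6 GHz drop, not actual GS16 data):
</p>

```python
# Sketch: flag cores running noticeably below the expected all-core clock.
EXPECTED_MHZ = 4900
TOLERANCE = 100  # allow small normal fluctuations

core_mhz = [4893, 4601, 4487, 4552]  # illustrative readings

throttled = [i for i, mhz in enumerate(core_mhz)
             if mhz < EXPECTED_MHZ - TOLERANCE]
print(throttled)  # [1, 2, 3] -- cores running below 4.8 GHz
```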

<p>
	 
</p>

<p>
	I thought I had stress-tested this machine specifically before putting it into production use. However, after thinking about it, I believe I only stress-tested the GS14 and GS15 machines (GS15 having the same specs as GS16). The GS15 machine is performing fine, and we ran additional stress testing (without impacting performance) to ensure it can handle an equal amount of load to GS16 without thermal throttling. I believe we were trying to get GS16 up as fast as possible because of the performance issues we were having on our old machines, but I should have still stress-tested this machine before production use and I apologize for this.
</p>

<p>
	 
</p>

<p>
	Our hosting provider believes having the thermal paste on the CPU reapplied will resolve this issue, and I agree. Therefore, at some point (unknown when), we will most likely be taking out the machine and reapplying the thermal paste. This would result in <strong>30 minutes</strong> or so of downtime, assuming everything is smooth. I'm not sure yet whether I'd want to do stress-testing after the machine comes back up to ensure it can handle high loads without thermal throttling; that would unfortunately require more downtime, unless we're fine with server performance being heavily impacted during that time. I still have no ETA, but since the switch maintenance on Tuesday should allow us to move servers to the new machine without issues, I don't believe this is urgent. Moving servers to a new machine will lower the load on GS16, which will result in cooler CPU temperatures and the CPU being clocked at 4.9 GHz on all cores.
</p>

<p>
	 
</p>

<p>
	Once I have an update on the GS16 maintenance specifically, I'll let everybody know.
</p>

<p>
	 
</p>

<p>
	In the meantime, I believe we set our Rust 10x server's tick-rate to 20 instead of 30 to see if that helps with the performance issues at higher player counts for now. A lower tick-rate should result in less CPU consumption from the server itself; therefore, performance should improve at higher player counts even while the CPU is being throttled to 4.4 - 4.6 GHz. The drawback is that the server won't be as smooth as before (since it's calculating fewer ticks per second), but hopefully it isn't really noticeable. This is all a temporary solution anyway, unless nobody sees a difference running at 20 instead of 30 ticks per second.
</p>
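<p>
	For reference, a lower tick-rate on a Rust dedicated server is just a convar change; a launch line along these lines is what that looks like (everything here other than <strong>+server.tickrate</strong> is a placeholder for illustration, not our actual configuration):
</p>

```shell
# Hypothetical Rust dedicated server launch line - hostname, player
# count, and binary path are placeholders for illustration only.
./RustDedicated -batchmode \
    +server.hostname "GFL 10x" \
    +server.maxplayers 150 \
    +server.tickrate 20  # lowered from the previous 30 to reduce CPU load
```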

<p>
	 
</p>

<p>
	<span style="font-size:16px;"><strong>Conclusion</strong></span>
</p>

<p>
	I understand the frustration recently in regards to poor performance. We saw a huge spike in players recently, as stated in the other update post, and we did try moving servers to a new machine last week, but ran into complicated issues. The truth is, we have a very unique setup since we own the network itself, and while this comes with many pros, it also makes things a lot more complicated (e.g. having to find ways to obey RPF policies and whatnot). Thankfully, we'll be doing things a lot differently with the new packet processing/forwarding/filtering software we'll be making for the Anycast network, but that's still in development and quite complicated, as you can see in <a href="https://gflclan.com/forums/topic/68519-another-big-bifrost-update-really-close-to-finalized-plan/" rel="">this</a> post. Thankfully, the new software will make it so we don't have to obey RPF policies, since we won't be sending traffic out as our Anycast network directly.
</p>

<p>
	 
</p>

<p>
	Overall, we'll be having maintenance on our game server machines and Dallas POP on <strong>Tuesday, January 19th</strong> (2021). This will result in <strong>~30 seconds</strong> of network downtime for each game server machine and the Dallas POP, assuming all goes well. We're going to move our Dallas POP and the new machine to our new switch first to ensure things are working. Once things are confirmed working with the new machine and POP, we will be moving the rest of our game server machines (GS14, GS15, and GS16) to the switch from <strong>4 - 7 PM CST </strong>(after my current job ends for the day). Afterwards, <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1623-aurora/?do=hovercard" data-mentionid="1623" href="https://gflclan.com/profile/1623-aurora/" rel="">@<span style="color:#ff0000">Aurora</span></a> will be able to offload servers to the new machine without running into issues, assuming everything works correctly. This alone should resolve GS16's thermal throttling issues mentioned above since it'll lower the CPU load/temps, and therefore the CPU will be clocked at 4.9 GHz on all cores again.
</p>

<p>
	 
</p>

<p>
	We'll also want to schedule a time to take out our GS16 machine and reapply the thermal paste. This would likely result in at least 30 minutes of downtime, but isn't urgent right now since we're going to be moving servers after the Tuesday maintenance, which'll bring the machine's load down and have the CPU clocked at 4.9 GHz like normal. More updates on that will be announced at a later time.
</p>

<p>
	 
</p>

<p>
	Server performance should improve after the machines are load-balanced. If servers are still experiencing performance issues after that point, and it is confirmed both that the CPU is clocked at 4.9 GHz on all cores and that it isn't a network-related issue, the server itself is most likely unoptimized. At that point, the server's managers need to look through its addons for anything consuming too many CPU cycles, change the server's tick-rate, adjust the player count, and so on.
</p>
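<p>
	As a quick sanity check for the "clocked at 4.9 GHz on all cores" condition, the per-core clocks can be read straight from the kernel on Linux:
</p>

```shell
# Print each core's current clock speed; on GS16, healthy cores should
# sit around 4900 MHz while throttled ones drop to roughly 4400-4600 MHz.
grep "cpu MHz" /proc/cpuinfo
```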

<p>
	 
</p>

<p>
	I also want to state that this is in no way our hosting provider's (GSK's) fault. It's really good that they have RPF policies (not enough hosting providers do this, in my opinion); it's just unfortunate that our setup is unique and currently relies on spoofing traffic out as our Anycast network. I really appreciate GSK's help in regards to this as well, since a lot of hosting providers wouldn't allow you to spoof traffic at all.
</p>

<p>
	 
</p>

<p>
	If you have any questions, please let me know and once again, I apologize for the inconvenience recently. I hope this post clears things up and allows people to see what issues we're running into while trying to resolve everything. I'm trying to be as transparent as possible in regards to these issues and our solutions to them.
</p>

<p>
	 
</p>

<p>
	Thank you for your time.
</p>
]]></description><guid isPermaLink="false">68884</guid><pubDate>Sun, 17 Jan 2021 23:08:59 +0000</pubDate></item><item><title>Recent Surge In Players + New Machine And EU Expansion Update</title><link>https://gflclan.com/topic/68045-recent-surge-in-players-new-machine-and-eu-expansion-update/</link><description><![CDATA[
<p>
	Hey everyone and happy New Years!
</p>

<p>
	 
</p>

<p>
	This year represents <abbr title="Games For Life">GFL</abbr>'s 10 year anniversary (January 25th, 2011 is when it was founded) and we're already off to a great start!
</p>

<p>
	 
</p>

<p>
	I just wanted to provide a small update to everybody regarding two topics.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:16px;"><strong>Recent Surge Of Players + New Machine</strong></span>
</p>

<p>
	Since New Years, we've seen a big surge in players on <abbr title="Games For Life">GFL</abbr>'s servers. In fact, today, we beat our all-time record of players online on all of <abbr title="Games For Life">GFL</abbr>'s servers at the same time (thank you to everybody who made this possible!).
</p>

<p>
	 
</p>

<p>
	<img alt="3593-01-01-2021-mGLtnRIc.png" class="ipsImage" data-ratio="37.62" height="272" style="height: auto;" width="723" data-src="https://g.gflclan.com/3593-01-01-2021-mGLtnRIc.png" src="https://gflclan.com/applications/core/interface/js/spacer.png"></p>

<p>
	 
</p>

<p>
	Our previous all-time record was <strong>1299 </strong>players online at the same time back in mid-2015, and this included a TeamSpeak 3 server at 100+ users! More accomplishments can be found <a href="https://gflclan.com/forums/topic/3532-gfls-great-accomplishments/?page=2&amp;tab=comments#comment-323673" rel="">here</a> for those interested!
</p>

<p>
	 
</p>

<p>
	We've seen a 200 - 300+ peak player increase in just the past two days (mostly from our Rust servers). With this happening, we've noticed our machines becoming overloaded. While our servers run on great hardware (detailed <a href="https://gflclan.com/forums/topic/61638-big-game-server-machine-upgrades-and-more/" rel="">here</a>), we're starting to run into performance issues due to the combined CPU consumption of all of the game servers.
</p>

<p>
	 
</p>

<p>
	<a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1623-aurora/?do=hovercard" data-mentionid="1623" href="https://gflclan.com/profile/1623-aurora/" rel="">@<span style="color:#ff0000">Aurora</span></a> asked if we could purchase another US machine and I agreed. Therefore, I am going to be talking to <a href="https://gameserverkings.com/" rel="external nofollow">GSK</a> and requesting a new US machine (located in Dallas, TX like the rest of our US machines).
</p>

<p>
	 
</p>

<p>
	I believe I'm going to request another machine with the <a href="https://ark.intel.com/content/www/us/en/ark/products/199332/intel-core-i9-10900k-processor-20m-cache-up-to-5-30-ghz.html" rel="external nofollow">Intel i9-10900K</a> processor, just like our GS15 and GS16 machines, unless the newer AMD series is available with GSK; those chips have better single-threaded performance according to the <a href="https://www.cpubenchmark.net/singleThread.html" rel="external nofollow">benchmarks</a> I've seen (single-threaded performance matters the most for game servers, since most of them are largely single-threaded).
</p>

<p>
	 
</p>

<p>
	Once I have an update on this, I will let you know.
</p>

<p>
	 
</p>

<p>
	I also just wanted to apologize for the poor performance on the affected servers. I certainly wasn't expecting this big of a player surge so suddenly, but regardless, I'm going to try getting this new machine ASAP so we can mitigate the issues. In the meantime, we may try load-balancing servers between machines to help with performance/load.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:16px;"><strong>Small Update On Europe Expansion</strong></span>
</p>

<p>
	We're going to be expanding into Europe and the plan can be found <a href="https://gflclan.com/forums/topic/65069-europe-and-squad-expansion-plans-and-more/" rel="">here</a>! 
</p>

<p>
	 
</p>

<p>
	This process has taken longer than I was expecting and I apologize for the delay. On the bright side, <a href="http://ip2location.com" rel="external nofollow">IP2Location</a> has set our EU IP block's geo-location (<strong>185.240.217.0/24</strong>) to London, UK, which was a huge step in this process (the Valve Master Server uses geo-location, so the EU servers will show up quicker for EU players and whatnot). All we're waiting for now is the EU POP to be set up with GSK. The EU machine itself is set up and has been ready for some time. However, we're unable to send traffic out as our EU IP block (185.240.217.0/24) because the GSK network blocks these packets, since they aren't part of the RPF policies. We must announce the EU IP block on the GSK POP in order to resolve this issue. I am waiting for GSK to correct the BGP session issue we're running into at the moment, and GSK said it shouldn't take much longer.
</p>

<p>
	 
</p>

<p>
	I'm hoping we can get this going this weekend or next week at some point!
</p>

<p>
	 
</p>

<p>
	If you have any questions about the above, feel free to reply!
</p>

<p>
	 
</p>

<p>
	Thank you!
</p>
]]></description><guid isPermaLink="false">68045</guid><pubDate>Sat, 02 Jan 2021 06:43:35 +0000</pubDate></item><item><title>[12-14-20] Server Downtime</title><link>https://gflclan.com/topic/67219-12-14-20-server-downtime/</link><description><![CDATA[
<p>
	Hi everybody,
</p>

<p>
	 
</p>

<p>
	Around two hours ago, a majority of our servers experienced a full outage. Specifically, any servers on our <a href="https://gflclan.com/forums/topic/61638-big-game-server-machine-upgrades-and-more/" rel="">GSK machines</a>.
</p>

<p>
	 
</p>

<p>
	This was due to stricter RPF filtering GSK started enforcing earlier tonight. This basically made it so traffic originating as our Anycast IP range from our GSK machines was being dropped. With that said, our GSK POP wasn't able to send IPIP traffic to our game server machines and vice versa, which broke A2S_INFO responses, and players routing through our GSK POP couldn't send packets to any of our servers.
</p>
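<p>
	For those curious, Linux ships a per-host version of the same idea; you can check it on any Linux machine like so (GSK enforces this at the network level rather than with this sysctl, so this is just an analogue):
</p>

```shell
# Reverse path filtering mode: 0 = off, 1 = strict, 2 = loose. Strict
# mode drops packets whose source address isn't routable back out the
# interface they arrived on - the same idea behind GSK dropping our
# Anycast-sourced (spoofed) packets.
cat /proc/sys/net/ipv4/conf/all/rp_filter
```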

<p>
	 
</p>

<p>
	This change was brought to my attention a couple of weeks ago or so, but the preparations were more complex since, to my understanding, they involved additional BGP sessions. I wasn't aware they were making the change today. As a temporary solution, they've lifted restrictions for our specific Anycast IPs on our GSK machines and also allowed IPIP traffic to certain destinations from the GSK POP. The permanent solution will be setting up the needed BGP sessions to allow us to obey the RPF filters without issues. However, this will likely take more time and be more complex.
</p>

<p>
	 
</p>

<p>
	I'm going to talk to GSK and ensure better communication is established for things like this in the future, since we weren't aware of the specific date; the only communication we had was that it would probably happen in a couple of weeks, without BGP session details, etc.
</p>

<p>
	 
</p>

<p>
	I do apologize for the inconvenience.
</p>

<p>
	 
</p>

<p>
	If you have any questions, please feel free to reply.
</p>
]]></description><guid isPermaLink="false">67219</guid><pubDate>Tue, 15 Dec 2020 01:46:59 +0000</pubDate></item><item><title>Update On The Moderation Team</title><link>https://gflclan.com/topic/66402-update-on-the-moderation-team/</link><description><![CDATA[
<p>
	Hello everyone! The Council has been hard at work answering a lot of questions about the status of our Moderation Team, and would like to give everyone a little bit of an update on what’s been going on behind the scenes! First… answering the burning question, how will the team work?
</p>

<p>
	 
</p>

<p>
	<span style="font-size:20px;"><strong>How will the new team work?</strong></span>
</p>

<p>
	In the beginning, we only had a few Discord servers, and managing only a few Discords with our prior system worked very well for a while. But now we have many different Official <abbr title="Games For Life">GFL</abbr> Discord servers, the majority of them pertaining to a certain Division (e.g. CS:GO, GMod, Rust, TF2). 
</p>

<p>
	 
</p>

<p>
	While the structure of our Discords has drastically changed, our Moderation Team has not. The team remained mostly the same, with very few changes to its own structure. Each of our Official Discords will now have a dedicated team of 2+ moderators to tend to their server, and only their server. 
</p>

<p>
	 
</p>

<p>
	We will also have Senior Moderators, whose responsibility is to ensure that Moderators are behaving appropriately and following our Moderation Etiquette; they will also be moderating globally across both our Forums and Discords. This is expanded on below. 
</p>

<p>
	 
</p>

<p>
	<span style="font-size:20px;"><strong>Localization</strong></span>
</p>

<p>
	One of the main concerns of the previous team was that they “didn’t understand” the culture of all of the Discords they moderated, and that our team was "spread too thin". For this reason, we have decided to appoint moderators who know the communities they will be moderating. We’ve seen great success with the CWRP Moderators, and this is our effort to replicate that on a much larger scale. This also lowers the activity barrier, since moderators are no longer expected to be active in all of our Discords, allowing the community to have more mods where we need them. Instead of everyone being required to moderate all of our Official Discords, they will only be tasked with one.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:20px;"><strong>Filling in the Gaps</strong></span>
</p>

<p>
	A team as large as this will need to be overseen by not only its Team Leader, but also a few Senior Moderators. These Senior Moderators will be experienced members who assist the Moderation Team Leader in training and maintaining the team. Senior Moderators will be available to consult on situations that other Moderators are unsure of.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:20px;"><strong>What else has happened?</strong></span>
</p>

<ul><li>
		The former Moderation Team was relieved of their duties on 11/14/20 in favor of the new team we’re building. <a href="https://gflclan.com/forums/topic/65809-announcement-regarding-the-moderation-team/" ipsnoembed="true" rel="">https://gflclan.com/forums/topic/65809-announcement-regarding-the-moderation-team/</a>
	</li>
	<li>
		We understand the way this was handled was brash, and we have since issued an apology to all former Moderators for not notifying them beforehand of their team being dissolved. In the future, we will take care to inform staff members of situations such as this before announcing them publicly.
	</li>
	<li>
		The Global Discord ruleset and Discord Punishment guidelines have been updated and expanded upon.
	</li>
	<li>
		We released these modified rules/guidelines so that everyone can provide some feedback before implementation; the feedback thread will be up for two days and can be found here: <a href="https://gflclan.com/forums/topic/66399-feedback-moderation-rules-and-punishment-guidelines/" ipsnoembed="true" rel="">https://gflclan.com/forums/topic/66399-feedback-moderation-rules-and-punishment-guidelines/</a>
	</li>
	<li>
		The Moderator Etiquette guidelines for all of our Moderators have been updated, adding a list of punishments for all team members and guides on how to handle criticism.
	</li>
	<li>
		On 11/17/20, we began the process of selecting members for the new team. To do this we are working with Division staff to find people with interest, as well as looking over Moderation applications that already exist.
	</li>
	<li>
		On 11/26/20, with assistance from Division Leaders, I chose the divisional-specific Discord Moderators. The new Moderators in each division can be found at the bottom of this post.
	</li>
	<li>
		We are still working to appoint a new Team Leader.
	</li>
</ul><p>
	 
</p>

<p>
	<strong><span style="font-size:20px;">New Division-Specific Discord Moderators:</span></strong>
</p>

<p>
	<strong>CS:GO</strong>
</p>

<p>
	QueenKill1o1
</p>

<p>
	Quad
</p>

<p>
	ColaCanFan
</p>

<p>
	Infra
</p>

<p>
	SQUIRRELY
</p>

<p>
	 
</p>

<p>
	<strong>GMod</strong>
</p>

<p>
	Jarm
</p>

<p>
	Plum
</p>

<p>
	Aurora
</p>

<p>
	Duck
</p>

<p>
	 
</p>

<p>
	<strong>Rust</strong>
</p>

<p>
	Hatehim222
</p>

<p>
	MadTurkey
</p>

<p>
	Bue
</p>

<p>
	Ace7
</p>

<p>
	 
</p>

<p>
	<strong>TF2</strong>
</p>

<p>
	Lachaine
</p>

<p>
	Cuasi
</p>

<p>
	Scott
</p>

<p>
	 
</p>

<p>
	Some of the moderators will also be moderating Main Discord. 
</p>

<p>
	 
</p>

<p>
	Additionally, applications will finally be dealt with. Within the next few days, be prepared to see some new Forum Moderators, as well as some former Discord Moderators returning.
</p>
]]></description><guid isPermaLink="false">66402</guid><pubDate>Thu, 26 Nov 2020 22:21:13 +0000</pubDate></item><item><title>GS14 Downtime [10-14-20]</title><link>https://gflclan.com/topic/64373-gs14-downtime-10-14-20/</link><description><![CDATA[
<p>
	Hello everybody,
</p>

<p>
	 
</p>

<p>
	Around <strong>1:50 PM CST </strong>today, our GS14 machine, which is hosted with <a href="https://www.gameserverkings.com/" rel="external nofollow">GSK</a>, went down. I was at a GP <a href="https://gflclan.com/profile/1-roy/?status=12474&amp;type=status" rel="">appointment</a> at the time and <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1623-aurora/?do=hovercard" data-mentionid="1623" href="https://gflclan.com/profile/1623-aurora/" id="ips_uid_7918_3" rel="">@Aurora</a> was looking into the issue. Unfortunately, we didn't have KVM access. Therefore, we asked GSK to reboot the machine, which resulted in it coming back online. However, once we tried launching a game server, it went down again.
</p>

<p>
	 
</p>

<p>
	When I got home, GSK was in the process of getting a KVM attached to the machine. At 4:51 PM CST, the KVM was successfully attached and I was able to connect. However, I received a call and had to step away for another ten to twenty minutes, so investigation with the KVM didn't start until <strong>5 - 5:10 PM CST</strong>.
</p>

<p>
	 
</p>

<p>
	Last night, we updated our control panel, which included a daemon update that was applied to all of our game server machines. For some reason, the new configuration file on GS14 was setting our control panel daemon to use the entire host machine as its network instead of the separate bridge it creates internally (<strong>172.18.0.0/16</strong>). Basically, this exposes the host machine's interfaces within the game servers' Docker containers and network namespaces. Now, without our Anycast setup, this probably would have worked fine. However, exposing each container to all the interfaces on the machine defeats the purpose of isolating each container's network and would be less secure.
</p>

<p>
	 
</p>

<p>
	The reason the machine was going down is that we use <a href="https://github.com/GFLClan/Anycast-Endpoint/tree/master/dockergen" rel="external nofollow">Docker Gen</a> to deploy an IPIP tunnel for our Anycast network to use (since <a href="https://github.com/Dreae/compressor" rel="external nofollow">Compressor</a> forwards traffic from our POPs to our game servers as IPIP-formatted packets, we need to set up an IPIP tunnel/endpoint in each container with the remote host and internal IP). When running our control panel in "host" mode, <span ipsnoautolink="true">Docker Gen</span> would execute these commands on the main host interface instead of inside each game server's network namespace. Two of the <a href="https://github.com/GFLClan/Anycast-Endpoint/blob/master/dockergen/update-netns.sh.tmpl#L35" rel="external nofollow">commands</a> it ran deleted the old default gateway and replaced it with a new gateway pointing at the IPIP tunnel. Since this ran on the host, it removed the main machine's gateway (which, of course, pointed to GSK's router via the NIC) and replaced it with the IPIP tunnel (which wasn't even configured at the time). This resulted in the machine losing network connectivity, which couldn't be restored until we either restarted the machine, or attached a KVM, logged in that way, removed the old default gateway, and added the correct one (which is what we did).
</p>
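<p>
	For a rough idea of what Docker Gen is supposed to run inside each game server's network namespace (the namespace name and IPs below are placeholders, not our actual values):
</p>

```shell
# Create the IPIP endpoint inside the container's netns and point its
# default route at the tunnel. Run against the host by mistake (as in
# this incident), the last command wipes the machine's real gateway.
ip netns exec <container netns> ip tunnel add ipip0 mode ipip \
    local <internal IP> remote <POP IP>
ip netns exec <container netns> ip link set ipip0 up
ip netns exec <container netns> ip route replace default dev ipip0
```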

<p>
	 
</p>

<p>
	To resolve this issue, we first modified the control panel's daemon config to set up its own bridge, which is linked to the veth pairs our control panel adds when spinning up a game server within Docker containers. Afterwards, restart the daemon; if the machine is still offline, check the default route via the <strong>ip route</strong> command. If the default route is set to the IPIP tunnel Docker Gen was trying to set up, you will need to delete the default route via <strong>ip route delete default</strong> and add the original default route via <strong>ip route add default dev &lt;Main Interface&gt; via &lt;Gateway IP&gt;</strong>.
</p>
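<p>
	Put together, the recovery steps above look roughly like this over the KVM (the interface and gateway being whatever the machine normally uses):
</p>

```shell
# 1. See whether the default route was hijacked by the IPIP tunnel.
ip route show default
# 2. If so, remove it and restore the real gateway.
ip route delete default
ip route add default via <Gateway IP> dev <Main Interface>
```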

<p>
	 
</p>

<p>
	After this, we were able to spin up servers, but they weren't getting network connectivity. This was due to my IPIPDirect program <a href="https://github.com/gamemann/IPIPDirect-TC" rel="external nofollow">here</a>: when it first starts up (on machine boot), it's supposed to get the default gateway's MAC address <a href="https://github.com/gamemann/IPIPDirect-TC/blob/master/src/IPIPDirect_loader.c#L227" rel="external nofollow">here</a> and <a href="https://github.com/gamemann/IPIPDirect-TC/blob/master/src/IPIPDirect_loader.c#L46" rel="external nofollow">here</a>. I believe it started up while the default gateway was set to the IPIP tunnel (so the MAC address was probably <strong>00:00:00:00:00:00</strong> or something virtualized/fake). This resulted in outbound game server traffic not being routed out properly. Simply restarting the application via <strong>systemctl restart IPIPDirect</strong> resolved this issue, since it was then able to save the correct destination MAC address (the default gateway's), and all game servers were online again.
</p>

<p>
	 
</p>

<p>
	What's not yet clear is why the machine didn't go down after the panel update last night. My suspicion is we updated the panel and it started using the internal network initially. However, the config file was still specified to use the host network. Therefore, at some point, the panel restarted today and that's when everything went down. <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1623-aurora/?do=hovercard" data-mentionid="1623" href="https://gflclan.com/profile/1623-aurora/" id="ips_uid_7918_14" rel="">@Aurora</a> and I will most likely be digging through the logs to see if we can find anything that would have caused this.
</p>

<p>
	 
</p>

<p>
	With that said, we're going to be looking into purchasing our own KVMs for GS14 along with our future game server machines with GSK. This is just so we have our own dedicated KVMs we can access at any time and don't have to rely on GSK manually attaching a KVM (which can be in use by other clients, which is why it took longer than usual this time).
</p>

<p>
	 
</p>

<p>
	I understand this post is more technical, but figured I'd give this information for those interested and who knows, maybe if I'm not here, this'll help others like <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1623-aurora/?do=hovercard" data-mentionid="1623" href="https://gflclan.com/profile/1623-aurora/" id="ips_uid_7918_15" rel="">@Aurora</a> with what to look at and so on in the case this happens again.
</p>

<p>
	 
</p>

<p>
	I also wanted to say thank you to <a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1623-aurora/?do=hovercard" data-mentionid="1623" href="https://gflclan.com/profile/1623-aurora/" id="ips_uid_7918_16" rel="">@Aurora</a> for looking into this and contacting GSK while I was at my GP appointment <span><img alt=":)" data-emoticon="true" height="20" src="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile.png" srcset="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile@2x.png 2x" title=":)" width="20"> This would have taken longer if she hadn't started looking into it.</span>
</p>

<p>
	 
</p>

<p>
	I apologize for the inconvenience and thank you for your patience regarding this. If you have any questions, please feel free to reply!
</p>
]]></description><guid isPermaLink="false">64373</guid><pubDate>Wed, 14 Oct 2020 23:50:57 +0000</pubDate></item><item><title>Outages On Dallas POP and GS14 [September 23rd, 2020]</title><link>https://gflclan.com/topic/63368-outages-on-dallas-pop-and-gs14-september-23rd-2020/</link><description><![CDATA[
<p>
	Hey everyone,
</p>

<p>
	 
</p>

<p>
	This thread is being made for documentation purposes.
</p>

<p>
	 
</p>

<p>
	At <strong>10:16 AM CST</strong> today, our Dallas POP with GSK lost network connectivity. Since the POP was still announcing our IP range at the time, this resulted in players being briefly disconnected until our IP range stopped being announced. For the next thirty or so minutes, the POP was announcing the IP range on and off, which caused connectivity issues for users routing through our Dallas POP. This happened because the machine was still online and BIRD was running, which resulted in our range still being announced (BIRD only needs to communicate with the neighbor address), even though external traffic into the POP failed.
</p>
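<p>
	One way to avoid this failure mode is a simple data-plane watchdog that withdraws the announcement when forwarding is broken; a minimal sketch (the ping target and service name here are assumptions, not our actual setup):
</p>

```shell
# If the POP can't reach the outside world, stop BIRD so our IP range
# is withdrawn instead of black-holing traffic through a dead POP.
if ! ping -c 3 -W 2 <upstream IP> > /dev/null 2>&1; then
    systemctl stop bird
fi
```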

<p>
	 
</p>

<p>
	Players routing to a POP not directly affected by this incident may not have been able to see our game servers in the server browser during this time, but should have been able to connect to them directly. This is because our game servers in Texas route through the Dallas POP, which is used to cache A2S_INFO responses. Since this POP server went down, the A2S_INFO responses never made it back to the Redis server, so players never received A2S_INFO responses (and the servers wouldn't show up in the server browser). With Bifrost, we'll have a better method of handling A2S_INFO responses/caching, which should prevent this issue from occurring again if the main POP goes down.
</p>

<p>
	 
</p>

<p>
	The Dallas POP's issue was corrected 45 minutes or so after the initial incident. However, our GS14 machine (also with GSK) went down soon afterwards, resulting in downtime for game servers on that machine. It turns out it ran into similar network issues to the POP server.
</p>

<p>
	 
</p>

<p>
	GSK (Gameserverkings) released an official statement <a href="https://blog.gameserverkings.com/incidents/09-23-2020/" rel="external nofollow">here</a> stating what caused the outage.
</p>

<p>
	 
</p>

<p>
	I apologize for the inconvenience and thank you for understanding. I don't believe this'll happen again and I do appreciate the transparency from GSK on this situation.
</p>

<p>
	 
</p>

<p>
	If you have any questions or concerns, please feel free to reply.
</p>

<p>
	 
</p>

<p>
	Thank you.
</p>
]]></description><guid isPermaLink="false">63368</guid><pubDate>Wed, 23 Sep 2020 21:15:59 +0000</pubDate></item><item><title>Anycast Update - New POP Hosting Provider, Upgrades, and More!</title><link>https://gflclan.com/topic/60352-anycast-update-new-pop-hosting-provider-upgrades-and-more/</link><description><![CDATA[
<p>
	Hey everyone,
</p>

<p>
	 
</p>

<p>
	I just wanted to announce some exciting news regarding our Anycast network!
</p>

<p>
	 
</p>

<p>
	As some of you probably know, I have plans to upgrade our Anycast network. For more information regarding the expansion itself along with what we're planning to do to mitigate (D)DoS attacks, I'd recommend reading <a href="https://gflclan.com/forums/topic/59681-anycast-expansion-plansapproaches/" rel="">this</a> and <a href="https://gflclan.com/forums/topic/57006-anycast-filtering-notes-ddos-attacks-plans-to-mitigate-and-more/" rel="">this</a> thread. The goal of this expansion is to strengthen our (D)DoS protection and acquire better hosting providers which'll result in:
</p>

<p>
	 
</p>

<ul><li>
		Better routing.
	</li>
	<li>
		More network capacity to mitigate higher volume (D)DoS attacks.
	</li>
	<li>
		Better support.
	</li>
	<li>
		And more!
	</li>
</ul><p>
	 
</p>

<p>
	I will now discuss the updates themselves individually.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>New Hosting Provider and Dallas POP (GSK)</strong></span>
</p>

<p>
	Recently, I've come into contact with the owner of <a href="https://www.gameserverkings.com/" rel="external nofollow">Gameserverkings</a> (GSK), Renual, and we've been talking a lot over the past couple of weeks. At first, we were talking about Anycast and (D)DoS protection in general. However, Renual felt we'd benefit from having them as part of our network, and I agreed.
</p>

<p>
	 
</p>

<p>
	After talking to Renual for the last two weeks, it was pretty obvious he knew what he was doing, and he has given me <strong>A TON</strong> of useful information in regards to (D)DoS mitigation/attacks, Anycast, BGP, routing, ISPs, fiber optics, network equipment, and more. He also mentioned his plans for GSK and I'm honestly really excited for their future! Taking into account his mindset, talents, and goals for GSK, I would not be surprised if they became a very big hosting provider in the future. I'm honestly surprised I hadn't heard of them before, but I don't think many gaming communities in the Source Engine world know of them. That'll probably be changing, though.
</p>

<p>
	 
</p>

<p>
	Renual offered us a trial run for a machine with the following specs:
</p>

<p>
	 
</p>

<ul><li>
		AMD Ryzen 5 3600 @ 3.6 GHz (4.2 GHz turbo).
	</li>
	<li>
		32 GBs of DDR4 RAM.
	</li>
	<li>
		500 GBs NVMe.
	</li>
	<li>
		$75.00/m (we aren't paying for this yet until the trial run is over and if we are satisfied).
	</li>
</ul><p>
	 
</p>

<p>
	These specs are probably overkill for our current setup, but I want to give this a shot, and if things run okay, I think I'm going to take a quality-over-quantity approach with the network and use GSK in our primary locations such as Dallas, TX and London, UK. They also have VPSs coming up, so if need be, we can downgrade to a VPS, which should still be powerful since it'll include dedicated cores IIRC.
</p>

<p>
	 
</p>

<p>
	They also plan to expand into other locations in the future that includes NYC and Chicago. Once they have these locations available, we'll be likely purchasing machines or VPSs in those locations as well.
</p>

<p>
	 
</p>

<p>
	GSK's DDoS protection is great, and Renual has shown me proof of them being able to mitigate larger DDoS attacks (e.g. attacks that exceed 100 million PPS). In fact, a majority of the malicious traffic that goes through this new POP will more than likely be filtered by GSK's equipment. However, with BGP Flowspec support (which will be discussed below), any attacks Bifrost detects can be pushed to the upstreams. Therefore, GSK's upstreams and equipment could take care of those detected attacks.
</p>

<p>
	 
</p>

<p>
	Another pro is that, if need be, we can order special equipment (e.g. switches and so on) and get colocation space within GSK. I don't personally believe we'll need that for now, but it's something to keep in mind for the future. We may also request a replacement of any part in our server (e.g. if we wanted to, we could upgrade our server to a 10 or 40 Gb NIC, and GSK has 10+ Gb links available for an extra ~$50/m).
</p>

<p>
	 
</p>

<p>
	Finally, I feel having direct contact with Renual will be beneficial to us in the case an issue arises or just generally speaking.
</p>

<p>
	 
</p>

<p>
	Anyways, the setup of the new POP hasn't been as smooth as I'd like, but that was to be expected. Since GSK's filtering is pretty strict (for protection reasons), we had some issues getting the POP up and working with Compressor and so on. Thankfully, all of that appears to be fixed now, and Renual was very quick to resolve the filtering issues. I started announcing the POP this morning and everything appears to be working okay so far. I did have to make some changes to Compressor (including the new filters), because the function we use to get the CPU count was actually returning 32 cores instead of 12. So when we looped through CPUs on our handshake BPF map, everything after CPU #12 was garbage data, which was resulting in unexpected behavior. Probably something worth mentioning to the Linux kernel devs, haha.
</p>

<p>
	 
</p>

<p>
	If you see any issues connecting to our game servers, please let me know!
</p>

<p>
	 
</p>

<p>
	Also, here's a cool pic of our current Dallas POP server in the GSK rack <span><img alt=":)" data-emoticon="true" height="20" src="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile.png" srcset="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile@2x.png 2x" title=":)" width="20"></span>
</p>

<p>
	 
</p>

<p>
	<img alt="Na1r0wcfBC.jpg" class="ipsImage" data-ratio="75.00" height="750" style="height: auto;" width="1000" data-src="https://g.gflclan.com/Na1r0wcfBC.jpg" src="https://gflclan.com/applications/core/interface/js/spacer.png"></p>

<p>
	 
</p>

<p>
	P.S. These servers are liquid-cooled <span><img alt=":)" data-emoticon="true" height="20" src="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile.png" srcset="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile@2x.png 2x" title=":)" width="20"></span>
</p>

<p>
	 
</p>

<p>
	<span>And I just wanted to give a big shout out to Renual for being so helpful recently!</span>
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>BGP Flowspec Support In Bifrost</strong></span>
</p>

<p>
	I just wanted to briefly go over our plans to implement BGP Flowspec support in Bifrost. BGP Flowspec allows us to push filtering policies to our upstreams via BGP. This means that if Bifrost detects an attack, we can push a policy upstream and have the upstreams filter the attack instead of the POP server/Bifrost. This results in less load on the POP server, and if an attack exceeded our POP's link capacity, it would stop the attack from saturating the link since the upstreams would filter the malicious traffic instead.
</p>

<p>
	 
</p>

<p>
	At the moment, I believe GSK is the only one of our providers that supports this, but given we'll likely be expanding with them in the future, it's worth implementing in my opinion. Unfortunately, you can't push policies that include payload matching via BGP Flowspec; the policies operate more like an ACL. So if we're getting hit by a larger (D)DoS attack that gets through GSK's filters, but the malicious packets' source IPs aren't spoofed randomly each time, we can push a policy via BGP Flowspec to filter those source IPs, and GSK's upstreams will accept that policy along with GSK's equipment.
</p>
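<p>
	To make the ACL comparison concrete, here's a rough sketch of what such a rule could look like in BIRD 2's Flowspec syntax. This is a hypothetical fragment with a made-up attacker IP, not our production config; the extended community shown encodes "traffic-rate 0", i.e. discard:
</p>

```text
# Hypothetical BIRD 2 fragment -- not our real config.
flow4 table flowtab4;

protocol static flowspec_rules {
	flow4 { table flowtab4; };

	# Drop UDP traffic from one non-spoofed attacking source.
	route flow4 {
		dst 92.119.148.0/24;
		src 203.0.113.45/32;
		proto = 17;
	} {
		bgp_ext_community.add((generic, 0x80060000, 0x00000000));
	};
}
```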

<p>
	 
</p>

<p>
	I believe we'll also be able to push policies matching on TTLs (time-to-live) and more, which gives us a bit more control than a typical ACL. Sadly, payload matching just isn't supported within the policies themselves.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>DPDK Over XDP</strong></span>
</p>

<p>
	Another subject I wanted to briefly go over is the possibility of using <a href="https://www.dpdk.org/" rel="external nofollow">DPDK</a> instead of XDP to process/drop packets in Bifrost. XDP is still great and has its own pros over DPDK (e.g. being able to use kernel functionality, and being easier to learn and work with at times since the network stack is already implemented), but DPDK is still technically faster than XDP when dropping packets. If you want some benchmarks, feel free to look at <a href="http://vger.kernel.org/lpc_net2018_talks/presentation-lpc2018-xdp-future.pdf" rel="external nofollow">this</a>. Note that these are from 2018, but I believe they're still relevant today (I've talked to a few experienced people about DPDK vs. XDP recently, and they all stated DPDK still has an edge over XDP). I do plan on making my own XDP vs. DPDK benchmarks once I purchase my home servers, but that'll take some time.
</p>

<p>
	 
</p>

<p>
	The main cons with DPDK are that we'd need to implement the network stack ourselves and that it's a lot more complicated. For example, regarding the first con, we'd need to write our own functionality to handle ARP packets in DPDK.
</p>
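<p>
	To give a feel for what "handling ARP ourselves" involves: DPDK hands you raw Ethernet frames, so even answering a who-has query is our job. Below is a freestanding sketch in plain C (hand-rolled structs and hypothetical names rather than DPDK's actual rte_* types) of turning an ARP request into a reply:
</p>

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Minimal ARP payload layout for Ethernet/IPv4 (no DPDK types here). */
struct arp_pkt {
	uint16_t htype, ptype;
	uint8_t  hlen, plen;
	uint16_t oper;            /* 1 = request, 2 = reply */
	uint8_t  sha[6];          /* sender MAC */
	uint8_t  spa[4];          /* sender IPv4 */
	uint8_t  tha[6];          /* target MAC */
	uint8_t  tpa[4];          /* target IPv4 */
} __attribute__((packed));

/* Rewrite a request for our_ip into a reply in place; return 0 if not for us. */
int arp_make_reply(struct arp_pkt *p, const uint8_t our_mac[6],
                   const uint8_t our_ip[4])
{
	if (ntohs(p->oper) != 1 || memcmp(p->tpa, our_ip, 4) != 0)
		return 0;

	p->oper = htons(2);
	memcpy(p->tha, p->sha, 6);   /* the reply goes back to the asker */
	memcpy(p->tpa, p->spa, 4);
	memcpy(p->sha, our_mac, 6);  /* we are the sender now */
	memcpy(p->spa, our_ip, 4);
	return 1;
}
```

In real DPDK code this would sit in the receive loop with the Ethernet header rewritten to match, which is exactly the kind of plumbing the kernel already does for you under XDP.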

<p>
	 
</p>

<p>
	With that said, I've tried looking into DPDK in the past, but their documentation is <strong>VERY</strong> long and time-consuming. Take a look <a href="https://doc.dpdk.org/guides/prog_guide/intro.html" rel="external nofollow">here</a> if you're interested; the third section alone feels like a book! There may be easier ways to get familiar with DPDK, though, so that's something I'll be looking into.
</p>

<p>
	 
</p>

<p>
	Overall, while DPDK is complicated and time-consuming, I do believe it's something we should try to do with Bifrost because it'd give us more control and it's also probably one of the fastest libraries to use when dropping (malicious) packets.
</p>

<p>
	 
</p>

<p>
	<span style="font-size:18px;"><strong>HellsGamers And Our Anycast Network</strong></span>
</p>

<p>
	Over the past two months or so, I offered <a href="https://hellsgamers.com/" rel="external nofollow">HG</a> a few Anycast IPs from <abbr title="Games For Life">GFL</abbr>'s IP block (92.119.148.0/24) for a trial run to see if they were interested in utilizing our Anycast network. A couple of weeks ago, after the trial run, they told me they were indeed interested, and they decided to purchase their own /24 IPv4 block, which is now pointed at our ASN. All the LOAs and BGP sessions are pretty much set up at this point; the only things left to do are reconfigure BIRD (our BGP daemon) on each of our POP servers to announce their IPv4 block, and add IP allocations to Compressor for them to use. I'm hoping to get this completed by early next week.
</p>
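<p>
	For anyone wondering what "reconfigure BIRD to announce their IPv4 block" means in practice: originating a prefix mostly comes down to a static route plus an export filter on the BGP sessions. A rough BIRD 2 sketch with a placeholder prefix and ASNs (not HG's actual block or our real sessions):
</p>

```text
# Hypothetical BIRD 2 fragment -- placeholder prefix and ASNs.
protocol static originate_prefixes {
	ipv4;
	route 198.51.100.0/24 blackhole;   # originate the new /24
}

protocol bgp upstream_pop {
	local as 64500;
	neighbor 192.0.2.1 as 64501;
	ipv4 {
		export where net ~ [ 92.119.148.0/24, 198.51.100.0/24 ];
	};
}
```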

<p>
	 
</p>

<p>
	HG will also be paying us monthly for this usage. I will not disclose the cost, but I believe this'll help with our Anycast expansion and allow us to spend more on it which'll result in better DDoS protection, routing, network capacity, and more!
</p>

<p>
	 
</p>

<p>
	As of right now, HG will be the only community outside of <abbr title="Games For Life">GFL</abbr> utilizing our network, and I don't have any plans at the moment to accept other gaming communities. Taking on the responsibility of more communities would add a lot more stress onto me, and I'm already very stressed, to be honest. However, it's something I'm willing to consider once the network is mostly expanded and my life situation starts improving, so I'll be able to put in the work without feeling overwhelmed.
</p>

<p>
	 
</p>

<p>
	Overall, these changes are very exciting in my opinion! I know a lot of you won't understand much of the above due to the technical nature, but the things we're accomplishing with the network are very impressive, especially for a gaming community. We still have a lot more to do, but I'm confident we'll get there <span><img alt=":)" data-emoticon="true" height="20" src="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile.png" srcset="//gflusercontent.gflclan.com/file/forums-prod/emoticons/smile@2x.png 2x" title=":)" width="20"></span>
</p>

<p>
	 
</p>

<p>
	<span>If you have any questions, please feel free to reply.</span>
</p>

<p>
	 
</p>

<p>
	<span>Thank you!</span>
</p>
]]></description><guid isPermaLink="false">60352</guid><pubDate>Sat, 01 Aug 2020 17:24:19 +0000</pubDate></item><item><title>Anycast Network Issues</title><link>https://gflclan.com/topic/60208-anycast-network-issues/</link><description><![CDATA[
<p>
	Hey everyone,
</p>

<p>
	 
</p>

<p>
	Tonight, our Anycast network experienced a major issue affecting our servers located in Dallas, TX under Nexril. A couple of hours ago, I started announcing a new POP server in Dallas, TX, which is part of the major Anycast expansion I'll be making a thread about tomorrow or whenever this issue is fixed.
</p>

<p>
	 
</p>

<p>
	What started occurring was that none of the game servers under Nexril were responding to A2S_INFO queries, which resulted in the servers being thrown off the server browser. At first, I thought this was due to the AF_XDP part of Compressor breaking on the new POP server (a common reason for A2S_INFO not working, and something we've run into in the past). However, after disabling the POP server and confirming the BGP session was inactive, the issue continued to occur. I confirmed these machines were routing to one of our Vultr POPs and not the new one, so I was honestly stumped on what the issue could be.
</p>

<p>
	 
</p>

<p>
	After troubleshooting, I discovered the servers went down completely for the Nexril machines when I disabled my IPIP Direct <a href="https://github.com/gamemann/IPIPDirect-TC" rel="external nofollow">program</a>. This meant none of the game server packets being sent back from the Nexril machines to the closest POP server (our Vultr Dallas POP) were making it back. It also explains why A2S_INFO broke initially, because my IPIP Direct program is configured to send A2S_INFO responses back through the POP for caching purposes (by <a href="https://github.com/gamemann/IPIPDirect-TC/blob/master/src/IPIPDirect_kern.c#L18" rel="external nofollow">this</a> line).
</p>

<p>
	 
</p>

<p>
	This had me confused, because an MTR to the Anycast network from the Nexril machines was receiving ICMP replies fine. With that said, I tried disabling BGP on the Vultr Dallas POP so the machines routed to the Chicago POP instead, but the same issue occurred. At this point I needed to dig even deeper, and unfortunately, since XDP (which <a href="https://github.com/Dreae/compressor" rel="external nofollow">Compressor</a>, our packet-processing software, uses) isn't debug-friendly, I had to stop Compressor on our Dallas POP multiple times after announcing it to the network again (so the machines routed through this POP again, etc.).
</p>

<p>
	 
</p>

<p>
	While doing this, I used my packet flooding tool <a href="https://github.com/gamemann/Packet-Flooder" rel="external nofollow">here</a> to continuously send UDP and TCP packets every second from the Nexril machine back to the Anycast network (the closest POP, Dallas, TX). With Compressor disabled and a TCPDump running on the Dallas POP, it WAS able to see the standard UDP/TCP packets. However, when I also disabled the IPIPDirect program on the Nexril machine and ran a TCPDump on the Dallas POP for any packet coming from the game server machine, it ONLY saw those standard TCP/UDP packets. It <strong>DID NOT</strong> see the IPIP packets being sent back. I also ran a TCPDump on the game server machine capturing all the IPIP packets it was trying to send back to the POP, and there were many of them. None of them made it to the Dallas POP server according to TCPDump.
</p>
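<p>
	For anyone following along: "IPIP packets" are just IPv4 packets whose protocol field is 4, carrying a second, full IPv4 packet as the payload, which is also why TCPDump can capture them separately from normal UDP/TCP traffic (the capture filter is "ip proto 4"). A minimal, hypothetical check in C:
</p>

```c
#include <stdint.h>
#include <stddef.h>

/* Return 1 if the buffer starts with an IPv4 header whose payload is
 * another IPv4 packet (protocol 4 = IP-in-IP), as our tunnels use. */
int is_ipip(const uint8_t *pkt, size_t len)
{
	if (len < 20)
		return 0;              /* too short for an IPv4 header */
	if ((pkt[0] >> 4) != 4)
		return 0;              /* not IPv4 */
	return pkt[9] == 4;        /* protocol field: 4 = IPIP */
}
```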

<p>
	 
</p>

<p>
	It appears one of the routers/hops between the Nexril machines and the Dallas POP is dropping/filtering our IPIP packets. I honestly haven't seen such a strange issue on our Anycast network before.
</p>

<p>
	 
</p>

<p>
	I've opened up a ticket with Nexril and also recorded a video of all my troubleshooting showing there's clearly an issue with IPIP packets. I'm waiting for a response from them.
</p>

<p>
	 
</p>

<p>
	In the meantime, to get the servers up and running, I modified the IPIPDirect program to include A2S_INFO responses so it sends those out directly instead of through the POP. A2S_INFO caching will be disabled because of this, but once we and Nexril figure out the issue, I'll rebuild the IPIPDirect program to send A2S_INFO responses back through the POP again for caching purposes.
</p>

<p>
	 
</p>

<p>
	I do want to apologize for the inconvenience and downtime associated with our Dallas servers along with players routing through our Dallas POP since I needed to take that down at times for debugging purposes.
</p>

<p>
	 
</p>

<p>
	Once I have an update, I will let you know.
</p>

<p>
	 
</p>

<p>
	Thank you.
</p>
]]></description><guid isPermaLink="false">60208</guid><pubDate>Thu, 30 Jul 2020 05:32:16 +0000</pubDate></item><item><title><![CDATA[Council & the Board of Directors]]></title><link>https://gflclan.com/topic/59730-council-the-board-of-directors/</link><description><![CDATA[<p>
	This will serve as an interim document that describes how the Council and the Board of Directors work until this can be written into an extensive rank definition.
</p>

<p>
	<br /><span style="font-size:20px;"><strong>Council</strong></span><br />
	Council is a new decision mechanism for high-level decisions. It will resemble a board for <abbr title="Games For Life">GFL</abbr> that sets the direction and makes the final calls on large decisions. The number of Council members will vary, but an odd number will be preferred since that makes voting easier.
</p>

<p>
	 
</p>

<p>
	The Council in its entirety will not be responsible for daily operations. These will be handled by the Board of Directors, who will always have spots in Council as well. The chairman of the board will be called the Executive Director, who will also serve as the chairman of Council. This board will be elaborated on in the next section.
</p>

<p>
	 
</p>

<p>
	Each well-established division in <abbr title="Games For Life">GFL</abbr> will have a representative on the Council. Whether a division is well-established or not will be decided by the Council. This will be elaborated in an upcoming section.
</p>

<p>
	 
</p>

<p>
	One or more non-staff members of the community will also take part in Council as Community Representatives.
</p>

<p>
	 
</p>

<p>
	Each Council member has equal say and a vote for each decision. If a vote is tied, the Executive Director makes the final call. The quorum of the Council will be half of the members.
</p>

<p>
	<br /><span style="font-size:20px;"><strong>Executive Director</strong></span><br />
	The Executive Director is the chairman of the Council and the Board of Directors; this might sound very powerful at first, but it is important to remember that everyone has an equal say in the Council. The Executive Director just has more responsibilities to the Council and the Board of Directors. To list some of these:
</p>

<ul><li>
		Ensure that the Council is fulfilling their duties and decisions are being responsibly made at an acceptable pace.
	</li>
	<li>
		Ensure that Directors are fulfilling their duties during daily operations.
	</li>
	<li>
		During daily operations, the Executive Director will handle the events that “slip through the cracks”.
	</li>
</ul><p>
	<br /><span style="font-size:20px;"><strong>Board of Directors</strong></span><br />
	The Board of Directors are responsible for the daily operations of <abbr title="Games For Life">GFL</abbr> under the direction of the Council. They are allowed to make decisions within their area without going to the entire Council, but they should never go against the direction set by the Council. Major decisions will be discussed with the Council beforehand.
</p>

<p>
	 
</p>

<p>
	A Director (excluding the Executive Director) is responsible for a specific area in <abbr title="Games For Life">GFL</abbr>. The following areas exist at the moment:
</p>

<ul><li>
		<strong>Communication</strong><br />
		The Director of Communication is responsible for the daily operations concerned with communication. Primarily this is the Discord servers and the forums.
	</li>
	<li>
		<strong>Tech</strong><br />
		The Director of Tech is the leader and spokesperson for the TAs. Furthermore, the Director will be responsible for the daily operations in the backend systems such as game servers, the network, web services, and so on.
	</li>
	<li>
		<strong>Divisions</strong><br />
		The Director of Divisions is the leader and spokesperson for the DLs. This entails ensuring the DLs are fulfilling their duties. Only the Council in its entirety can promote and demote DLs.
	</li>
	<li>
		<strong>Teams</strong><br />
		The Director of Teams is the leader and spokesperson for the Team Leaders. This entails ensuring that TLs are fulfilling their duties. The Director will be the interim leader of leaderless teams. The Council in its entirety must approve new teams or the removal of existing teams.
	</li>
</ul><p>
	 
</p>

<p>
	Areas will be added and removed as needed.
</p>

<p>
	<br /><span style="font-size:20px;"><strong>Division Representatives</strong></span><br />
	In order to ensure that our divisions get proper representation in the Council, each well-established division (WED) has a spot in the Council as a Division Representative (DR). Each WED appoints a <abbr title="Division Leader">DL</abbr> to be the DR in the Council. The Director of Divisions will act as their division’s representative if applicable.
</p>

<p>
	 
</p>

<p>
	The following divisions are considered well-established at the moment:
</p>

<ul><li>
		CS:GO
	</li>
	<li>
		GMOD
	</li>
	<li>
		Rust
	</li>
</ul><p>
	 
</p>

<p>
	<span style="font-size:20px;"><strong>Community Representative</strong></span><br />
	One or more non-staff members of the community will serve as Community Representatives in the Council. These are ideally people who often play on <abbr title="Games For Life">GFL</abbr>’s servers or similar. These representatives will be elected by the community and serve as a member of Council for six months.
</p>

<p>
	 
</p>

<p>
	You must be recommended by at least one Council member before being able to run for Community Representative.
</p>

<p>
	 
</p>]]></description><guid isPermaLink="false">59730</guid><pubDate>Mon, 20 Jul 2020 20:02:29 +0000</pubDate></item><item><title>Recent Performance Issues On GS12</title><link>https://gflclan.com/topic/59096-recent-performance-issues-on-gs12/</link><description><![CDATA[
<p>
	Hey everyone,
</p>

<p>
	 
</p>

<p>
	I just wanted to briefly address recent performance issues with our GS12 machine. Unfortunately, the machine's processor down-clocks a lot more than I expected at higher load. As of right now, it clocks to 4.3 GHz on all cores at ~50 - 60% load, because the processor gets too hot running all cores at &gt; 4.3 GHz and needs to down-clock. At 20 - 30% load, we see 4.6 - 4.9 GHz on all cores, which is what I was hoping we'd get at 50 - 60%+ load. This is resulting in poor performance for servers on GS12, including all of our Rust servers.
</p>

<p>
	 
</p>

<p>
	<a contenteditable="false" data-ipshover="" data-ipshover-target="https://gflclan.com/profile/1623-xy/?do=hovercard" data-mentionid="1623" href="https://gflclan.com/profile/1623-xy/" rel="">@Xy</a> and I have been talking in voice for the last few hours, as well as talking to our current hosting provider in Texas. We believe we've found a solid solution and will look to order a new machine in the next day or so, depending on the benchmarks we get back from the hosting provider (which will likely suit our needs). Our hosting provider also has better solutions coming at the end of the month that'll provide better cooling, so we'll be able to get higher clock speeds with the same processor. We'll be using the same processor as the GS12 machine (the Intel i9-9900K), but with higher clock speeds at the same load GS12 is running at (likely &gt; 4.6 GHz).
</p>

<p>
	 
</p>

<p>
	Once I have another update, I will let you know. I apologize for the poor performance as well. I wasn't expecting the machine to down-clock this much at 50 - 60% load.
</p>

<p>
	 
</p>

<p>
	Thank you for understanding.
</p>
]]></description><guid isPermaLink="false">59096</guid><pubDate>Sun, 12 Jul 2020 02:01:02 +0000</pubDate></item></channel></rss>
