
Roy

Administrator
  • Content Count
    2,451
  • Joined
  • Last visited
  • Days Won
    346

Roy last won the day on July 7

Roy had the most liked content!

Community Reputation

9,455 Rainbow

About Roy

Personal Information

  • Location
    San Antonio, TX

Computer Information

  • Operating System
    NoobOS
  • CPU
    Intel i7-8700K @ 3.7 GHz
  • GPU
    RTX 2070
  • RAM
    16 GB DDR4
  • Power Supply
    750W
  • Monitor(s)
    1 x 2K 144 Hz, 1 x 1080p 60 Hz
  • Hard Drive(s)
    1 x 512 GB SSD, 1 x 3 TB HDD

Recent Profile Visitors

172,206 profile views
  1. Also wanted to say thank you to the players who provided MTRs and traceroutes to @Skittlez and @Frozen! It didn't take long to figure out the issue after discovering that two users were seeing timeouts/packet loss when routing to the Dallas POP (that POP doesn't forward ICMP replies, which made it easy to tell that the POP itself was dropping the traffic).
  2. Hi, I'm making this thread for documentation purposes. Last night, while setting up the new game server machine, I decided to add new IP allocations and sync the POP configs (which included the latest config from the POPs running the new filters). That config works fine on POPs that aren't running the new filters. However, the one thing I forgot is that POPs running the old version didn't have the Rust rate limit multiplier they needed (our standard rate limit is relatively low, which worked fine for Source Engine servers but not for Rust servers). This resulted in players easily being rate limited and timing out when routing to POPs not running the new filters.

     I've pushed an update to all POPs raising the standard rate limit for the time being. Once all POPs are running the new filters, I will lower the standard rate limit again, and Rust servers will have a multiplier of four, which will be fine.

     This issue should be resolved, and I haven't received any complaints since. I just wanted to apologize for the inconvenience. It was something I overlooked, and since I was in a rush to get the new machine running and tested last night (along with Xy), I didn't catch it while updating the POPs. Thank you for understanding.
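     For anyone curious how the multiplier factors in, here's a rough sketch of the idea (this is not the actual filter code; the base limit, the one-second window, and the numbers are all made up for illustration):

```c
/*
 * Hypothetical sketch of a per-game rate-limit multiplier.
 * Not the real POP filter code: BASE_PPS_LIMIT, the window length,
 * and the game types are illustrative assumptions only.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define BASE_PPS_LIMIT 2000ULL            /* made-up standard per-source limit */

enum game_type { GAME_SOURCE_ENGINE, GAME_RUST };

/* Per-source packet counter for the current one-second window. */
struct src_state {
    uint64_t window_start_ns;
    uint64_t pkts_in_window;
};

/* Effective limit = standard limit * per-game multiplier (Rust gets 4x). */
static uint64_t effective_limit(enum game_type game)
{
    uint64_t multiplier = (game == GAME_RUST) ? 4 : 1;
    return BASE_PPS_LIMIT * multiplier;
}

/* Returns true if this packet pushes the source over its limit. */
static bool over_limit(struct src_state *st, uint64_t now_ns, enum game_type game)
{
    if (now_ns - st->window_start_ns >= 1000000000ULL) {
        st->window_start_ns = now_ns;     /* start a new one-second window */
        st->pkts_in_window = 0;
    }
    st->pkts_in_window++;
    return st->pkts_in_window > effective_limit(game);
}

int main(void)
{
    struct src_state st = { 0, 0 };
    uint64_t dropped = 0;

    /* Simulate 10,000 packets from one source hitting a Rust server in one window. */
    for (int i = 0; i < 10000; i++)
        if (over_limit(&st, 1, GAME_RUST))
            dropped++;

    /* With a 4x multiplier the limit is 8,000, so 2,000 packets get dropped. */
    printf("dropped %llu of 10000 packets\n", (unsigned long long)dropped);
    return 0;
}
```

     The old config simply didn't have the multiplier, so Rust traffic was judged against the Source Engine sized limit, which is why players were hitting it so easily.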
  3. Hey everyone, I just wanted to see if there are any web designers who could give input on or contribute to an open-source project I'm working on called Barricade Firewall. The project can be found on GitHub here. This is a neat personal project and is based on the XDP Firewall I made here. Barricade offers a performance improvement to the XDP program itself (located in this commit). The firewall will also be able to connect to a backbone to sync configs/filters and report stats to it (this isn't finished yet). As of right now, the XDP firewall itself works fine without the backbone, and you can set filters and config options in the config file using JSON. An example is included in the README on the firewall's GitHub repo here.

     I'm currently in the process of creating the web design, but I'm honestly not a web designer (I prefer back-end programming). So far, here's what I have: https://g.gflclan.com/A917JW63Hw.mp4 I haven't touched the content on the page yet, which is why it looks bad at the moment. However, I do believe the nav-bar looks somewhat decent at least. I just wanted to see if there was anybody interested. You can view the CSS/SCSS code here and the HTML code through the Elixir templates here (I use the Bootstrap CSS framework).

     This project also has the potential to turn into a full-fledged firewall. However, I'm going to push out a basic version and switch focus to BiFrost with Dreae before implementing more complex features (e.g. forwarding rules synced to the firewalls via the NFTables API). Dropping traffic with native XDP is one of the fastest methods in the Linux networking path at the moment, which is why I feel the firewall has a lot of potential. Thanks!
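     For anyone who hasn't seen XDP before, here's a bare-bones sketch of the kind of program involved (this is not the Barricade or XDP Firewall source; the blocked port is made up). It drops matching UDP packets at the driver level, before they ever reach the kernel's networking stack, which is where the performance comes from:

```c
// Minimal XDP sketch (illustrative only, not Barricade's code): drop UDP
// packets destined to one hypothetical port and pass everything else.
// Roughly: clang -O2 -g -target bpf -c xdp_drop.c -o xdp_drop.o, then
// attach with iproute2 or libbpf.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/in.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define BLOCKED_UDP_PORT 27015            /* made-up port for the example */

SEC("xdp")
int xdp_drop_udp(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Ethernet header, with the bounds check the verifier requires. */
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    /* IPv4 header (skipping packets with IP options for brevity). */
    struct iphdr *iph = (void *)(eth + 1);
    if ((void *)(iph + 1) > data_end)
        return XDP_PASS;
    if (iph->ihl != 5 || iph->protocol != IPPROTO_UDP)
        return XDP_PASS;

    /* UDP header. */
    struct udphdr *udph = (void *)(iph + 1);
    if ((void *)(udph + 1) > data_end)
        return XDP_PASS;

    /* Drop in the driver, before the packet touches the kernel stack. */
    if (udph->dest == bpf_htons(BLOCKED_UDP_PORT))
        return XDP_DROP;

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

     The real firewalls obviously do much more than this (source filters, rate limits, stats, and so on), but every rule ultimately boils down to returning XDP_DROP or XDP_PASS like above.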
  4. IT'S ALL YOUR FAULT! HOW DARE YOU MANAGE SERVERS THAT HAVE A BIG SPIKE IN POPULATION!
  5. I wonder why servers on that machine are suffering from performance issues... Also, I'm going to talk to our staff to ensure machine load is monitored before we put servers on future machines. We've had a big spike in Rust population recently, which is why we're seeing such high load. Thanks.
  6. I've ordered the machine from the Texas hosting provider after reviewing the benchmark results. We're going to be offloading servers from GS12 (probably all Rust servers) and keeping the new machine at <50% load so we get the best clock speeds possible. It is likely we'll have this machine set up by tonight. ACLs are already removed for this hosting provider since we have GS08 and GS09 with them. Thanks!
  7. One other thing I wanted to add is that we are also looking into using OVH. However, I'd like to know if we can:

     1. Overclock their servers with the KVM access given. OVH does water-cooling to my understanding, so temperatures shouldn't be a big issue.
     2. Get the ACLs removed that prevent us from using the IPIP program I made, which sends traffic back to the client directly by spoofing the source as our Anycast network.

     My biggest concern is #2. I know OVH can do it since they own all of their infrastructure. However, I'm not sure if they'd be willing to, given they're a very big company. We would need the ACLs removed since we currently don't have a POP server in any of the locations OVH supports. Otherwise, we'd be looking at an additional 10 - 20 ms of latency on the network depending on the POP the OVH machine routes to.

     If we could get an OVH machine, that'd be awesome, since OVH is one of the most stable hosting providers I've ever used. It would also be nice if we could colocate somewhere. However, I don't trust myself to build a production server and overclock it since I haven't done that before. Thanks!
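     To clarify what the ACLs need to allow for #2: the game server machine replies to the client directly, but writes the Anycast address as the packet's source IP, so the provider must not drop outbound packets whose source isn't one of the machine's own addresses. Here's a rough, hypothetical illustration of that kind of send (not the actual IPIP program; every address and port below is made up):

```c
/*
 * Hypothetical sketch only: send a UDP reply whose IP source is set to an
 * Anycast address rather than the machine's own address. Requires root
 * (CAP_NET_RAW), and it is exactly the kind of packet a provider's
 * anti-spoofing ACLs would normally drop.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/udp.h>

int main(void)
{
    const char *anycast_src = "192.0.2.10";     /* made-up Anycast address */
    const char *client_dst  = "198.51.100.25";  /* made-up client address  */
    const char *payload     = "reply";
    size_t plen = strlen(payload);

    int sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    if (sock < 0) { perror("socket"); return 1; }

    int one = 1;
    setsockopt(sock, IPPROTO_IP, IP_HDRINCL, &one, sizeof(one));

    unsigned char pkt[sizeof(struct iphdr) + sizeof(struct udphdr) + 64] = {0};
    struct iphdr  *ip  = (struct iphdr *)pkt;
    struct udphdr *udp = (struct udphdr *)(pkt + sizeof(struct iphdr));
    memcpy(pkt + sizeof(struct iphdr) + sizeof(struct udphdr), payload, plen);

    ip->version  = 4;
    ip->ihl      = 5;
    ip->ttl      = 64;
    ip->protocol = IPPROTO_UDP;
    ip->tot_len  = htons(sizeof(struct iphdr) + sizeof(struct udphdr) + plen);
    ip->saddr    = inet_addr(anycast_src);      /* spoofed source: the Anycast IP */
    ip->daddr    = inet_addr(client_dst);
    ip->check    = 0;                           /* kernel fills in the IP checksum */

    udp->source = htons(27015);                 /* made-up game server port */
    udp->dest   = htons(50000);                 /* made-up client port      */
    udp->len    = htons(sizeof(struct udphdr) + plen);
    udp->check  = 0;                            /* optional for UDP over IPv4 */

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_addr.s_addr = ip->daddr;

    ssize_t n = sendto(sock, pkt, sizeof(struct iphdr) + sizeof(struct udphdr) + plen,
                       0, (struct sockaddr *)&dst, sizeof(dst));
    if (n < 0) perror("sendto");
    else printf("sent %zd bytes with source %s\n", n, anycast_src);

    close(sock);
    return 0;
}
```

     Without that exception, the provider's edge sees the Anycast source, treats it as spoofed, and drops the reply, which is why traffic would otherwise have to route back through a POP and pick up the extra 10 - 20 ms mentioned above.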
  8. Hey everyone, I just wanted to briefly address the recent performance issues with our GS12 machine. Unfortunately, the machine's processor down-clocks a lot more than I expected at higher load. As of right now, it clocks to 4.3 GHz on all cores at ~50 - 60% load. This is because the processor gets too hot running all cores above 4.3 GHz and needs to down-clock. At 20 - 30% load, we see 4.6 - 4.9 GHz on all cores, which is what I was hoping we'd get at 50 - 60%+ load. This is resulting in poor performance for the servers on GS12, which include all of our Rust servers.

     @Xy and I have been talking in voice for the last few hours, and we've been talking to our current hosting provider in Texas. We believe we've found a solid solution and will look to order a new machine in the next day or so, depending on the benchmarks we get back from the hosting provider (which are likely going to suit our needs). Our hosting provider also has better solutions coming at the end of the month that'll result in better cooling, so we'll be able to get higher clock speeds with the same processor. We will be using the same processor as the GS12 machine (the Intel i9-9900K), but with higher clock speeds at the load GS12 is currently running at (likely > 4.6 GHz).

     Once I have another update, I will let you know. I apologize for the poor performance as well; I wasn't expecting the machine to down-clock this much at 50 - 60% load. Thank you for understanding.
  9. I got a really good part of my XDP Firewall completed for my Barricade FW project here. It's basically just like my plain XDP Firewall, but with performance improvements, and the config is stored as JSON (a rough config-loading sketch follows this post). Pretty happy with how smoothly I've transitioned everything.

     

     The next thing to do is complete the web back-end in Elixir and have the XDP firewall(s) and the web back-end communicate with each other over an encrypted connection to sync the configs (filters, etc.). I'm still going through a lot of Elixir documentation trying to understand some key features of the language. However, once that's done, I'll be good to go :)

     

     Afterwards, it'll be time to work on creating testing websites and new Kubernetes and CI/CD projects that'll help with my career growth. Once the Barricade FW project is done along with the Kubernetes and CI/CD projects, I'll be looking really good career-wise! Then after this, I'll go fully into working on BiFrost with Dreae.

     

    Pretty excited for everything 😄
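     Since I mentioned the JSON config above, here's a rough idea of what the config-loading side can look like in C using the jansson library. This is not Barricade's actual code or schema; the field names ("interface", "filters", "port", "action") are made up for illustration:

```c
/*
 * Hypothetical config-loading sketch using jansson (not Barricade's real
 * schema; the field names are illustrative only).
 * Build with: gcc config_sketch.c -o config_sketch -ljansson
 */
#include <stdio.h>
#include <jansson.h>

int main(void)
{
    json_error_t err;
    json_t *root = json_load_file("config.json", 0, &err);
    if (!root) {
        fprintf(stderr, "config parse error (line %d): %s\n", err.line, err.text);
        return 1;
    }

    /* Top-level string option, e.g. the interface the XDP program attaches to. */
    const char *iface = json_string_value(json_object_get(root, "interface"));
    printf("interface: %s\n", iface ? iface : "(missing)");

    /* Array of filter objects, each with a destination port and an action. */
    json_t *filters = json_object_get(root, "filters");
    if (json_is_array(filters)) {
        size_t i;
        json_t *f;
        json_array_foreach(filters, i, f) {
            json_int_t port    = json_integer_value(json_object_get(f, "port"));
            const char *action = json_string_value(json_object_get(f, "action"));
            printf("filter %zu: port=%lld action=%s\n",
                   i, (long long)port, action ? action : "(missing)");
        }
    }

    json_decref(root);
    return 0;
}
```

     In the real project the parsed filters would presumably end up in BPF maps that the XDP program reads, but that's beyond this quick sketch.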

  10. [Attached screenshot of the player count: 2952-07-10-2020-BCrJciFe.png]

     

     We beat our record again and nearly had 1100 players on at the same time :) We're getting really close to beating our all-time record of 1299 players from 2015 (which included a TS3 server with 100+ users at the time that added to the count).

  11. A few pics I took today after attempting to push a ~271-pound treadmill up the stairs by myself (that didn't work out well). I also gave myself a haircut last week for the first time, and I think it's looking okay lmao.
  12. Welp, although I've felt swamped recently with work from GFL and many other things (there's a lot going on right now), I feel I'm making great progress at least. Unfortunately, I haven't been able to work on the Barricade Firewall project this past week, but I'm hoping to start again once I figure out all these annoying GFL network issues (I want to expand the filters to Europe soon, but I'm trying to fix some reports first). After finishing the Barricade FW project, I plan on getting back into BiFrost (initially called Compressor V2) with Dreae, and hopefully the experience I gain from Barricade (specifically with Elixir) will help a lot.

     

     Other than that, I've thought of a great idea for me personally. I've made it known that I want a DevOps position, and I've noticed that pretty much anything related to DevOps requires knowledge of CI/CD tools along with Kubernetes. I was already planning to get into CircleCI (a CI/CD tool), but after looking into Kubernetes, it highly interests me as well. Therefore, I plan on making two websites: one written in Go and the other written in Elixir, each as a separate repo on my GitHub profile. These will be testing websites, but I plan on implementing some pretty neat functionality such as user management systems along with some other back-end things. I also want some resource-intensive tasks within these websites so I can see the difference when scaling them with Kubernetes later.

     

     After I create these websites, I'm going to make two more GitHub repos: one for CircleCI and the other for Kubernetes. I'm going to learn CircleCI in more depth and create open-source configurations (in these repos) that build both websites and deploy them to production if all tests pass.

     

     With the Kubernetes repo, I plan on making a bunch of README files (*.md files) that document the commands I run with Kubernetes along with the results. I want to document how I completely scale out the web application across multiple Kubernetes clusters. I also plan on making a personal Vultr account and using some VMs from them for this project. This'll also give me an opportunity to mess with startup scripts on Vultr.

     

     I believe this'll look really great for me, and it'll be way better than just saying "I have experience with Kubernetes" or "I have certs" (personally, I think a repo documenting everything I've done with Kubernetes would look more attractive than some certificate(s)).

     

    Still have so much more to do and I've felt quite burned out. But I'm hoping it is all worth it in the end.

    1. Roy

      Also, these are things that could benefit GFL sometime in the future once I gain experience. For example, we should be able to put GFL's website into high availability, which is a project I've wanted to do for some time now.

  13. He was a Council member (back when Council was nearly the equivalent of what Director is nowadays) from 2012 to 2016 or something like that. I don't remember the exact time frame of his Council term, but he joined GFL in 2011. P.S. I think @Floopyhiggle is the only person who got to see my bunny back in 2011 or early 2012.