
Snowy

[CS:GO ZE] Linux Incoming?

Recommended Posts

weeb skin plox


any luff maps -> ff maps -> random tryhard maps -> tryhard laser maps that isn't ff or irrelevant ff maps -> random tryhard map -> mars_escape -> atix_panic -> any surf maps -> crazy_escape -> atix apocalypse -> minas tirith -> random casual maps -> shroomforest 123 -> platformer -> repeat



1 hour ago, CSS ZE Secretary said:

rip my 20 ping :feelsbadman:

rip my 200 ping





9 hours ago, Drifter said:

So this will be a definite advantage for Australian players, is what I'm getting?
Not a massive difference in ping, but slightly better than 260?

I don't believe the latency would change much from Australia on the new server. The latency from our GS15 machine to our Sydney POP is ~270 ms. What will be different is the latency (or ping) displayed inside the server browser: you'll see the latency from you to the POP you're routing to instead of to the physical machine itself. This is due to the A2S_INFO caching we have enabled on our POP servers (the servers that forward game server traffic to the game server machines themselves). You can read what A2S_INFO caching is and why we have it enabled here.
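For anyone curious what the server browser is actually measuring, here's a minimal sketch of an A2S_INFO query in Python. This is based on the public Source server query protocol; the challenge-handling step and the host/port used are assumptions for illustration, not our actual setup:

```python
import socket
import time

# A2S_INFO request packet from the public Source server query protocol.
A2S_INFO = b"\xff\xff\xff\xffTSource Engine Query\x00"

def is_challenge(data: bytes) -> bool:
    """True if the reply is an A2S challenge (header 0x41) rather than info."""
    return len(data) >= 5 and data[4] == 0x41

def a2s_ping(host: str, port: int = 27015, timeout: float = 2.0) -> float:
    """Send an A2S_INFO query and return the round-trip time in ms.

    With A2S_INFO caching on the POPs, this RTT is to the POP you route
    to, not to the game server machine behind it.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendto(A2S_INFO, (host, port))
        data, _ = s.recvfrom(4096)
        if is_challenge(data):
            # Newer servers answer with a 4-byte challenge first;
            # resend the query with the challenge appended.
            s.sendto(A2S_INFO + data[5:9], (host, port))
            data, _ = s.recvfrom(4096)
        return (time.monotonic() - start) * 1000.0
```

Since the POP answers the cached A2S_INFO itself, this RTT reflects you-to-POP, which is why the browser shows a lower number than the real you-to-machine latency.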

 

In Regards To The Move Itself

The processor upgrade will be major, though, considering the new machine is running an Intel i9-10900K. I do believe there will be a big performance increase for the server itself with this move.

 

In regards to the move itself, I haven't been involved much with it, but I was going to suggest a move eventually anyway due to NFO's limitations (e.g. being limited by Internap and not being able to purchase consumer hardware that would be faster than server hardware). Personally, though, I would really like the main server move to wait until Bifrost is completed and stable. I brought up the idea of a second CS:GO ZE server on the network a while back, but I think that came with complications for the community itself. I still think it would be a good move, since I'm sure the second server would easily gain population and we'd have the main server to fall back on if any issues occurred until we're fully ready to move the main server. It would also be a good time to test CS:GO ZE on the network itself. However, it's obviously up to everyone else; that's just my opinion on it.

 

One big thing about this move is that it gives us control over pretty much everything. We own the network itself and rent the IPv4 blocks (but have full control over them). With that, though, we're also responsible for a lot more, such as packet processing/forwarding, routing, (D)DoS protection, and so on. While this area is something I'm super passionate about, it also takes a lot of work, and the network isn't fully built out yet. These are the kinds of things hosting providers who don't resell services from other providers have to handle themselves.

The network's infrastructure is pretty much maintained by myself, with help from @Aurora updating Compressor configs when adding or changing forwarding rules. I won't lie: with everything going on in my life right now, it has been taking a big toll on me recently, and any help we can get maintaining the network is beneficial. The problem is that this network is very complex, and it's very hard for others to understand it on a low level, which is understandable. As a result, we haven't yet found anybody willing to take on the amount of work I do on the network's infrastructure itself. Thankfully, @Aurora is helping out a ton with the game server machines' infrastructure! It's also hard for me to train people in-depth on the network, because that takes a lot of time given the network's complexity.

 

There are also some weird bugs with DockerGen/Linux/IPIP tunnels that all servers on the network suffer from, including CS:GO ZE on the new server. This isn't necessarily related to the Anycast network itself. It seems that, at times when starting up the server, the IPIP tunnel doesn't attach inside the Docker container's network namespace. When we hit this bug, we have to repeatedly start/stop the server until the tunnel attaches, which is quite annoying. This is something I'm still trying to figure out; everything looks configured correctly. We released the DockerGen files we use here for those interested.
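To make the symptom concrete, here's a rough sketch of how you could check whether the tunnel interface made it into a container's network namespace. The interface name (`ipip0`) and container name (`csgo-ze`) are purely illustrative, not our actual config:

```python
import subprocess

def interfaces_in_container(container: str) -> list:
    """List network interfaces visible inside a container's network
    namespace, via `docker exec` (container name is hypothetical)."""
    out = subprocess.run(
        ["docker", "exec", container, "ls", "/sys/class/net"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def tunnel_attached(interfaces: list, ifname: str = "ipip0") -> bool:
    """True if the expected IPIP tunnel interface shows up in the netns."""
    return ifname in interfaces

# Example check (hypothetical container name):
# if not tunnel_attached(interfaces_in_container("csgo-ze")):
#     print("tunnel missing -- restart the server and re-check")
```

When the bug triggers, the tunnel interface simply never appears in the container's interface list, even though the host-side config looks correct, which matches the start/stop-until-it-works workaround above.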

 

Bifrost is the packet processing/filtering software @Dreae and I are working on; the latest update on it can be found here. I'm currently working on a new project that'll be used as a source for Bifrost (replicating IPTables/NFTables' source port mapping in XDP/BPF). You can also read about our plans for Bifrost here. Some things have changed, though, such as me wanting to write everything using XDP/BPF instead of a combination of XDP, TC, netfilter, and IPTables/NFTables.
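For those unfamiliar with what "source port mapping" means here, this is a conceptual userspace model of the SNAT behaviour IPTables/NFTables implement in netfilter (and that we want to replicate inside an XDP/BPF program). All names and the port range are illustrative; the real thing lives in kernel BPF maps, not Python dicts:

```python
class PortMapper:
    """Toy model of connection-tracked source-port mapping (SNAT)."""

    def __init__(self, public_ip: str, port_range=range(32768, 61000)):
        self.public_ip = public_ip
        self.free_ports = list(port_range)
        self.flows = {}    # (src_ip, src_port, dst_ip, dst_port) -> mapped port
        self.reverse = {}  # mapped port -> original flow

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        """Rewrite an outgoing packet's source: allocate (or reuse) a
        public source port for this flow."""
        key = (src_ip, src_port, dst_ip, dst_port)
        if key not in self.flows:
            mapped = self.free_ports.pop(0)  # grab a free source port
            self.flows[key] = mapped
            self.reverse[mapped] = key
        return self.public_ip, self.flows[key]

    def inbound(self, dst_port):
        """Map a reply arriving on the public IP back to the original
        internal source; None if there's no tracked flow."""
        key = self.reverse.get(dst_port)
        if key is None:
            return None
        src_ip, src_port, _, _ = key
        return src_ip, src_port
```

In XDP/BPF the same idea becomes hash-map lookups plus in-place header rewrites before the packet ever reaches the kernel stack, which is exactly why it's fast and exactly why it's fiddly to get right.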

 

Once Bifrost is done, I believe a lot of our issues will be resolved. We won't be relying mainly on IPIP traffic, and it'll be much more user-friendly. With that said, I plan to implement a packet capture engine in XDP for Bifrost so we'll be able to inspect packets on the POPs more easily (right now, since game server traffic is forwarded within the XDP hook, the packets never reach the capture path tcpdump uses, for example).

 

I understand this is a pretty low-level overview of the network right now. A TL;DR version is:

  • This move will definitely give the server itself a heavy performance increase and is good long-term.
  • I'd really prefer we wait until Bifrost is completed before moving the main server. I still think a second CS:GO ZE server on the network would be cool, though; it'd let us see how a CS:GO ZE server performs on the network while keeping the main server to fall back on.
  • This move gives us basically full control over most things. While that's nice to have, it also comes with a lot of responsibility, and a lot of that responsibility falls on me right now.
  • Network maintenance and development take a personal toll on me. Therefore, I'm trying to find others who can help with the infrastructure itself, since it mostly relies on me right now.

 

I had to write this fairly quickly, so I apologize for any mistakes, but I hope this clears some things up and provides a more in-depth overview of the network.

 

If you have any questions regarding the hardware and network, please feel free to reply 🙂 



42 minutes ago, reindeer5 said:

Are the plugins/extensions open source?

The extensions are private. Some of the plugins are open source, but with plugins we only have to worry about porting gamedata (since SourcePawn is already OS-agnostic), so we're not really in need of extra hands there.



On 12/29/2020 at 5:37 AM, Roy said:


Ahhhh awesome, thank you very much for your really detailed answer.

I appreciate the attention to detail and the explanation you gave for my question, thank you ❤️ 



7 hours ago, Drifter said:

 

 

Ahhhh awesome, thank you very much for your really detailed answer.

I appreciate the care to detail and explanation you have for my question, thank you ❤️ 

You're welcome 🙂 



2 hours ago, OxideTM said:

A Threadripper 3950X or 3990X would be much faster than a 10900K, but that will surely boost the cost to run, right?

 

GSK has none of these AMD CPUs in stock at the moment, unfortunately. These CPUs would definitely be faster in regards to single-threaded performance according to the benchmarks I've seen. I believe the Threadripper machine would be too expensive, but the Ryzen 9 3950X is something I wouldn't mind going with if we decide to replace two existing 8-core machines (since the 3950X is 16 cores / 32 threads).

 

I've heard from @Bara that there are new Intel CPUs coming out that may offer even better single-threaded performance than the AMD CPUs. I haven't looked into this much yet, though.



This topic is now closed to further replies.

