Posts posted by Roy
-
Hello, I just wanted to let everybody know that I opened the "Updates" forum to all "higher" team members (e.g. the eSports team, Media team, etc.).
With that said, I made a new category as well named "Administration". I felt the "Updates" forum didn't necessarily belong in the "Teams" section. Along with this change, I made another forum under the "Administration" category for discussing and brainstorming ideas that will help GFL as a whole.
Thanks.
-
All the servers are moved off of the old machine (I did this sooner than I expected). If you experience any issues, please let me know. I ensured each server successfully connected to the MySQL server. However, there is always a possibility that something else went wrong.
4 hours ago, Batty said:
After you explained what was different with Linux, I can't wait to see the rest of the servers moved to Linux (if they do get moved). Good choice!
Yep! Basically, in my opinion, Linux beats Windows for game servers any day.

Thanks.
-
CS:S Bunny Hop and Garry's Mod Murder #1 are moved to the new VPS.
Note to Current/Future Server Managers running GMod servers on Linux
None of the folders in addons/ may contain uppercase letters; if they do, they will not be loaded. I believe this is because Garry's Mod looks the addons up using all-lowercase names. This isn't an issue on Windows, where "notes", "Notes", and "nOteS" all refer to the same directory, but Linux filesystems are case-sensitive, so "notes" and "Notes" can be two separate directories.
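For anyone scripting this migration, here is a rough sketch of a rename pass over addons/ (the function name and dry-run flag are my own invention; run it on a copy first, since a pre-existing lowercase folder with the same name would collide):

```python
import os

def lowercase_addon_dirs(addons_path, dry_run=True):
    """Rename any addon folder containing uppercase letters to all-lowercase.

    Returns a list of (old_name, new_name) pairs. With dry_run=True nothing
    is renamed, so the planned changes can be reviewed first.
    """
    renames = []
    for name in sorted(os.listdir(addons_path)):
        full = os.path.join(addons_path, name)
        if not os.path.isdir(full):
            continue  # skip loose files such as README.txt
        lower = name.lower()
        if lower != name:
            renames.append((name, lower))
            if not dry_run:
                os.rename(full, os.path.join(addons_path, lower))
    return renames
```

Running it once with the default dry run prints what would change; a second call with `dry_run=False` applies the renames.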
Thanks.
-
1 hour ago, TheQueenOfDicks said:
Is VPS as good as our old machine? If so why don't we buy more VPS?
The cores on NFO VPSs are virtualized, which lowers performance. Running CS:GO ZE on a VPS, for example, would be a very bad idea; performance would likely drop by around 50%.
Thanks.
-
I have purchased the VPS and cancelled the machine. We have until the 23rd (the 21st for me, since I go on vacation) to move our servers off the machine. I will be working on this over the next couple of days.
Thanks.
-
Hello everybody, I've took a look at our current NFO Chicago setup and a couple machines are under-loaded again. Given our current financial state, I believe it would be best to make some changes. So, here is goes.
Servers Affected (NFO Identifiers)
- CS:GO Inferno FFA DM
- CS:GO Arena
- CS:S Bunny Hop
- CS:GO DodgeBall
- GMod Hide and Seek
- GMod Murder #1
- GMod Murder #2
- CS:GO Surf Timer All Tier
- CS:GO TTT
- TF2 DeathRun
- TF2 Freak Fortress
Orange = Being moved to another machine smoothly.
Red = Being moved to another machine with an operating system change (Windows to Linux). Will need some adjustments (which I can work on).
Normal = Offline or to-be-removed.
Machine Removal
Name - chicago-quad34-i-39
Price - $129.99/m
Specs -
- Intel Xeon E3-1270 @ 3.4 GHz
- 16 GB of RAM
- 1 TB of hard drive space
VPS Addition
Core Count - 4 (4 or above is required for "Managed")
Price - $31.99/m
OS - Linux (Gentoo)
Notes -
- We are making this Linux because we would have to pay an additional $14.00/m if we went with Windows (licensing). That said, the servers planned to move to Linux aren't affected by this limitation.
- Linux also generally offers more features than Windows for our use case.
Price Differences
Machine - $129.99/m
VPS - $31.99/m
Saved - $98.00/m ($129.99 - $31.99)
What's Next?
Next month once our back-end system is done, I plan on attempting to move our European NFO machine to an OVH GAME machine. We would save $100.99/m ($179.99 - $79.00).
If you have any questions, feel free to ask.
Thanks.
-
53 minutes ago, Spookytime666 said:
@Roy currently we are working on bringing back Deathrun (Not a big secret, most know already) then there are some other oldies I want to bring back. We haven't decided if/when we want to do Jailbreak again.
With what you said about the MOTD Ads, I'm all for them and have been wondering when we were going to do them. Now if you were to add them to the loading screen, would it be possible to maybe have the ad just be in a corner of the screen? Also, you say Members+ are immune to the ads (I figure this is to encourage people to apply for Member), but does that mean someone who is Member+ can't see ads? If they can still choose to see them, I would enable them just to help out.
Initially, I was planning on having the ad as a pop-up. Thinking about it now, we could easily just do it in the corner. The problem is, I could never get the loading screen with the ad to load consistently (e.g. the ad/page would only load 1/4 of the time).
We can definitely add a feature so that Members+ can have the ads load if they want to support GFL in the loading screen.
Thanks.
-
Hello. It has been a while since I’ve made an update post, but here it is! There are many things to talk about. In the end, we are doing our best to improve. Anyway, let’s go!
Division Updates
Rust
As many of you know, we’ve recently promoted @Slotter as a Rust manager. Since then, he has been doing a great job at building and finishing up the server. We’ve never had a complete Rust server until now. From what I’ve seen, the VIP/Supporter perks look great as well.
The only thing lacking is the hardest part of managing a server: the population! While we do get around 10 - 12 players during our peak, a majority of those players are already in GFL (no “randoms”). In my opinion, having around 10 - 12 GFL members on the server should definitely spark more population. Unfortunately, it appears Rust behaves differently from other games such as CS:GO, where 10 - 12 GFL players can likely fill a server. One thing I have noticed in Rust is that when we do get randoms, they usually stay connected for a long time. This will help a lot once we figure out how to bring in randoms.
Lastly, let’s talk about some things that could help populate the server. The first thing that comes to mind is advertising the server through mediums such as websites and word of mouth. Perhaps getting a popular Twitch streamer to actively play on the server would help as well. There are also two server browser tabs for community servers (Community and Modded), and we could try changing which one we show up under to see if that helps. Other than that, I cannot think of anything else.
Garry’s Mod
In my opinion, Garry’s Mod is the best game to try to expand into at the moment. With CS:GO and TF2 generally suffering, Garry’s Mod is fully community-based, and a company like Valve won’t come in and destroy community servers. Recently, we promoted two Garry’s Mod Division Leaders (@Spookytime666 and @Zebra) and I’ve definitely noticed an improvement (e.g. things are starting to get done). This is great to see. Of course, I still believe we need to improve on things.
First, we should be focusing on Supporter/VIP perks. While we are already doing this on TTT #1, I’m not sure if this is being done on the other servers. GFL is in a state where donations are very important, especially with the monthly decline in the ad revenue we are receiving. A question I’ve noticed being asked is whether we should provide “pay-to-win” perks. PTW (Pay-To-Win) perks give a player an advantage over other players in gameplay. While I don’t believe we should be offering perks such as “you do 150% more damage than other players”, I believe we can offer slight advantages. For example, on Purge, a Supporter/VIP class slightly stronger than the default classes should be fine.
Second, we need to get MOTDGD ads added to our servers. Initially, I was going to add the ad to the loading screen, but I’m reconsidering that (keep in mind, this would give us a lot of money if successful). However, once a player connects, an MOTDGD window should be opened with an ad playing (Members+ being immune). This will likely increase our ad revenue.
Third, we should expand into more servers such as DeathRun and Jailbreak. We have succeeded with these servers in the past but unfortunately stopped showing interest in them, resulting in their deaths. We should definitely try these again and keep them up-to-date and stable long-term (e.g. correct long-term management).
Lastly, each server should have FProfiler. Only super admins should be able to use the FProfiler commands. FProfiler will help us see which server addons are causing low performance for the server and client. Even if a server isn’t experiencing performance issues, it doesn’t hurt having it on the server just in case an addon is added in the future resulting in poor performance.
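To illustrate the idea behind a hook profiler like FProfiler (which is a Lua addon and works differently internally), here is a hypothetical Python sketch that simply times each addon's per-frame hook and totals the cost, so the most expensive addons sort to the top:

```python
import time
from collections import defaultdict

def profile_hooks(hooks, frames=100):
    """Time each addon's per-frame hook over a number of simulated frames.

    `hooks` maps an addon name to a zero-argument callable standing in for
    that addon's Think/Tick hook. Returns a dict of addon -> total seconds.
    """
    totals = defaultdict(float)
    for _ in range(frames):
        for addon, fn in hooks.items():
            start = time.perf_counter()
            fn()  # run the addon's hook once for this frame
            totals[addon] += time.perf_counter() - start
    return dict(totals)
```

Sorting the returned dict by value immediately shows which addon is eating the frame budget, which is essentially what FProfiler's reports give you in-game.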
To conclude, our Garry’s Mod division’s future is looking bright. Focusing on the things listed above, in my opinion, will strengthen our division. I do, however, still believe there are other things to improve on as well such as bad/inactive admins.
Team Fortress 2
With the recent promotion of @Mercer, we should get the ball rolling in TF2 again. As of right now, we will be setting up two new servers: Freak Fortress 2 and DeathRun. In my opinion, these game modes are the best to expand into. They are both popular in TF2 and shouldn’t require a lot of admin activity. DeathRun is also very easy to set up.
I’ve also seen a recent YouTube video where a (Twitch?) streamer visited Valve’s headquarters and was told that they are currently working on a new server browser for TF2 and possibly other Valve games. I have sent an e-mail to the main TF2 developer asking whether this is true and, if so, whether any more information is available. I have yet to receive a reply. However, if this is indeed true, TF2 community servers may see a big improvement (the current server browser is extremely broken).
Counter-Strike: Global Offensive
I do not have anything important to discuss with our CS:GO division at the moment, but I will in the next couple of updates. After talking to @Ariistuujj, he pointed out that CS:GO MG had the “empty” tag in sv_tags. This means the server was tagged as “empty” at all times (even when it wasn’t empty). To the server managers, I would strongly suggest checking the sv_tags on your server and ensuring they are:
- Up to date
- Unique
- Properly set
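As an illustration of the kind of checks a manager could automate, here is a small sketch; the tag names and rules are examples I made up, not an official list:

```python
def check_sv_tags(sv_tags, player_count):
    """Flag common sv_tags problems on a Source server.

    `sv_tags` is the comma-separated tag string from the server config;
    `player_count` is the current player count. Returns a list of
    human-readable warnings (empty if the tags look fine).
    """
    tags = [t.strip() for t in sv_tags.split(",") if t.strip()]
    warnings = []
    seen = set()
    for t in tags:
        if t.lower() in seen:
            warnings.append(f"duplicate tag: {t}")
        seen.add(t.lower())
    # The MG problem described above: "empty" hard-coded while players are on.
    if "empty" in (t.lower() for t in tags) and player_count > 0:
        warnings.append("'empty' tag set while the server has players")
    return warnings
```

A script like this could run against each server's config and report anything stale before players ever see a mis-tagged server in the browser.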
I, or possibly Ariistuujj, will eventually make a post explaining the sv_tags command further and why it is so important.
Other than that, while there are other things going on, they are not ready to be discussed.
Pirates, Vikings, & Knights
A while ago, we set up a server in PVKII. For the first couple of weeks, it never filled up. Afterwards, it had a three-day span of continuous population. While this was awesome to see, I knew it wouldn’t last, and I was right. PVKII isn’t a stable game; there are usually only around 3 - 4 populated servers at a time (at peak).
A big question is, “Will this server be taken down?”. Currently, it is not generating any donations; at the same time, it isn’t really costing us anything either, since our machines are already underloaded. Therefore, my answer to that question is: the server will stay up unless our machines start to become overloaded again while the server isn’t generating income.
eSports Team Success
A while ago, we introduced our first eSports team. While the launch of this eSports team wasn’t smooth (e.g. a lot of complaints), I strongly believe we have succeeded. From what I’ve seen, the team is great: great skill, sportsmanship, etc. It’s almost perfect for GFL. With that said, there are things GFL’s core will need to do to help this eSports team succeed even further, and I plan on getting those sorted, although some of them are very difficult to achieve.
Overall, I would just like to say a big thanks to our eSports team’s roster!
OVH Back-End
In the past couple of updates, I’ve briefly mentioned that we are working on a new back-end system for future OVH machines. Before going further: we want to eventually attempt to move off of NFO. I’ve explained this many times in the past, but to sum it up: OVH offers better hardware than NFO at less than half the price. While having our servers run on these OVH machines sounds like a dream for GFL, there are concerns with moving our servers off of our NFO machines.
- I’ve heard a couple of stories about server owners moving off of NFO to OVH and completely losing their player base on the spot. While I find it hard to believe that even transferring the GSLT over can’t (at least temporarily) carry over the old player base, it’s definitely a big concern.
- OVH’s current datacenter close to the USA isn’t a great location and unless we get our IPs geo-located to Chicago or another populated area, we aren’t going to move our US servers. This is due to the Server Browser issue I’ve explained before.
@Ariistuujj found an article stating that OVH plans to expand into the US, specifically to Vint Hill, Virginia. Initially, my heart sank and I was highly upset that this wasn’t close to Chicago. However, after thinking about it, Chicago has become very saturated for game servers; more specifically, NFO. A few years ago, I feel game servers were more spread out (e.g. we had populated US servers on the East Coast, on the West Coast, and in the South and North). However, when I look at GameTracker nowadays, I see NFO - Chicago a majority of the time. Honestly, if we have the money, I think OVH’s US location isn’t a bad option to at least try to expand or move to.
As for the back-end system we are building, it is Linux-based. It is being developed by Killer-Banana and me; however, @SPOOKY-Banana is doing a majority of the work :D. Having this developed on Linux will open up a lot of features for server managers, including advanced debugging (e.g. using SourceMod Accelerator for server crashes). Since we will have full control of the machine, we can let server managers see their server’s CPU usage in real time using a command such as ‘htop’.
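Beyond watching htop interactively, full machine access also means we could script this kind of monitoring. A rough sketch that parses `ps -eo pid,pcpu,comm`-style output (the process name `srcds_linux` here is a placeholder for whatever the game server binaries end up being called):

```python
def cpu_by_server(ps_output, server_names):
    """Sum %CPU per game-server process name from `ps -eo pid,pcpu,comm` output.

    `ps_output` is the raw command output (header line included);
    `server_names` is the set of process names to report on.
    Returns a dict of name -> total %CPU across matching processes.
    """
    usage = {name: 0.0 for name in server_names}
    for line in ps_output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split(None, 2)
        if len(parts) < 3:
            continue  # ignore malformed lines
        _pid, pcpu, comm = parts
        if comm in usage:
            usage[comm] += float(pcpu)
    return usage
```

Fed from a cron job, output like this could drive the real-time CPU readout mentioned above without giving every manager shell access.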
A more descriptive list of features will be posted in the future along with guides on how to use the new interface (basically Linux tutorials). Many server managers will understandably panic over this since they may not even know what Linux is. But honestly, this stuff isn’t difficult to do and considering there will be full server guides available on the GFL forums along with support (e.g. Technical Administrators and other server managers), there really shouldn’t be any issues.
To conclude, we unfortunately still have no ETA on when this back-end will be done. However, I am hoping it won’t take too much longer. This system will strengthen our server’s back-end and with full control come many possible features that can be used to improve our servers. With that said, the adage “with great power comes great responsibility” also plays a part in developing this system.
IPS 4 Application
One thing GFL has been severely lacking is official forms (e.g. player/admin reports, ban appeals, admin applications, etc.). While I agree this should be a high priority for GFL, it’s not as simple as a few of you think. It really depends on what we want to do. We have a few choices:
1. Implement this within our IPS 4 application - long process
2. Buy a Forms addon - short process, $40.00 purchase, and features are N/A
3. Look into IPS 4 Databases - short process, free, but features are N/A and it possibly wouldn’t meet our needs
4. Modify (code) IPS 4 Databases or the Forms addon - long-ish process; if modified right, it would meet our needs
Personally, I believe 1 and 4 are our best options. However, I’m not sure what we’re going to do. I still need to talk to Denros.
Other than that, @denros and I (mostly Denros) got a Server List up and running on our main forums, which is great to see! In the future, I hope to eventually code a machine list covering our future OVH machines along with their hardware and possibly even current CPU/memory usage. Each server will have a machine assigned to it, indicating which machine it is hosted on.
While there are many other things that we can add onto our IPS 4 application, I don’t have any to announce at the moment due to lack of information & planning. I will eventually discuss them in future updates, though.
Player Time Tracker
I believe coding an advanced and fast Player Time Tracker will benefit GFL. For example, we could add a feature that automatically alerts the server manager when an admin has been inactive for X days. I also feel implementing this into the website (e.g. the IPS 4 application) would be beneficial.
We’ve been facing issues regarding inactive admins, and an excuse I often see is “I didn’t even know they were inactive”. With this Player Time Tracker, that should no longer be an issue.
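The alert side of that tracker is simple once we store each admin's last session time. A minimal sketch, assuming a name-to-datetime mapping (the 14-day threshold is just an example, not a decided policy):

```python
from datetime import datetime, timedelta

def inactive_admins(last_seen, threshold_days=14, now=None):
    """Return admins whose last recorded session is older than the threshold.

    `last_seen` maps an admin name to the datetime of their last play
    session. Returns a sorted list of names to alert the manager about.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=threshold_days)
    return sorted(name for name, seen in last_seen.items() if seen < cutoff)
```

A daily job running this over the tracker's data and posting the result to the server manager would remove the "I didn't know" excuse entirely.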
More information will be released in the next upcoming updates about this.
Board of Directors Reformation
As most of you know, @Kim and @Santahiggle have stepped down from BoD. To fill the void, the BoD team has decided to promote @Dano and @denros (officially) to the BoD group. While many questions are being asked about this, let’s go back in time to when the BoD group was created. Initially, BoD was created to be like the ex-Council rank, but with less load. For example, we weren’t requiring BoD members to play games; instead, they only needed to be active in non-game applications such as the forums, TeamSpeak 3, etc. Unfortunately, due to lack of interest in GFL or personal issues, this didn’t work out for members of our BoD team.
Communication between the BoD team and staff has been very poor, so something eventually had to be done. I’ve seen a few people question the BoD team’s existence and recommend just removing it. That is not an option: BoD has a lot of access, access that I don’t want anyone outside of BoD to have. Another question I’ve seen asked is why we don’t have the Trusted group vote for future BoD members. Recently, we’ve been holding final votes in the Trusted group for promotions. While from the Trusted group’s view this would make sense for future BoD members as well, the BoD rank is an extremely high rank and it requires a lot of trust not only from GFL, but from me as well. BoD promotions will be kept within the BoD team.
With Dano and Denros being promoted to BoD, I feel the BoD team as a whole will become more active since we have mostly active members in it. Both Dano and Denros have done much for GFL and I, along with many others, am confident that they both will do great as BoD.
To conclude, I feel this BoD reformation will strongly strengthen GFL, especially with knowing we promoted active users who are great at communication.
Personal Update
Vacation
I will be going on a family vacation from October 22nd to October 29th. I apologize for the very short notice. I will have my laptop with me, so I will still be able to do things for GFL in the case of an emergency.
New Job
As I’ve said in the past, I will be getting a new job. I’ve been waiting for around 1.25 months now due to drug tests failing (on their end, not mine). I’ve had to take three drug tests… With the last drug test, they said they were very confident it wouldn’t come back as failed, and after I come back from vacation, I will be trained.
With a combination of this new job and things in life I have to do (e.g. getting signed up for school), I won’t have nearly as much time for GFL. With the new BoD selections, I feel confident that becoming less active won’t harm GFL.
TL;DR
We’ve had many changes take effect in our game divisions recently. First, Rust got a new manager (Slotter) and is finally completely set up; however, we still need to find ways to get the server more populated. Second, Garry’s Mod had two promotions (Spookytime666 and Zebra) and things are starting to get done, though we still need to focus on things like Supporter/VIP perks, MOTD ads, expanding into new servers, and ensuring all of our servers have FProfiler. Third, with the promotion of Mercer in TF2, we can try to get the ball rolling in TF2 again; setting up Freak Fortress 2 and DeathRun will definitely benefit us. Valve is apparently working on a new server browser for TF2 as well; I am trying to get more information about that from the main TF2 developer. Fourth, while there are things going on in the CS:GO division, I have nothing to announce at the moment. I would like managers to check their sv_tags to ensure they’re up-to-date, unique, and properly set. Finally, our PVKII server’s population has been suffering recently. However, that was expected with a game such as PVKII (e.g. only three to four servers populated at a time at peak). We will be keeping our PVKII server unless our machines start to become overloaded.
Although we received a lot of negative feedback when we launched our eSports team, I feel we have greatly improved since then. Our eSports team has been a success and it’s great to have them in GFL. We need to make a few changes to help them go further, and I plan on getting them done, though those changes are very difficult to achieve.
As mentioned before, we have a new back-end system in development for future OVH machines. This back-end will be based in Linux and will open up a lot of potential with managing a server (e.g. we have full control now). OVH is currently expanding into Vint Hill, VA, US and I believe that location will be worth trying out.
While forms such as player/admin reports, admin applications, ban appeals and so on should be our main priority, they are a very difficult task to achieve. We have a few ways to approach this, although we are currently unsure which would be best. I will be talking to Denros about this to see which option suits us. Other than that, I plan on creating a list of the server machines we have and assigning each server (from the Server List) to a machine (this will tell users what hardware the server they play on runs on). Developing a Player Time Tracker would also help get rid of inactive admins and such.
With the recent BoD changes, I feel our BoD team has strengthened. Many Trusted members have been questioning the existence of the BoD rank and why they can’t vote on future BoD promotions. As for the first question, I do not feel comfortable with giving the Trusted group as much access as the BoD. While yes, they are “Trusted”, the BoD has a lot more access than them. As for the second question, each BoD member not only requires trust from GFL but from me personally as well. BoD promotions will continue to be discussed between BoD members.
Finally, I will be going on vacation from October 22nd to October 29th. Although this is very short notice, I will still have my laptop and will be able to do things in case of an emergency. I will also be starting my new job after my vacation, resulting in me becoming less active in GFL. I still feel the new BoD team should be able to keep GFL up and running fine.
Conclusion
To conclude, there have definitely been a lot of changes to GFL. In my opinion, we have been slowly strengthening, but with a more active BoD team, we should be able to strengthen much quicker. While there are still many things to work on and discuss, I feel focusing on the topics in this post will benefit us.
Thanks for reading! -
Also, our CS:GO servers are crashing with SourceMod loaded. We will have to wait until the SourceMod developers fix the issue.
Yet again, Valve took a feature that worked great in their past games and broke it in CS:GO.
Thanks.
-
They have time to add "graffiti" boxes but no time to fix community servers. Pathetic.
Thanks.
-
Hello, I am making this thread to post progress on official admin/player reports, ban appeals, and admin applications. As of right now, the only way we can submit player reports and ban appeals is through SourceBans, and even then, I feel that won't suit our needs.
Recently, I've taken the time to learn some things about the IPS 4 API and put them into use by creating a top donors list and a donations list. After running into two big issues, this took about a day and a half straight (excluding sleep) to make. Honestly, I was hoping I could get them done without taking over a day, but hey, it was successful!
My next plan is to tackle these forms (e.g. player reports, etc.) with the IPS 4 API. This is a much more "advanced" task. Why? Well, let's see what I have to learn:
- Forms - After doing some testing yesterday, this doesn't seem all that difficult.
- Permissions - I need to look into this but it seems a bit tricky.
- Displaying Applications - Will have to see how "topics" are displayed.
- Comments/Posts - Allow users to post comments/posts on the application.
- Other - Other small things such as automation for bans, Member Application tools (if we do add this to the system, we can just use what Denros did).
Honestly, I think the biggest pain about this whole thing is that there is no IPS 4 documentation for application development. Therefore, I have to look at the system files (considered the "API") and work out how to use them through code. The only documentation IPS 4 does have that I can use covers CSS and JavaScript.
Now, let's get to the "features" list for each application.
Player/Admin Reports
- Forms - Your Steam ID, (Player|Admin)'s Steam ID, Server (automated), Map, Time, Explanation, and Proof.
- (Advanced|Player) Make a ban button that will automatically ban the user through SourceBans along with closing the report if the moderator is an admin on the server.
- (Advanced) Make a "Support" button and a "No Support" button for users that want to "support" the ban or "not support" the ban.
- (Advanced) If the report is valid and the admin/rule-breaker gets banned/demoted, perhaps give the reporter +x reputation on the forums? This could encourage players to report rule-breakers on the forum and get awarded if they do it right.
- Comments/Posts allowed.
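Whichever implementation route we take, the form fields above need server-side validation. A hedged sketch of what that might look like; the field names and the legacy SteamID pattern are my own assumptions for illustration:

```python
import re

# Legacy SteamID format: STEAM_X:Y:Z, where Y is 0 or 1 and Z is the account number.
STEAMID_RE = re.compile(r"^STEAM_[0-5]:[01]:\d+$")

def validate_report(form):
    """Return a list of missing or malformed fields in a player-report form.

    `form` is a dict keyed by field name (names here are illustrative,
    not the actual IPS 4 field keys).
    """
    errors = []
    for field in ("reporter_steamid", "reported_steamid"):
        if not STEAMID_RE.match(form.get(field, "")):
            errors.append(f"{field}: not a valid SteamID")
    if not form.get("explanation", "").strip():
        errors.append("explanation: required")
    return errors
```

Rejecting malformed Steam IDs up front matters especially for the "advanced" ban button, since an automated SourceBans ban issued against a typo'd ID would silently miss the rule-breaker.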
Ban Appeals
- Forms - Steam ID, Server (automated), When you were banned, Why you were banned, and Why do you want to be un-banned.
- (Advanced) Make an "un-ban" button that will automatically un-ban the user if they're manager for the server selected.
- (Advanced) Make a "Support" button and a "No Support" button for users that want to "support" the un-ban or "not support" the un-ban.
- Comments/Posts allowed.
Admin Application
- Forms - Your Steam ID, Server (automated), Why you want to be admin, and How much time have you spent on the server.
- (Advanced) Make a "Support" button and a "No Support" button for users that want to "support" the application or "not support" the application. This would be WITH a post/comment (with maybe 30 characters minimum length?). Therefore, admins and such can't just "+1" an application without explaining why.
- Comments/Posts allowed.
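The vote-with-comment rule could be enforced with something like this sketch; the 30-character minimum comes from the suggestion above, and everything else (names, return shape) is illustrative:

```python
MIN_COMMENT_LENGTH = 30  # suggested minimum so votes carry an explanation

def cast_vote(support, comment):
    """Record a Support/No Support vote, rejecting votes without a real comment.

    Returns a (vote, comment) tuple, or raises ValueError so that bare
    "+1" posts can't count as votes on an application.
    """
    if len(comment.strip()) < MIN_COMMENT_LENGTH:
        raise ValueError(
            f"comment must be at least {MIN_COMMENT_LENGTH} characters"
        )
    return ("support" if support else "no support", comment.strip())
```

Tying the button to this check in the back end means the forum UI can stay simple: the button just refuses to submit until the comment box has enough substance.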
Advanced = hard to implement; these will only be implemented if requested.
I will make posts on this topic explaining the progress made in the future. I hope to make things like Media/GFX applications possible and easy to implement as well.
P.S. I don't know how much time I'll have to work on this. Therefore, I cannot give an ETA. I am going to become busy in the next few months.
P.S. #2 - Also, we "could" technically buy a "Forms" addon from the IPS 4 marketplace, but it would cost $40.00. Honestly, I would prefer making the forms from scratch rather than spending $40.00, especially in the state we're currently in.
Thanks.
-
1 hour ago, Major_Push said:
Hate to disagree with you on 2 things but I feel like they're worth reiterating. It's going to sound a bit whiny since it's something I've been experiencing a bit this week.
Anyway, the first thing is that we definitely can do more to populate our servers.
One thing that I've brought up before is holding admins more accountable for playing on their respective servers or else they risk losing their admin. Currently, some admins can not play on their server for months without being demoted and that's honestly not a good way to run a server. Because then the admins aren't going to feel like they need to play. Obviously, it's hard to keep track of them all the time, and I understand that it's really draining to try populating a server when it seems like your efforts are in vain, but we have to make an effort to hold admins more accountable. Also, it would help if everyone wasn't trying to populate their own server and only their own server.
This is going to sound a bit whiny, but it's hard for me to get people on DB when some of them have to be populating their own server, whether that be ZM, Purge, JB, etc. The problem is that our resources are spread thin and we can't make a concentrated attempt.
The way I think of our servers is that they're like an organism "budding" to make more versions of itself. We start off with 1 server and populate that one. After that one's populated, we can take some of our new found players and use them to "bud" off and make a 2nd server which we then populate. By the end of the process we should have 2 healthy servers which can both bud off to help populate a 3rd and/or 4th server. Afterwards we should be able to bud off more and more and more. The way I see our current process is that we have a few good servers that can bud off properly to make new servers, but they tend to come off premature and wither and die, which doesn't help us grow. ie. We get a new server that ends up wasting our resources but dies off. Which is why I always say that we should slow down on trying to make new servers when we have old ones that need work. If ALL of our servers were at the level that ZE's at right now then we'd be a powerhouse and could be making new servers left and right. But right now we've got a bunch of servers that don't have the necessary attention to succeed. We could/should cut some of the dead servers.
I guess I'll defend db a bit right now since I'm sort of advocating cutting it. DB is currently one of the only two decoy dodgeball servers that exist in the world. The other one is running 1 map 24/7 so it doesn't even really count. So in reality, we're the only db server that exists right now in CSGO. So it shouldn't be too hard to populate once I make the Reddit post at the end of this month.
When we make a new server it shouldn't just kind of release one day. A server being released should be an event and not just something that occurs. I'd say that we should do at most 1 new server a month. Or at the very least, we shouldn't make any new servers while we have a server that's struggling. There's no point in releasing 2~3 new servers in the same month just to have them compete for attention.
And this brings us into my 2nd disagreement. And that's that we're NOT a strong community. We're actually a rather weak community, imo. Admins from 1 server almost never want to help out admins from another server. And pretty much everyone's in their own clique which creates a very weak sense of community. If we were in fact a strong community then we wouldn't have had things like the CSS division leaving and we wouldn't have as many dead/dying servers as we do. The GMod division, TF2 Division, CSGO Division, and what is left of the CSS division could all break away 1 day and it wouldn't bother anyone else in any other division. The old CSS division was admittedly a bit of an outlier in that they were conservative in their views on how a server should be run compared to the rest of the community. ie. Let the players do more or less what they please.
And the problem even goes down a bit worse into server admins not caring about servers within their own division. Earlier this month we had a post by a GMod Admin about how the GMod admins had problems working together. Some of the CSGO admins probably laugh it off like "oh, that's just because they're GMod admins" but that's honestly hypocritical. Within the CSGO division, we have a lot of dead servers which get no love from their fellow servers. There's a huge disconnect between a lot of the admins from different servers so I honestly don't think we're a strong community. We're a weak one and we need to be a lot more cooperative or else the community won't be able to bud off any new good servers and eventually all of our good servers will fail until we're down to our strongest 2 servers (ZE and TTT).
I, for one, always do my part in helping populate new servers, and I'd like to see our current admin team get more involved with that too. Obviously they have to take care of their own servers (especially if they're dying), but the way things currently are is not something I'd describe as strong.
I posted quite a few solutions on another thread, but almost no one replied to it, which is quite sad since I think some of the things I listed could be quite beneficial. Once I get DB populated, hopefully by the end of this year, I'm going to go back to helping populate our servers.
tl;dr: We can do A LOT more to populate our servers despite how shitty Valve has been. I don't think we should use the "it's Valve's fault" excuse to explain why most of our servers are dying when there are a lot of internal failures. And we're NOT a strong community. We're pathetically weak as a community, and part of that has to do with the fact that admins from one server don't care about the admins on another. I listed quite a few things that I think can help here: https://gflclan.com/forums/topic/5372-some-suggestions/
We kind of have a crab mentality (https://en.wikipedia.org/wiki/Crab_mentality). We all want OUR server to succeed but don't want to help out the other servers because it doesn't immediately help us, and since everyone thinks this, none of our servers get populated. And just like those crabs aren't a community, neither are we. We need to help each other out, and then when we need help, we'll get it. This will require EVERYONE to work together.
If we don't step up as members of this community and become a stronger one, we're going to crash and burn eventually. And I'd really hate to see that.
I know it sounds hypocritical, but mind you, I've been saying this for at least two years now.
I apologize for not responding to your suggestions thread; while reading this post I remembered I was supposed to read it. I will read it soon (as long as nothing important comes up again).
While I do agree with a lot of the things you're saying, I do believe you highly underestimate how difficult it is to "keep" a CS:GO server populated. Yes, short-term populating isn't an issue, especially with ~5-10 GFL members on the server, but keeping it populated without those ~5-10 GFL members is the difficult part. I guarantee you that if we cheaped out and put weapon skins on our servers, they would immediately receive a ton of population (but no, we will NOT do that, since we're a fair community). With that said, I do believe having admins and such help on other servers will help us, but if they don't, that wouldn't make them a bad admin. As long as they are actively helping the server(s) they are assigned to (e.g. playing on them daily, etc.), I don't have any problems.
I'm not going to go into much more. But like I said before, I do agree with you that admins need to start actually playing on their servers. Once we get a nice admin tracking system, I hope this becomes easier. With that said, if there are any admins who provably haven't played on their server for over a month without notice, I'll gladly demote them from that specific server (though it SHOULD be the Server Manager's job).
Thanks.
-
I apologize if the post seems kinda... bumpy? It's late and I'm honestly tired. I've re-read it and I believe I got the point across.
P.S. Yes, my sleeping schedule is messed up again. Good game...
Thanks.
-
- Popular Post
Hello everybody, I just wanted to give a smallish update on what's happening with me over the upcoming weeks. Basically, my summer job ended on the 9th of this month. I immediately went looking for new jobs and fortunately found one. While I'm not actually working yet (I still have to go through orientation and training), I will be soon. This job is part-time, which will leave me with extra time to work on GFL, although I will also be focusing on other things in life, such as setting myself up to go to college (this upcoming winter) and getting certifications that could help me find another job in the future. Once school starts, I really hope I can still find time for GFL.
With that said, I would like to talk about the last few months. Ever since the series of negative events occurred, I've been spending A LOT of time working on GFL. Since security has become such a big factor in how I run GFL, I've been trying to do as many things as I can without giving access out, and it will stay this way. Now, you're probably expecting me to complain about the work I'm doing, but that's not the case. The things I've been doing with GFL recently have benefited me a lot, and I honestly enjoy them. The only thing I can complain about is the fact that a majority of our servers are dying, but sadly, there's not much we can do for a majority of the games we're suffering in (all I can say is I've tried my absolute best, Valve).
Recently, I've been very stressed out since many decisions have popped up (probably more than you think). I've been thinking about this for a long time, and in the next couple of months I'm not going to be able to put as much time into GFL as I have in the past four months. That's why I am going to focus on strengthening our current teams, in the hope that future decisions are made smoothly. I want to eventually get to the point where I am only doing the back-end work while the Board of Directors, Division Leaders, and Community Advisers make all the decisions that benefit the community (yes, this includes coming up with ideas). I hope to be considered a "highly trusted" Technical Administrator by then (and no, I won't be "stepping down" either).
Anyway, other than that, I've been spending a lot of time building the Linux back-end of a possible system we'll use when we slowly move to OVH. There are many other things on my to-do list as well, and yes, I'm aware of the long delays I'm causing (e.g. @Amelie needing access to our social media accounts). I'll try to get that all done (so many things at once).
Lastly, I would just like to give a big thank you to all the people who have contributed to GFL in the past four months. It has been a very, very rough ride, and many gaming communities wouldn't have survived it. While we still need to improve in some areas (e.g. getting more donations, completing the IPS 4 application, etc.) and a majority of our servers are suffering in population, I would say our back-end is stronger than ever. Once we get a ticket system, issues shouldn't go unnoticed. I'm happy to say that GFL is indeed a "strong" community.
If you have any questions/concerns, feel free to reply with them!
Again, thanks!
-
13 minutes ago, Major_Push said:
Wasn't he the guy who helped figure out how to fix the ze knockback?
Yes, he was. He created the plugin that patched it.
Thanks.
-
- Popular Post
Hello, last week Peace-Maker was helping me with an issue and I linked him a thread from GFL's forums. After viewing it, he went to our Surf RPG DM forum section and discovered that players wanted some things implemented in the RPG plugin. For those of you who don't know, Peace-Maker is currently the developer of the RPG plugin we use for our servers (CS:GO/CS:S Surf DeathMatch).
He wanted me to let everybody know that players can suggest things on the SM:RPG GitHub repository (under the "Issues" tab). If you have ideas to make the RPG plugin better, please post them there!
P.S. This also applies to CS:S Surf RPG DM.
Thanks.
-
- Popular Post
-
- Popular Post
I'm a little late to reply, but whatever (yeah, ban me for necro'ing!).
Infantry, you have always been a great person, and I am happy to have you as a friend. I'm very sad to see you go, and I wish you could have stayed in the community longer. But I am glad you are taking the steps to improve your life and future.
I wish you the best of luck in the future and if you ever need anything, I will always be here for you!
Farewell, Infantry
-
- Popular Post
Congratulations to both of you! I believe you both will do well.
There are many more decisions to make in the next few days and I'm honestly excited for the future!
Thanks.
-
Also leaving these test results here, generated with the threadpool_run_tests console command...
Linux
Testing 2 threads:
CTSQueue test: single thread push/pop, in order... pass
CTSQueue test: single thread push/pop, interleaved... pass
CTSQueue test: sequential push, multithread pop, no affinity...pass
CTSQueue test: single thread push, multithread pop, no affinity...pass
CTSQueue test: multithread push, sequential pop, no affinity...pass
CTSQueue test: multithread push, single thread pop, no affinity...pass
CTSQueue test: multithread push, multithread pop, no affinity...pass
CTSQueue test: multithread interleaved push/pop, no affinity...pass
CTSQueue test: sequential push, multithread pop, distributed...pass
CTSQueue test: single thread push, multithread pop, distributed...pass
CTSQueue test: multithread push, sequential pop, distributed...pass
CTSQueue test: multithread push, single thread pop, distributed...pass
CTSQueue test: multithread push, multithread pop, distributed...pass
CTSQueue test: multithread interleaved push/pop, distributed...pass
Testing 4 threads:
CTSQueue test: single thread push/pop, in order... pass
CTSQueue test: single thread push/pop, interleaved... pass
CTSQueue test: sequential push, multithread pop, no affinity...pass
CTSQueue test: single thread push, multithread pop, no affinity...pass
CTSQueue test: multithread push, sequential pop, no affinity...pass
CTSQueue test: multithread push, single thread pop, no affinity...pass
CTSQueue test: multithread push, multithread pop, no affinity...pass
CTSQueue test: multithread interleaved push/pop, no affinity...pass
CTSQueue test: sequential push, multithread pop, distributed...pass
CTSQueue test: single thread push, multithread pop, distributed...pass
CTSQueue test: multithread push, sequential pop, distributed...pass
CTSQueue test: multithread push, single thread pop, distributed...pass
CTSQueue test: multithread push, multithread pop, distributed...pass
CTSQueue test: multithread interleaved push/pop, distributed...pass
Tests done, purging test memory...done
Testing 2 threads:
CTSList test: single thread push/pop, in order... pass
CTSList test: single thread push/pop, interleaved... pass
CTSList test: sequential push, multithread pop, no affinity...pass
CTSList test: single thread push, multithread pop, no affinity...pass
CTSList test: multithread push, sequential pop, no affinity...pass
CTSList test: multithread push, single thread pop, no affinity...pass
CTSList test: multithread push, multithread pop, no affinity...pass
CTSList test: multithread interleaved push/pop, no affinity...pass
CTSList test: sequential push, multithread pop, distributed...pass
CTSList test: single thread push, multithread pop, distributed...pass
CTSList test: multithread push, sequential pop, distributed...pass
CTSList test: multithread push, single thread pop, distributed...pass
CTSList test: multithread push, multithread pop, distributed...pass
CTSList test: multithread interleaved push/pop, distributed...pass
Testing 4 threads:
CTSList test: single thread push/pop, in order... pass
CTSList test: single thread push/pop, interleaved... pass
CTSList test: sequential push, multithread pop, no affinity...pass
CTSList test: single thread push, multithread pop, no affinity...pass
CTSList test: multithread push, sequential pop, no affinity...pass
CTSList test: multithread push, single thread pop, no affinity...pass
CTSList test: multithread push, multithread pop, no affinity...pass
CTSList test: multithread interleaved push/pop, no affinity...pass
CTSList test: sequential push, multithread pop, distributed...pass
CTSList test: single thread push, multithread pop, distributed...pass
CTSList test: multithread push, sequential pop, distributed...pass
CTSList test: multithread push, single thread pop, distributed...pass
CTSList test: multithread push, multithread pop, distributed...pass
CTSList test: multithread interleaved push/pop, distributed...pass
Tests done, purging test memory...done
ThreadPoolTest: Job distribution speed
ThreadPoolTest: NOT to completion
ThreadPoolTest: Non-distribute
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 4000 (4000) jobs processed in 14.540510ms, 0.000121ms to suspend (0.003635/0.003635) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 13.827848ms, 0.000099ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 14.112061ms, 0.000084ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 14.249906ms, 0.000251ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 13.986922ms, 0.000286ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 14.567649ms, 0.000147ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 15.044790ms, 0.000126ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 14.242330ms, 0.000182ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 14.867997ms, 0.000244ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 15.650580ms, 0.000271ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 16.162694ms, 0.000249ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 16.047820ms, 0.000300ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep -10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 15.693098ms, 0.000715ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 14.159820ms, 0.000446ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 15.308517ms, 0.000598ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 16.376081ms, 0.000580ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 16.324083ms, 0.000509ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 15.023115ms, 0.000738ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 16.405169ms, 0.000435ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 15.050575ms, 0.000608ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 16.127151ms, 0.000574ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 15.881634ms, 0.000737ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 15.840338ms, 0.000614ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 17.741101ms, 0.008225ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Distribute
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 4000 (4000) jobs processed in 15.756236ms, 0.000157ms to suspend (0.003939/0.003939) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 14.176467ms, 0.000194ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 14.327793ms, 0.000212ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 15.769738ms, 0.000132ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 17.511458ms, 0.000231ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 15.507633ms, 0.000292ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 14.526792ms, 0.000224ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 15.105385ms, 0.000183ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 14.910221ms, 0.000225ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 14.677172ms, 0.000201ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 15.899655ms, 0.000304ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 15.825037ms, 0.000118ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep -10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 14.325932ms, 0.000465ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 15.054373ms, 0.000512ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 15.531627ms, 0.000666ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 15.563478ms, 0.000575ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 15.853493ms, 0.000647ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 15.481148ms, 0.000465ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 15.447769ms, 0.000675ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 15.586225ms, 0.000913ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 14.988083ms, 0.000714ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 16.306920ms, 0.001047ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 17.797023ms, 0.000833ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 16.499123ms, 0.000569ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: NO Sleep
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 4000 (4000) jobs processed in 0.336548ms, 0.000053ms to suspend (0.000084/0.000084) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025678ms, 0.000048ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025391ms, 0.000053ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.025699ms, 0.000059ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.025638ms, 0.000050ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025435ms, 0.000058ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025565ms, 0.000054ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.025440ms, 0.000056ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.025521ms, 0.000051ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025518ms, 0.000054ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025482ms, 0.000054ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.025716ms, 0.000054ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep -10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.025494ms, 0.000205ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025543ms, 0.000223ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025325ms, 0.000229ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.025842ms, 0.000217ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.038731ms, 0.000298ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.040394ms, 0.000448ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.027510ms, 0.000262ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.025457ms, 0.000243ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.025454ms, 0.000226ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025776ms, 0.000207ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025562ms, 0.000218ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.025942ms, 0.000220ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Distribute NO Sleep
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 4000 (4000) jobs processed in 0.328272ms, 0.000091ms to suspend (0.000082/0.000082) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.039734ms, 0.000073ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025598ms, 0.000053ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.032157ms, 0.000048ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.025579ms, 0.000053ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025822ms, 0.000056ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025317ms, 0.000050ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.025576ms, 0.000049ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.025742ms, 0.000055ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025706ms, 0.000055ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025742ms, 0.000055ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.025641ms, 0.000055ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep -10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.025714ms, 0.000202ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025558ms, 0.000226ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025592ms, 0.000229ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.029938ms, 0.000220ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.025629ms, 0.000214ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025532ms, 0.000246ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.025746ms, 0.000236ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.038387ms, 0.000313ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.025794ms, 0.000250ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.025718ms, 0.000234ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.026331ms, 0.000221ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.026618ms, 0.000213ms to suspend (inf/inf) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: To completion
ThreadPoolTest: Non-distribute
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 4000 (4000) jobs processed in 15.725574ms, 0.000155ms to suspend (0.003931/0.003931) [ (main) 4000, 0, 0, 0, 0, 0, 0, 0, 0]
Windows
Testing 2 threads:
CTSQueue test: single thread push/pop, in order... pass
CTSQueue test: single thread push/pop, interleaved... pass
CTSQueue test: sequential push, multithread pop, no affinity...pass
CTSQueue test: single thread push, multithread pop, no affinity...pass
CTSQueue test: multithread push, sequential pop, no affinity...pass
CTSQueue test: multithread push, single thread pop, no affinity...pass
CTSQueue test: multithread push, multithread pop, no affinity...pass
CTSQueue test: multithread interleaved push/pop, no affinity...pass
CTSQueue test: sequential push, multithread pop, distributed...pass
CTSQueue test: single thread push, multithread pop, distributed...pass
CTSQueue test: multithread push, sequential pop, distributed...pass
CTSQueue test: multithread push, single thread pop, distributed...pass
CTSQueue test: multithread push, multithread pop, distributed...pass
CTSQueue test: multithread interleaved push/pop, distributed...pass
Testing 4 threads:
CTSQueue test: single thread push/pop, in order... pass
CTSQueue test: single thread push/pop, interleaved... pass
CTSQueue test: sequential push, multithread pop, no affinity...pass
CTSQueue test: single thread push, multithread pop, no affinity...pass
CTSQueue test: multithread push, sequential pop, no affinity...pass
CTSQueue test: multithread push, single thread pop, no affinity...pass
CTSQueue test: multithread push, multithread pop, no affinity...pass
CTSQueue test: multithread interleaved push/pop, no affinity...pass
CTSQueue test: sequential push, multithread pop, distributed...pass
CTSQueue test: single thread push, multithread pop, distributed...pass
CTSQueue test: multithread push, sequential pop, distributed...pass
CTSQueue test: multithread push, single thread pop, distributed...pass
CTSQueue test: multithread push, multithread pop, distributed...pass
CTSQueue test: multithread interleaved push/pop, distributed...pass
Testing 8 threads:
CTSQueue test: single thread push/pop, in order... pass
CTSQueue test: single thread push/pop, interleaved... pass
CTSQueue test: sequential push, multithread pop, no affinity...pass
CTSQueue test: single thread push, multithread pop, no affinity...pass
CTSQueue test: multithread push, sequential pop, no affinity...pass
CTSQueue test: multithread push, single thread pop, no affinity...pass
CTSQueue test: multithread push, multithread pop, no affinity...pass
CTSQueue test: multithread interleaved push/pop, no affinity...pass
CTSQueue test: sequential push, multithread pop, distributed...pass
CTSQueue test: single thread push, multithread pop, distributed...pass
CTSQueue test: multithread push, sequential pop, distributed...pass
CTSQueue test: multithread push, single thread pop, distributed...pass
CTSQueue test: multithread push, multithread pop, distributed...pass
CTSQueue test: multithread interleaved push/pop, distributed...pass
Tests done, purging test memory...done
Testing 2 threads:
CTSList test: single thread push/pop, in order... pass
CTSList test: single thread push/pop, interleaved... pass
CTSList test: sequential push, multithread pop, no affinity...pass
CTSList test: single thread push, multithread pop, no affinity...pass
CTSList test: multithread push, sequential pop, no affinity...pass
CTSList test: multithread push, single thread pop, no affinity...pass
CTSList test: multithread push, multithread pop, no affinity...pass
CTSList test: multithread interleaved push/pop, no affinity...pass
CTSList test: sequential push, multithread pop, distributed...pass
CTSList test: single thread push, multithread pop, distributed...pass
CTSList test: multithread push, sequential pop, distributed...pass
CTSList test: multithread push, single thread pop, distributed...pass
CTSList test: multithread push, multithread pop, distributed...pass
CTSList test: multithread interleaved push/pop, distributed...pass
Testing 4 threads:
CTSList test: single thread push/pop, in order... pass
CTSList test: single thread push/pop, interleaved... pass
CTSList test: sequential push, multithread pop, no affinity...pass
CTSList test: single thread push, multithread pop, no affinity...pass
CTSList test: multithread push, sequential pop, no affinity...pass
CTSList test: multithread push, single thread pop, no affinity...pass
CTSList test: multithread push, multithread pop, no affinity...pass
CTSList test: multithread interleaved push/pop, no affinity...pass
CTSList test: sequential push, multithread pop, distributed...pass
CTSList test: single thread push, multithread pop, distributed...pass
CTSList test: multithread push, sequential pop, distributed...pass
CTSList test: multithread push, single thread pop, distributed...pass
CTSList test: multithread push, multithread pop, distributed...pass
CTSList test: multithread interleaved push/pop, distributed...pass
Testing 8 threads:
CTSList test: single thread push/pop, in order... pass
CTSList test: single thread push/pop, interleaved... pass
CTSList test: sequential push, multithread pop, no affinity...pass
CTSList test: single thread push, multithread pop, no affinity...pass
CTSList test: multithread push, sequential pop, no affinity...pass
CTSList test: multithread push, single thread pop, no affinity...pass
CTSList test: multithread push, multithread pop, no affinity...pass
CTSList test: multithread interleaved push/pop, no affinity...pass
CTSList test: sequential push, multithread pop, distributed...pass
CTSList test: single thread push, multithread pop, distributed...pass
CTSList test: multithread push, sequential pop, distributed...pass
CTSList test: multithread push, single thread pop, distributed...pass
CTSList test: multithread push, multithread pop, distributed...pass
CTSList test: multithread interleaved push/pop, distributed...pass
Tests done, purging test memory...done
ThreadPoolTest: Job distribution speed
ThreadPoolTest: NOT to completion
ThreadPoolTest: Non-distribute
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (3) jobs processed in 0.714909ms, 0.011514ms to suspend (0.238303/1.#INF00) [ (main) 0, 3, 0, 0, 0, 0, 0, 0, 0]
Attempted to add job to job queue that has already been completed
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.282616ms, 0.018259ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.100116ms, 0.025955ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.102533ms, 0.036134ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.104017ms, 0.009464ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.098194ms, 0.016952ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.100315ms, 0.026059ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.102507ms, 0.036015ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.095878ms, 0.009671ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.097990ms, 0.017121ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.100266ms, 0.026089ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.102538ms, 0.038719ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep -10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.094030ms, 0.011120ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.093972ms, 0.019758ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.099419ms, 0.027741ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.094244ms, 0.037491ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.094052ms, 0.011125ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.094116ms, 0.019957ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.094111ms, 0.027997ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.094046ms, 0.037634ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.094015ms, 0.011246ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.093980ms, 0.020889ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.093875ms, 0.027756ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.094134ms, 0.037158ms to suspend (1.#INF00/1.#INF00) [ (main) 3, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Distribute
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (1) jobs processed in 0.550529ms, 0.006431ms to suspend (0.550529/1.#INF00) [ (main) 0, 1, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.097904ms, 0.016897ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.100067ms, 0.026628ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.102503ms, 0.035878ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.095783ms, 0.006421ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.098046ms, 0.017110ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.100425ms, 0.026308ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.111229ms, 0.036234ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.095566ms, 0.006104ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.097960ms, 0.016767ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.100096ms, 0.026755ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.106062ms, 0.036583ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep -10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.095994ms, 0.012179ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.093865ms, 0.021252ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.094874ms, 0.028252ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.093904ms, 0.038102ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.093820ms, 0.012261ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.093909ms, 0.041370ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.094051ms, 0.028115ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.093982ms, 0.037663ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.094091ms, 0.012820ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.093905ms, 0.021282ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.093912ms, 0.028124ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.094206ms, 0.037462ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: NO Sleep
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (1) jobs processed in 0.541596ms, 0.006381ms to suspend (0.541596/1.#INF00) [ (main) 0, 1, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.038105ms, 0.062844ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.084418ms, 0.043070ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.069527ms, 0.036421ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.035457ms, 0.009688ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.037990ms, 0.017023ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.040000ms, 0.026354ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.066583ms, 0.035821ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.035517ms, 0.009769ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.064824ms, 0.016824ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.086525ms, 0.044440ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.127173ms, 0.051773ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep -10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.033628ms, 0.011283ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.050328ms, 0.020374ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.033526ms, 0.100385ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.033623ms, 0.148395ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.033636ms, 0.011352ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.033679ms, 0.051808ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.033638ms, 0.056821ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.033681ms, 0.063080ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.033602ms, 0.011355ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.033709ms, 0.021385ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.033689ms, 0.074278ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.033511ms, 0.200861ms to suspend (1.#INF00/1.#INF00) [ (main) 1, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Distribute NO Sleep
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.564103ms, 0.053300ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.204297ms, 0.132525ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.089459ms, 0.186706ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.089419ms, 0.068286ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.035572ms, 0.009654ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.066074ms, 0.017006ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.039890ms, 0.026672ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.042537ms, 0.036192ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.035367ms, 0.009810ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.037723ms, 0.016789ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.040022ms, 0.026132ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.077053ms, 0.035659ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep -10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.033547ms, 0.012471ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.033750ms, 0.020155ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.033714ms, 0.070854ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.033741ms, 0.092625ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 0, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.033574ms, 0.021567ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.053227ms, 0.061745ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.033537ms, 0.073486ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.033719ms, 0.076525ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: Testing! Sleep 10, interleave 1, prioritized 0
ThreadPoolTest: 1 threads -- 0 (0) jobs processed in 0.033633ms, 0.062313ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.033714ms, 0.061330ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 5 threads -- 0 (0) jobs processed in 0.033664ms, 0.049557ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: 7 threads -- 0 (0) jobs processed in 0.033729ms, 0.063686ms to suspend (1.#INF00/1.#INF00) [ (main) 0, 0, 0, 0, 0, 0, 0, 0, 0]
ThreadPoolTest: To completion
ThreadPoolTest: Non-distribute
ThreadPoolTest: Testing! Sleep -10, interleave 0, prioritized 0
ThreadPoolTest: 1 threads -- 4000 (4000) jobs processed in 5.049464ms, 0.024500ms to suspend (0.001262/0.001268) [ (main) 0, 4000, 0, 0, 0, 0, 0, 0, 0]

Thanks.
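As a side note, the timing lines in the log above follow a fixed shape, so they can be extracted with a small script if you want to compare runs. This is just my own sketch based on the line format shown, not a tool from the engine or the post:

```python
import re

# Line shape assumed from the samples above, e.g.:
#   ThreadPoolTest: 3 threads -- 0 (0) jobs processed in 0.282616ms, 0.018259ms to suspend ...
LINE_RE = re.compile(
    r"ThreadPoolTest: (?P<threads>\d+) threads -- "
    r"(?P<jobs>\d+) \((?P<total>\d+)\) jobs processed in "
    r"(?P<ms>[\d.]+)ms, (?P<suspend_ms>[\d.]+)ms to suspend"
)

def parse_line(line):
    """Return (threads, jobs, total, ms, suspend_ms), or None if the
    line is not a ThreadPoolTest timing line."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    return (int(m["threads"]), int(m["jobs"]), int(m["total"]),
            float(m["ms"]), float(m["suspend_ms"]))

sample = ("ThreadPoolTest: 1 threads -- 4000 (4000) jobs processed in "
          "5.049464ms, 0.024500ms to suspend (0.001262/0.001268)")
print(parse_line(sample))  # -> (1, 4000, 4000, 5.049464, 0.0245)
```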
-
Update
I've discovered some commands with very interesting results. Basically, what I am trying to do right now is limit my Windows server to only one thread. However, the only way I can do that is with -threads 1 on the command line. This is not the case with Linux (keep reading!).
Windows configuration
sv_stressbots 0
bot_quota 34
bot_difficulty 2
mp_limitteams 0
net_splitrate 4
sm_cvar net_maxcleartime 0.001
sv_maxupdaterate 256
sm_cvar sv_maxcmdrate 256
sv_minupdaterate 256
sv_mincmdrate 256
sv_occlude_players 0
occlusion_test_async 0
bot_join_after_player 0
sm_cvar bot_flipout 1
sm_cvar net_queued_packet_thread 0
sm_cvar threadpool_affinity 0
sm_cvar host_thread_mode 0
mat_queue_mode 0

Linux configuration
sv_stressbots 0
bot_quota 34
bot_difficulty 2
mp_limitteams 0
net_splitrate 4
sm_cvar net_maxcleartime 0.001
sv_maxupdaterate 256
sm_cvar sv_maxcmdrate 256
sv_minupdaterate 256
sv_mincmdrate 256
sv_occlude_players 0
occlusion_test_async 0
bot_join_after_player 0
sm_cvar bot_flipout 1
sm_cvar net_queued_packet_thread 1
sm_cvar threadpool_affinity 1
sm_cvar host_thread_mode 2
mat_queue_mode 2The truth is, none of those commands on Windows limits the server to one thread while on my Linux server, if I set occlusion_test_async to 0, it can't use more than one thread.
After looking through my old ConVar list for CS:GO, I discovered two commands with very interesting results:
- threadpool_reserve - Reserves x threads. This is essentially threadpool_cycle_reserve, except it takes an argument (x) declaring how many extra threads should be reserved.
- threadpool_cycle_reserve - Toggles reserving all of the "extra" threads (e.g. occlusion and networking).
From what I've tested, these commands effectively disable the extra threads beyond the main thread. With threadpool_reserve I can specify how many extra threads to disable, whereas threadpool_cycle_reserve disables and re-enables all of them at once (a toggle).
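To make that behavior concrete, here is a toy model of the two commands as I understand them. This is my own sketch, not engine code; the names just mirror the cvars:

```python
class ThreadPoolModel:
    """Toy model (my own, not engine code) of the reserve behavior
    described above for a Source server's extra worker threads."""

    def __init__(self, extra_threads):
        self.extra = extra_threads   # e.g. occlusion + networking threads
        self.reserved = 0

    def reserve(self, n):
        """threadpool_reserve <n>: reserve up to n of the extra threads."""
        self.reserved = min(n, self.extra)
        return self.reserved

    def cycle_reserve(self):
        """threadpool_cycle_reserve: toggle reserving ALL extra threads."""
        self.reserved = 0 if self.reserved else self.extra
        return self.reserved

# Mirrors the Linux log with occlusion_test_async 1: one extra thread,
# toggled on and then off again.
pool = ThreadPoolModel(extra_threads=1)
print(pool.cycle_reserve())  # 1 threads being reserved
print(pool.cycle_reserve())  # 0 threads being reserved
```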
So, I've done a lot of testing and I am going to share it here!
Testing - Linux Server
"threadpool_cycle_reserve" with "occlusion_test_async 0"
17:40:24 threadpool_cycle_reserve
17:40:24 0 threads being reserved
17:40:26 threadpool_cycle_reserve
17:40:26 0 threads being reserved

Result: Since occlusion_test_async is set to 0 (the occlusion thread being the only extra thread I am aware of here), there are no threads to reserve or unreserve.
"threadpool_cycle_reserve" with "occlusion_test_async 1"
17:42:05 threadpool_cycle_reserve
17:42:05 1 threads being reserved
17:42:10 threadpool_cycle_reserve
17:42:10 0 threads being reserved

Result: Since occlusion_test_async was set to 1, the first command reserved the so-called "occlusion" thread (not the "network" thread). Sending the command again unreserved it, which is why it then reports 0 threads reserved.
Testing - Windows Server
"threadpool_cycle_reserve" with "occlusion_test_async 0"
17:49:47 threadpool_cycle_reserve
2 threads being reserved
17:49:52 threadpool_cycle_reserve
0 threads being reserved

Result: This reserves TWO extra threads. I would assume that includes the occlusion thread along with the networking thread, though I have no proof. It does show that "occlusion_test_async 0" has absolutely no effect on Windows servers and doesn't "stop" an extra thread (as I explained at the beginning of the thread).
"threadpool_cycle_reserve" with "occlusion_test_async 1"
17:52:35 threadpool_cycle_reserve
2 threads being reserved
17:52:36 threadpool_cycle_reserve
0 threads being reserved

Result: The same result as with "occlusion_test_async 0" since, again, that command has no effect on Windows servers.
Conclusion
Linux
The "networking" thread doesn't appear to be starting in general. All the commands that encourages the server to support multi-threading aren't working with the networking thread. I will continue to look for more commands and see if they somehow "start" the networking thread.
Windows
There are clearly two extra threads being started, and the only way I can disable them is with -threads 1 on the command line. occlusion_test_async has no effect on Windows servers, while it does on Linux servers. I've also tried multiple commands that discourage multi-threading, and I cannot stop the networking thread without reserving the threads.
I've also done testing on CS:S; however, I haven't been able to get it to use the networking thread either.
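If you want to double-check thread counts on a Linux box yourself, you can read them straight from /proc. A minimal sketch (my own; it demonstrates on the current process since a server isn't assumed to be running, and "srcds_linux" is just the usual binary name):

```python
import os

def thread_count(pid):
    """Return the number of OS threads for a process by reading
    /proc/<pid>/status (Linux only)."""
    with open(f"/proc/{pid}/status") as status:
        for line in status:
            if line.startswith("Threads:"):
                return int(line.split()[1])
    raise RuntimeError(f"no Threads: line for pid {pid}")

# Demonstrated on the current process; for the game server you would
# pass the PID of srcds_linux instead.
print(thread_count(os.getpid()))
```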
Thanks.
-
- Popular Post
Changelog
- Fixed ATA_RollTheDice not loading (see below for the fix).
- Fixed the US server (along with any other servers on the machine without MySQL access) not connecting to MySQL.
ATA_RollTheDice Fix
The plugin broke after the latest CS:GO update because its BloodDrips signature became invalid, so I used IDA to derive a new signature:
\x55\x8B\xEC\x83\xE4\xF8\x83\xEC\x78\x56\x8B\xF1\x57\x80\xBE\x36\x02\x00\x00\x00
If you're a server operator and want to apply this fix, open sourcemod/gamedata/ata_rollthedice.games.txt and replace the "BloodDrips" signature with the one above (for Windows). For example:
"BloodDrips" { "library" "server" "windows" "\x55\x8B\xEC\x83\xE4\xF8\x83\xEC\x78\x56\x8B\xF1\x57\x80\xBE\x36\x02\x00\x00\x00" "linux" "@_Z15UTIL_BloodDripsRK6VectorS1_ii" }
Make sure to do this under the "csgo" section (I made the mistake of editing the "cstrike" section by accident while testing).
Full gamedata file example (working for CS:GO):
"Games" { "csgo" { "Offsets" { "SetModel" { "windows" "26" "linux" "27" } "OnTakeDamage" { "windows" "67" "linux" "68" } } "Signatures" { "SetModel" { "library" "server" "windows" "\x56\x8b\x74\x24\x08\x57\x8b\xf9\x2A\x2A\x2A\x2A\x2A\x2A\x8b\x01\x56\x2A\x2A\x2A\x2A\x2A\x2A\x2A\x2A\x2A\x8b\x11\x50\x2A\x2A\x2A\x85\xc0\x2A\x2A\x2A\x2A\x2A\x2A\x2A\x2A\x8b\x11\x50\x2A\x2A\x24\x83\xf8\x01" "linux" "@_ZN11CBaseEntity8SetModelEPKc" } "BloodDrips" { "library" "server" "windows" "\x55\x8B\xEC\x83\xE4\xF8\x83\xEC\x78\x56\x8B\xF1\x57\x80\xBE\x36\x02\x00\x00\x00" "linux" "@_Z15UTIL_BloodDripsRK6VectorS1_ii" } "BloodSpray" { "library" "server" "windows" "\x8B\x4C\x24\x0C\x83\xEC\x2A\x83\xF9\xFF\x0F\x2A\x2A\x2A\x2A\x2A\xD9\xEE\x33\xC0\xD9\x2A\x2A\x2A\x89\x2A\x2A\x2A\xD9" "linux" "@_Z15UTIL_BloodSprayRK6VectorS1_iii" } "TakeDamageInfo" { "library" "server" "windows" "\x83\xEC\x2A\x57\x8B\xF9\x8B\x0D\x2A\x2A\x2A\x2A\x85\xC9\x0F\x84\x2A\x2A\x2A\x2A\x8B\x11\x56\x8B" "linux" "@_ZN11CBaseEntity10TakeDamageERK15CTakeDamageInfo" } } } }
If you have any questions, please let me know!
Thanks.
-
Yes, I know, I use the "etc" statement far too much.
Don't hurt me, @Shuruia
I will correct the post and remove some of the useless "etc" statements.
Thanks.
VIP perks (posted in Closed)
I'm going to move this to the "Ideas" forum under "Administration". We definitely need to start making more money, and the first thing I would focus on is in-game/website perks. I have a few ideas that I will post later.
With that said, getting MOTD ads deployed on all Garry's Mod servers will be important. There are other things we will set up that will help as well (I will discuss these later).
Please give your input on this! I've seen many suggestions go unnoticed in the past ( @Major_Push @ the Supporter Give-Aways based on Ad revenue?), mostly due to me being a bit busy. But since we have (in my opinion) a much stronger BoD team than before, I think we can finally start getting things done.
Thanks.