Click the post title for a direct link

[#133] [Thu, 13 Jan 2022 21:06:13 CST][news]

The RSS feed has been reactivated (again) after much neglect. This time for good (srsly). Previously, the RSS file was one of many that had to be manually edited each time there was an update to report, each with its own minor quirks that had to be accounted for. Updating any of the projects in the freeware section, for example, meant editing the website's HTML file, the project's included files, the RSS feed, and any potential news entry.

Since I was already in the process of updating the site's backend, I took the time to streamline the entire feed deployment pipeline into a single script, and cleaned up about two years' worth of deprecated, irrelevant news from the log.
[#132] [Mon, 10 Jan 2022 14:33:48 CST][news]

We've updated our official cryptographic public key to correctly reflect our new contact info. As always, we strongly suggest you encrypt messages of sensitive nature!

PGP KEY [public encryption key]

Fingerprint: 5CD6 489B 7DD7 0F48 398F F0C1 AF88 A0A0 89BD 9B72

In other news, we're very happy to announce that XENOBYTE.XYZ saw a 300% increase in (human) traffic in the past year, totaling ~40,000 unique visitors. All this with no shilling or even a social media presence of any kind.

~40K visitors! we should start a cult or something

Our email inbox naturally received a lot more action. Be it project requests, greetings from fellow cybernauts or questions regarding our freeware, thank you for taking the time to share your thoughts; some messages were very kind and insightful! I also took the advice to publish a F.A.Q. blog post to clear up some of the more popular misconceptions about the site's technical details and overall purpose. I couldn't squeeze everything into a single blog post; the rest will follow in the future.
[#131] [Sat, 01 Jan 2022 22:01:08 CST][tech]

Q: What is this site all about?

The original concept for the website was that of a clearnet sandbox to host what used to live on third-party domains such as gitlab and twitter. Not just as a learning experience, but as a way to present myself to the development community from an independent platform, not unlike the oldschool hacker and programming blogs of the 90s that I so enjoyed surfing through.

As the site's content slowly grew, it also doubled as an informal curriculum and as a showcase of my work to potential clients, with the latter eventually formalizing into our anonymous engineering services as a means to score more interesting projects than those locally available.

Q: What kind of traffic does a niche site such as this see?

I don't keep close tabs on the traffic statistics since the site has never been promoted outside two or three underground messageboards, and there are no monetization schemes to look after. That being said, the total unique visitor count for 2021 stands at ~40K, with an increase of around 300% from the previous year.

never made more than 20 facebook friends, but 40K cybernauts visited my site in 2021 alone

That's with bots, spiders, search engines and other miscellaneous requests filtered out.

Q: Is this an individual project or a team effort?

Everything published here is authored by the admin.
However, in some rare instances where client requests call for extra manpower, I have a couple of longtime colleagues I like to team up with. All opinions, jokes, blog posts, media, grammar errors and so on, are solely the opinion of the author etc. etc.

Q: Why does the site's name keep changing? Doesn't it hurt brand awareness?

Partially. As previously stated, the site hasn't been formally promoted in any way, and it wasn't until late 2018 or so that our engineering services were opened up to the public. Not to mention that, although there has been a clear increase in anonymous project requests, most of our business comes from local clients. I'd rather keep refreshing the site's name when the registration is about to expire to keep the project's aesthetic fresh.

However, the constant domain changes have definitely taken their toll on the site's visibility and overall placement in search engine rankings, which already punish the site for clashing with other, similar domains and products, not to mention that the absolute lack of data collection greatly lowers the site's position in most search engines (google).

I'm still undecided on whether to change the current domain name or keep it. Should it happen, it will certainly be for the last time.

Q: How profitable is larping as a 1337 hacker, if at all?

Not at all. That's not to say that the concept of providing engineering services "anonymously" couldn't be profitable; rather, the business model around a successful example of such services demands resources beyond my original and current means, not to mention that the deplorable state of the global economy makes navigating the ridiculous amount of bureaucracy and legal bullshit required to formalize the idea that much worse.

Thankfully, the demand for competent engineers is only increasing, and as the supply fails to meet the demand faster than Microsoft adds abstraction layers to its enterprise tools, many potential clients are starting to look beyond the mainstream sphere since, more often than not, their problems can be solved by a single, knowledgeable programmer. I'm also well aware of the changes required to try and steer the concept into something economically viable; it's just not currently feasible. Hopefully in the future.

Q: What is the cost of running a personal website such as this?

The cheapest plan on virtually any VPS provider will do. Currently, the site is hosted on a $5-per-month machine to accommodate the much more resource-intensive gitea instance. For a site as relatively simple as this, however, even an alternative site builder like neocities should suffice just fine.

Q: Why the anonymity? Does it hurt the business?

For a random, law-abiding, personal site such as this, it's not even close to being worth the hassle that ensuring actual anonymity requires.
As for the business: since its purpose was to expand my potential client / project pool (regardless of profit), and since there are plenty of alternatives to the official payment and organization frameworks, the stigma associated with faceless online business is almost gone. Worth a shot.

Q: Do I have any formal engineering education? What are my thoughts on going to college for this kind of work?

I have a Computer Engineering degree from my state's university and yes, it's absolutely useless and a total waste of life. I should probably write a blog post to thoroughly cover my reasoning; needless to say, the quality of modern academia is unacceptably low. The time, cost and effort commitment are beyond ridiculous just to ultimately receive what is already freely available. Autodidacticism and mentorship (in its archaic practice) are the last, true path to knowledge for the average person in this age of ignorance and hyperrealism.

Q: What happened to the TOR access?

Haven't found the time to set it up. TOR activity has been rather sketchy as of late, too.

Q: What does xenobyte™ mean?

Ayylien code. It's a WH40K term (I think?).

Q: What happened to the tutorials section that was available back in the days?

It was supposed to be revamped into a proper tutorials section in a future update (it originally consisted of a list of links to blog posts), but it was eventually buried by actual priorities and I haven't had the chance to get around to it since.

The blog post format does have its advantages, and there are a couple of relevant entries in the works concerning a client request turned free software that will complement this blog post right here. I'll probably formalize the tutorials section and reintroduce the previously available entries after some revision.

Q: Is the RSS feed dead?

Not yet! Because this site is custom made using original software, I have to manually update every single relevant log file, be it an SQLite database, a file inside a repo, the project's HTML page, or the RSS feed. As the website's content grew, so did the amount of work it takes to keep everything synchronized. I think I've organized my workflow to minimize the redundancy now; I'll be refreshing the RSS feed from now on.

Q: Are any of the old social media accounts active? Can I follow you in any other platforms?

I used to manage an official Twitter account, used solely as a backup news feed, that has since remained inactive. There just wasn't much of an incentive to keep the account active beyond its purpose, and it was slowly left for dead. However, I occasionally receive emails asking if some seemingly related online profile belongs to me or represents the site, a potentially dangerous confusion that can only be remedied by either emphasizing our lack of participation in such services or keeping an officially endorsed presence on said platforms. It'll most likely be the latter; stay tuned.
[#130] [Sat, 01 Jan 2022 06:13:28 CST][news]
// NEW YEAR 2022

Another twelve-month trip around the Sun. One not as dynamically chaotic as the first year of the pandemic, but thoroughly crippled by the ongoing damage. Many plans were shattered just to compensate for the extra effort required to survive, forcing some serious reconsideration as to what is truly worth pursuing and what can be set aside to make way for realistic goals, an admittedly painful process when our carefully crafted, utopic vision of the future begins to crack.
Alas, despite the most unusual hardships of the new world we find ourselves in, we also remain on the verge of maximizing the human potential. And though the challenges to come will no doubt be just as merciless as the ones already conquered, I am eternally grateful for the opportunity to face and grow from the adversity.

Happy New Year

[#129] [Mon, 27 Dec 2021 21:10:55 CST][misc]
// We be grillin'

The harsh reality of the post-pandemic holidays is evident even to the irrationally positive, and two years into the clusterfuck that society has warped into, it's only natural for spirits to be low. Such is the fate of those that get to experience life at the very transition between the old world and the new.

In any case, as the ancient wisdom states, happiness lies in appreciating the small things in life.

taste the meat, not the heat
[#128] [Thu, 11 Nov 2021 11:52:43 CST][news]

As long time visitors know, every year the site's domain name is intentionally changed during the last days of its registration period to refresh the site's brand and to encourage the constant redesign of the platform's front-end. However, in light of the still-ongoing global chaos and the sacrifices required to compensate, the current domain names of XENOBYTE.XYZ and its subdomain will remain the official links in hopes of instead reaching this year's work metrics. Even if we keep receiving the occasional email meant for some other site with a similar domain.

With that out of the way, the last days of the year should see the release of at least two (free!) projects that have been slowly (but steadily) nearing completion. One of them is Netbugger, a C++20 network debugging tool to help develop performance-critical internet applications such as online game servers and scalping bots. This software, originally a client commission, was meant to be released after the game server it was being used to develop was ready, only to get indefinitely delayed along with the rest of the backlog. Shortly before the client request was successfully completed, a blog post was published detailing the software architecture, with at least two more posts covering the development process planned for the public release, including an updated revision of the (now outdated) first entry.

The second project, also a client commission, is a Pocket_PHP-backed website that will no doubt be the framework's most ambitious showcase yet, not to mention its second, publicly available service.


The framework's dead simple template engine was reworked to expand on its modular design. Namely, it now keeps track of the final HTML string in a stack and renders everything in a single, uninterrupted call. The content stack is filled using addFile() (adds and prepares a file for writing) and addString() (adds and prepares a string for writing), which completely replace the previous functions. Once filled, the internal content stack is finally sent to the client using render().

This helps smooth the server-to-client content transfer since everything is sent in a sequential manner without interruptions. By atomically writing the complete stack to the socket, the potential for performance hiccups and browser-related issues greatly diminishes as the server load grows and the time between writes potentially increases. Previously, it was possible to break the parsing sequence by sandwiching unrelated operations between renderHeader() and renderFooter(), which start and end the writing process respectively. Though this was never the intended design, as the projects relying on pocket_php diversified and the use of its more elemental components got tested from different angles, the limitations of the overly restrictive templating interface became apparent, leaving services like REST APIs with little choice but to modify the core class in ways that don't disrupt its intended functionality, a clear design flaw.
The new TemplateEngine class

class TemplateEngine
{
    private $contentStack;
    private $stackCounter;

    function __construct()
    {
        $this->contentStack = array();
        $this->stackCounter = 0;
    }

    public function addFile($filename, $data = NULL)
    {
        $fileContents = file_get_contents(VIEWS_DIR . $filename);
        if (empty($fileContents))
            return;
        $this->addString($fileContents, $data);
    }

    public function addString($string, $data = NULL)
    {
        if (empty($string))
            return;

        if ($data !== NULL && is_array($data) && !empty($data))
            $this->contentStack[$this->stackCounter] = replaceValues($string, '{{', '}}', $data);
        else
            $this->contentStack[$this->stackCounter] = $string;

        $this->stackCounter++;
    }

    public function render()
    {
        if (!ob_get_status()) // IF ob hasn't been manually started by the user
            ob_start();

        foreach ($this->contentStack as $section)
            echo $section;

        ob_end_flush();
    }
}
The core/utility.php file was also overhauled. The template engine keyword replacement function replaceVariables() was substituted by replaceValues(), a more flexible version that can parse keywords between any two string delimiters to encourage its use outside TemplateEngine. Basic directory management, a very general requirement for dealing with user uploads, was also added in the form of newDir() & delDir(). Check out the core/utility.php commentary on how to speed up directory deletion using system calls.
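To illustrate the delimiter-agnostic idea, here is a minimal sketch of a replaceValues()-style helper. Only the signature shown above (subject string, opening delimiter, closing delimiter, value array) is taken from the post; the actual core/utility.php implementation may differ.

```php
<?php
// Minimal sketch of a replaceValues()-style helper: substitute every
// {{key}}-style placeholder (for any delimiter pair) with its value.
function replaceValues($string, $open, $close, $data)
{
    foreach ($data as $key => $value)
        $string = str_replace($open . $key . $close, $value, $string);
    return $string;
}

// Usage with the default {{ }} delimiters...
echo replaceValues('Hello, {{name}}!', '{{', '}}', array('name' => 'lain')); // prints "Hello, lain!"
// ...or with any other pair, e.g. for a plain-text email template.
echo replaceValues('Hello, %name%!', '%', '%', array('name' => 'lain')); // prints "Hello, lain!"
```

Decoupling the delimiters from the engine is what lets the same helper serve HTML views, emails, or any other text format.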
[#126] [Sun, 03 Oct 2021 09:42:17 CST][news]

Staying true to the current Pocket_PHP update streak, v2.2 has been released. The included templating engine was reworked to better fit non website related projects like REST APIs without compromising performance. Basic directory management functionality was added as well. Check this devlog post for a full overview of this small yet important update.

XENOBYTE.XYZ and all client services running Pocket_PHP have been accordingly updated. The notification email recipients may have noticed that it was sent from our new email address;

Henceforth, this will be our only official email.
[#125] [Sat, 18 Sep 2021 22:53:00 CST][misc]
// Do not go gentle into that good night

It's ridiculous how much has changed in the past thirty years. Life has warped into an unrecognizable mess that resembles the post-apocalyptic dystopian fiction of the last century a little too much for my liking. Spiritually subversive propaganda, pharmacological behavior modifiers, weaponized social pressure and shaming, disinformation distribution, secretive psyops and falseflags, artificial intelligence guided communities, openly totalitarian corporations, instigated irrational social and racial tensions, substitution of the nuclear family with governmental programs, psychological undermining of individuality and critical thinking...
The list of similarities is well beyond that of mere coincidences, and yet, against all reason, it goes largely unnoticed.

The very basic mechanisms of organic human interaction have been profaned by irrational fear and ego-driven ignorance long enough for the natural imperative to protect one's way of life to slowly decay until it is completely replaced. Whatever cultural artifacts remain are mere vestiges of the past that, at best, find themselves renewed as twisted marketing assets for opportunistic corporations, and at worst, become another deceptively selected target for the misguided masses to attack, establishing a self-preserving vicious circle of hatred towards the very ideas and traditions that nurtured humanity up to the present day, further straining the already crumbling social cohesion and encouraging isolation in an already apathetic society.

Those most unfortunate souls that have been born after the past decade or so will be the ones that suffer the most. Their mental model of reality will be tainted by the coldness and indifference we've embraced during the most critical period of their development. Through our misguided leadership, they will learn from a very young age to accept their place in the new world as hedonistic slaves and reject the endless potential inherent to their being.

as above, so below
[#124] [Sun, 24 Jul 2021 09:17:17 CST][misc]
// Legends never die

The life of John McAfee is one that I've been following since I first learned about the man and his astonishing shenanigans. A true case of the eccentrically brilliant rebel, with a background so colorful and extreme that, were it not so well documented, it wouldn't be at all believable. Even for a fictional character.

For a free spirit like him, death should be nothing special.
R.I.P the coolest programmer

Prompted by a recent (still in development) request, POCKET_PHP has seen its internal session management upgraded. This includes more detailed request tracking, manual cleanup of dead SID files, session hijacking protection and some minor API changes. This is (ironically) a bigger update than v2.0, so make sure to read this devlog entry for a complete overview of v2.1. All our clients' projects and XENOBYTE.XYZ have been accordingly updated.

On a side note, due to technical issues (among other things) we had to migrate two servers hosting client projects to a new VPS service. The involved clients were notified about the potential downtime, though the migration went by smoothly.
We apologize for any inconvenience.
[#121] [Thu, 17 Jun 2021 16:12:36 CST][tech]
// The Lainchan Webring Saga

For the past year or so, I've been following a series of threads on the lainchan imageboard centered around creating an anonymous webring of sorts. The idea behind it is to unite the many personal webprojects that remain scattered throughout cyberspace into a decentralized confederation of related sites. A graph where every node is its own independent realm with a section that links to other such places, removing the need for any centralization while retaining the common context of the projects. I've yet to add the more recent ones, but I'm confident that most of the participating sites have been added to the links section by now, alongside many other unrelated links.

Current websites in the webring

The chain of threads (currently on its fifth iteration) spans hundreds of posts and around 60 or so participating domains, each an individual entity somewhere in the vastness of cyberspace, linked to the rest of the mesh by will and interest alone.
Needless to say, these types of communities are very reminiscent of the pre-corporate internet, before "social media" giants held a monopoly on information exchange and finding like-minded netizens required extra effort and technical skills. Each new site first had to come across a webring, usually by surfing around forums or imageboards of interest, then join the community and eventually the webring itself. This clever use of third-party platforms to strengthen individual domains, although not that common of an occurrence, mainly because setting up a personal website acts as a filter, has been bridging personal projects since the early days of the web.

It's no surprise that these same users contribute all sorts of interesting ideas to their respective circles; some anons in the lainchan webring even started experimenting with scripts that illustrate the current state of the webring's matrix as a graph.

The original concept post

A second poster then uploaded a graph generated by his own script.

A more readable and hilarious graph (post)

A quick glance at the graph makes it apparent that my node introduces many unrelated links that the script still counted as part of the lainchan webring. To facilitate anon's efforts, I created a separate page that hosts only the lainchan webring links right here.
I'll make sure to contribute something cool of my own in the future.
[#120] [Mon, 31 May 2021 16:47:31 CST][news]

Pocket_php has been updated to consolidate the small, post-v1.5 updates and a significant interface change into v2.0.
This update was originally larger, including many bits of useful code that were written for unrelated web projects running on pocket_php; alas, they were scrapped to keep the engine clean. Instead, we'll be aiming to release a few more project samples to use both as framework documentation and as a showcase of how to adapt pocket_php to different designs. Check this devlog entry for more info.

XENOBYTE.XYZ itself also underwent some changes, mostly aesthetic and mobile layout details, but there were performance changes as well. The media gallery was reportedly loading at a glacial pace, occasionally preventing the rest of the page from loading correctly, leaving the background image and a handful of files empty. Among the issues were the server's limited bandwidth bottlenecking the video file transfers and the site's VHS scan-line CSS effect bugging Firefox as it tried to parse the files, but mainly it was the fact that the gallery had grown too large to be served directly. No doubt it will eventually need a CDN; however, for the time being, all files in the media gallery are now independently loaded by clicking the thumbnails. The site's overall CSS and backend were also upgraded, moving the project to v5.0.

Finally, we've set up a new public github account. Like the previous third-party git services, it will be used exclusively as a means to reach a larger audience and to offer an alternative source code host. The original development repos and all client projects will continue to be hosted in our private gitea service.

All the projects I've worked on that rely on pocket_php and make use of POST requests to process user-provided data require checking that the request was indeed sent as POST. The HTTPRequest->arguments member variable was supposed to abstract this away by only parsing the POST data into the request arguments array if the request was indeed a POST; if it happened to be a GET request containing data that would otherwise be sent by POST, the engine would discard those form elements. Not a big deal, since it's mandatory to check if the data is even present in the first place, but the monolithic arguments container still loses the intended context by mashing both GET and POST arguments into a single entity.

The HTTPRequest->arguments variable has been replaced by HTTPRequest->GET & HTTPRequest->POST respectively. This means that any and all inline URL arguments will always be present in GET even if the request was originally of type POST, and the POST argument array will only be populated on POST requests.

Minor change with a big impact in readability.
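A rough sketch of what the split looks like from a controller's point of view. The class below is an illustrative stand-in, not pocket_php's verbatim API (the real HTTPRequest parses the raw request itself), and the field names are hypothetical:

```php
<?php
// Illustrative stand-in for pocket_php's HTTPRequest, showing only the
// GET / POST container split described above.
class HTTPRequest
{
    public $GET;
    public $POST;

    function __construct($getParams, $postParams, $method)
    {
        // Inline URL arguments always land in GET, even on a POST request...
        $this->GET = $getParams;
        // ...while POST is only populated when the request was actually a POST.
        $this->POST = ($method === 'POST') ? $postParams : array();
    }
}

// A POST to e.g. /login?redirect=/home with a form body:
$request = new HTTPRequest(array('redirect' => '/home'),
                           array('user' => 'lain', 'pass' => 'hunter2'),
                           'POST');
// The controller no longer has to guess where an argument came from.
$destination = $request->GET['redirect'];     // '/home', from the URL
$hasLogin    = isset($request->POST['user']); // true, from the form body
```

Keeping the two containers apart preserves exactly the context that the old monolithic arguments array threw away.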

I still have a beefy backlog to clear, but I hope to upload a few pocket_php backed websites as documentation in the coming months.
[#118] [Sun, 18 Apr 2021 05:51:38 CST][tech]
// High accessibility with a discriminating filter

I constantly complain about the lack of website diversity on the modern internet. Sadly, the last remnants of the free, community-driven internet have mostly faded into a concept of the past. Simply put, corporate influence over the infrastructure runs deep enough that, were it not for the scattered efforts of dedicated enthusiasts, the internet would be completely devoid of its original potential, becoming indistinguishable from any other lifeless, monopolized endeavor.

Surviving in cyberspace as a legitimately free and autonomous community requires total commitment to the engineering, administration, maintenance, legality and moderation of the platform, such that only those that manage to turn a profit that justifies the effort and risk have any chance of survival. Even then, the myriad of ways that the all-powerful competition can undermine smaller platforms makes it implausible for anything but the most sophisticated, underground efforts (onion markets, private decentralized messageboards, etc.) to have a fighting chance. Digital platforms on the mainstream internet are, therefore, always at risk of being targeted and dismantled, further discouraging the rise of new online ventures and keeping the ones online in check. Step out of line, and you'll be silently culled.

Decimated as the autonomous domains have gotten, there has been a comparatively small, yet symbolically victorious resurgence of anti-conformism with the current state of affairs by the programmers and engineers that the monopolization of the net has displaced. The few, independent platforms that still hold some relevancy have made it this far partially thanks to the collective, organic efforts of their respective administration and user base in defiance of the dystopian alternatives.

However, finding such communities is becoming an increasingly difficult challenge. In particular for the newer generations that, despite being born in a post-internet world, rarely venture beyond their mainstream prisons and thus have been conditioned to rely on corporate solutions for what, in reality, are very basic services. To make matters worse for these unfortunate individuals, their mega-corporations of "choice" are most definitely, ruthlessly undermining their clientele by abusing their trust and ignorance. Often through sophisticated means that most are not even willing to believe real.

Nevertheless, the aforementioned digital realms where the last exchanges of unfiltered, untainted ideas still happen remain open, eagerly awaiting new participants to nurture their communities. A good example of such platforms is lainchan.

A cyberpunkish, anonymous imageboard named after a surprisingly introspective, if somewhat niche, anime, it revolves around discussing related subjects. Not particularly known for the stereotypical shitposting inherent to anonymous communities, it's one of the last bastions of cyberpunk culture worth participating in, at least on the clearnet. It's publicly available and requires no registration to post, making it an easy target for hackers, trolls and shills; despite this, the board remains relatively clean, probably due to its moderate popularity and organized moderation. Regardless, responses on anything but the most active threads do take their time to pour in; it's no reddit or 4chan in terms of sheer population density, that's for sure. It's to be expected, since its pro-freedom / anti-censorship attitude gets invariably associated with some recently conceived social taboos (an excellent way of reinforcing mainstream, sterilized preferences) that filter many, if not most, potential new users.

Those that do stay around tend to be more than just wandering cybernauts; a good deal of them can be considered aficionados of at least one of the discussed subjects. Such users are among the best additions to any community because they have the drive to participate in the current discussions, and it's the mix of attuned veterans, eager new members and the liberty to exchange ideas that solidifies the platform's culture, crucial for the long term success of any collective. It's the unwritten rules of the game that ultimately serve as a catalyst for the personality that will be associated with the brand, hence why the mainstream alternatives all seem to complement the same superficial image of sterility and safety that encourages their users to stay within corporate influence. Lest they learn a wrong opinion or two.
[#117] [Sun, 17 Apr 2021 13:11:28 CST][news]


This is a locally generated and managed, 4-character-long basic captcha

Our high performance backend engine, pocket_php, has been updated to v1.52. It mainly introduces an internal captcha generator to gatekeep online forms without having to rely on subversive third parties. The pocket_php implementation requires only the php-gd library, readily available in all package managers that matter and easily enabled by uncommenting the extension in the php.ini file. See the updated installation guide or this devlog entry for more info.

Accordingly, our mailing list form has been further secured.

If you've ever managed any kind of online service, you're well aware of how easy it is to abuse unprotected forms. A simple python script can wreak havoc by bombarding a server with randomized input that has to be carefully sanitized to prevent injections. Said processing, however, is typically the most computationally expensive request for a simple website to honor due to the complexity of the steps involved; sanitizing the user input, validating the cleansed data, writing it to a database, etc. Thus, unless we know for certain that the form we received was legitimately answered, it should be discarded, lest we neglect gate-keeping the database only to find it littered with nonsense or worse.

pocket_php captcha samples
The previously mentioned script that fills the form with randomized characters wouldn't be too difficult to detect and mitigate, but a slightly improved version that generates a well formatted email address (regardless of its authenticity)? Not so much. Even potential solutions, complicated as they may get, would most likely only work when validating the email field, not the rest of the form which would probably require their own specialized routines. The resulting increase in resource consumption doesn't even ensure the processed form was legit in the first place, only that it passed the aforementioned filters.
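PHP's own filter_var() makes this concrete: it is the usual first line of defense for email fields, yet it only checks the format, which is exactly why a bot generating well-formed addresses sails right through it. (The addresses below are illustrative.)

```php
<?php
// filter_var() returns the validated value on success, or false on failure.
$garbage = filter_var('not an email', FILTER_VALIDATE_EMAIL);
// Malformed input is rejected...
// ...but a well-formed, completely bogus address passes just fine.
$fake = filter_var('totally.fake@bot.example', FILTER_VALIDATE_EMAIL);

var_dump($garbage, $fake);
```

Passing the format filter says nothing about whether a human, or even a real mailbox, is on the other end.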

For the sake of brevity, now that the problem has been illustrated, I'll jump to the point. This is no easy issue to solve, but there are simple and effective precautions that universally apply to internet forms and will at least help deter automated injections, namely captchas.
The idea behind them is quite clever: they exploit the fact that there are trivially generated challenges that a computer still can't crack yet a human can solve in an instant, with character recognition tests being among the more popular.

As for the inevitable cost, captchas may be straightforward but they are certainly not free, at least in contrast with leaving a form unprotected, and in cases where the server is working at or near capacity, having to generate captchas will worsen the service's responsiveness. In the opposite scenario, where the server is practically idling, it's quite likely that generating the captchas becomes the most expensive step of the form's validation anyway. Alternatively, it's common practice to outsource captchas to reduce the local workload at the expense of the user's privacy.
Whichever you choose, safeguarding forms from automated attacks has a price, but it is practically ALWAYS WORTH PAYING.

The captcha functionality added to the pocket_php example login page is simple and effective. It randomizes a set amount of characters from a given input string, draws randomly generated squares over the randomly colored background to obfuscate the foreground, renders the selected characters at a random angle, position and color (within reason, it'd be redundant to make this hard to answer for humans) and finally, stores the generated string in a PHP session variable to subsequently validate the client's answer. Should the created captcha be too difficult for a human to read, all the client has to do is ask for a new one, either by pressing the refresh captcha button or by reloading the form. Ezpz.

Note that the internal login captcha can be (de)activated by setting ENFORCE_LOGIN_CAPTCHA in app/configure.php, and that it relies on the php-gd library.

Local, private, effective captcha

Before ending the post, I'd like to clarify why captchas were added to the project while other utilities get overlooked. Pocket_php is currently powering fourteen web services with more planned, and while working with these individual projects I often get tempted to add a particularly handy snippet to the pocket_php codebase for future use, only to scrap the idea in favor of its original vision: to provide the fastest, simplest template for web projects to use as a foundation. Abiding by this rule means that certain kinds of utilities are typically not incorporated into the source, either because they're too situational or just not worth the effort to implement on behalf of the programmer.

What separates captchas from other potential features is how necessary they've become and how widespread the use of (often subversive) third-party captcha services has grown in response, especially among sites that don't really benefit from such an approach at all. In the end, it's not about how easy or fast they are to implement, it's about having the option of privacy within the same reach as the alternatives.
[#115] [Mon, 12 Apr 2021 19:45:22 CST][misc]
// Gray morning

We're almost halfway through the year already in what feels like an instant.

Earlier this month, we agreed to help a video game studio with their upcoming MMO by developing a networking tool that aids the debugging and testing of their server's many features. In my opinion, the coolest request we've had yet.

Now, I've been meaning to write a few tutorials about some of the subjects that, in my experience, give aspiring programmers the most trouble. But 2021 has so far been a very busy year with no signs of slowing down, and since I can't really spare the time to start yet another secondary project, I settled on trying to be more thorough about how we get actual engineering jobs done, documenting the relevant parts of the programming process in a tutorial format. Thankfully, the client agreed to release the finished software for free, making it a great opportunity to test out the idea.

Since pretty much all modern operating systems concern themselves with providing only the absolute minimum through their networking API, it falls to library developers to implement a more robust solution for user programs to use as an intermediary. Features like multithreading, OS API abstraction, data serialization, packet queuing, SSL / TLS encryption and connection lifetime management are some of the more common demands of a typical, modern networking app. As for the sockets themselves, the UNIX and Windows interfaces are very much the same but with enough minor differences to merit some abstraction, ironically the simplest of these problems to solve. The others? Not so trivial. Thus, we'll be employing the ASIO networking library as our foundation.

Like its name implies, the library focuses on providing "Asynchronous Input & Output" that can be used in a variety of ways, networking and serial communication being among the more common. Socket centric projects, in particular, don't really have much of a choice but to adopt an asynchronous approach to combat the aforementioned problems; not so easy in the 90s, but nowadays tools like ASIO and modern C++ optimize the process quite nicely. Hence, the birth and rise of independent, high performance services like private MMO servers (of which ASIO has powered many) and crypto trading bots have helped cement ASIO as arguably the best networking library for C++. So much so that I wouldn't be surprised if it (eventually) gets incorporated into the official C++ standard, not unlike a few other Boost projects.

The main idea is to get around the unpredictability of sockets by calling the relevant functions from a different thread to prevent pausing the rest of the program, and even though ASIO makes this (relatively) easy, there are still a few details to consider.

The client also specified that the program will be running on virtual machines without a GUI, and given the I/O (not to mention A5STH5TIC) limitations of a vanilla console, ncurses was added as a dependency. Even though it's older than me, it has yet to be trumped as the king of console UI libraries, and with good reason: it's efficient, fast, tested, documented, easy to incorporate and likely already installed in whatever UNIX system the client will be using. However, it has absolutely no multithreading support; calling an ncurses function from anywhere other than its designated thread can trigger memory related crashes. A critical limitation for a tool that requires multiple threads to work, but by no means an impossible one.

Reconciling the threading discrepancies between ASIO's internals and ncurses is all about discerning how ASIO will (asynchronously) communicate with ncurses, and making sure this only happens in a predictable, thread-safe manner.
This extends to the internal I/O buffer storage and packet queue as well: any thread parsing a packet into the incoming data queue must do so in harmony with the rest of the threads processing the ASIO io_context, which may trigger an operation on the queue while it's already in use. So, for starters, we have to make note of where such events will happen.

The graph pinpoints the critical sections of the executable and their relationships, starting with the beginning of the main thread (top left). As previously stated, all ncurses functions have to be called from the main thread or risk undefined behavior. Luckily, there are only three relevant functions, out of which only one should be accessed by secondary threads: the window output / system logging function. This is the only access to the window's output available, meaning its internal container must be protected to prevent simultaneous writes by the rest of the threads.

Do note that even though the UI drawing function will be invoked solely by the main thread, the container storing the window's output could be modified by a secondary thread while the draw function is processing it. This container has to be protected during UI processing as well, or it could be updated as it's being iterated through. That makes two sections of the main thread that must be accounted for when writing the program. As for the rest of the threads, the relevant action happens in a handful of functions that interact with the sockets themselves: requesting or accepting connections, reading from and writing to the sockets, processing requests and logging through ncurses will all be done asynchronously from any of the secondary threads.

Now, what exactly is to be done to protect these vulnerable data structures? The most common approach is to use locks and mutexes to limit access to the protected resources to one thread at a time. Despite not being the fastest spell in the book, it can still perform at a superb level as long as the locking period of the mutexes remains as short as possible. This is another key reason why performance sensitive netcode should always be planned ahead of time: if we design a multi-threaded program around locking and waiting but fail to account for the workload of a given function, we risk bottlenecking the tool's performance with code that is often difficult to change.

In our case, the only procedure that risks such behavior is the request resolution function; the rest shouldn't be doing much more than adding or removing a small object (representing a packet) from the connection's internal packet queue, but resolving requests may entail more complex behavior that wouldn't play well with lock based designs. Still, so far we've covered most of the expected pitfalls of working with high-octane netcode in the context of our project.

I'll cover the source code in the next post, this one has grown long enough.
[#113] [Sun, 21 Mar 2021 11:53:44 CST][news]


The website's RSS feed has been reactivated after years of neglect; you can add it to your RSS reader of choice by appending this link to the URL list. From now on, (public) project updates and site changes will also be announced via RSS. Eventually we will be adding a public alternative as a secondary means to reach our user base; in the meantime, we emphasize that as of March 2021, XENOBYTE HAS NO PUBLIC PRESENCE IN ANY MAINSTREAM SOCIAL MEDIA PLATFORM.

On a side note, our custom EMACS configuration HEXmacs was also updated by removing obsolete packages and shifting the PHP autocomplete configuration from ac-php and php-extras to lsp-mode, which has been steadily replacing most autocomplete solutions for a while now.
The rest of the configs will be updated in due time.
[#112] [Sun, 07 Mar 2021 05:24:09 CST][tech]
// Project request: MMO server debugging tool

Got a project request from a game developer asking for a tool to thoroughly test custom, TCP based network protocols to help debug a developing MMO server. I was looking for an excuse to write a similar program to spice up the next SkeletonGL game and the client was happy to open source the final product, so I accepted the request despite the difficulty and time constraints. After all, a good chunk of our business involves these kinds of specialized, performance focused solutions. Few are this interesting, though.

Having worked with game servers in the past, I was well aware of how projects involving sockets tend to be either very fast and straightforward, or hard to debug, convoluted puzzles that demand a solid foundation of software engineering principles from the programmer and an efficient design philosophy to overcome the many pitfalls of bad netcode.

Getting around the limitations of the operating system's socket interface is not as trivial as its simplicity may initially suggest, though much of the challenge lies in the dynamics of networking itself rather than interface shortcomings. A TCP socket, for example, may terminate a connection at any given moment by different means, suffer unpredictable time delays between endpoints in a synchronized system, or be slowed to a halt serving a client with a particularly shit connection that bottlenecks the server; maybe someone tries to connect using networking tools (like the one requested) to mess with the service. The list of potential critical scenarios goes on.

With that in mind, here's a quick rundown of what the client asked for:

  - Traffic simulation and analysis tool
  - Support for custom, TCP based networking protocols
  - IPv6 support
  - Each instance must be able to act as a server as well
  - Console based GUI, will be mostly run in a VM terminal
  - Support for as many threads as the CPU allows
  - SSL / TLS 1.2 support

It's apparent that the idea behind the request is to essentially outsource the aforementioned netcode issues into a templated solution that can be easily modified to fit their testing needs.
Since this is one interesting project, I'll upload another blog post detailing the design and development process soon enough.
[#111] [Sun, 21 Feb 2021 20:25:34 CST][misc]
// Damned if you do, damned if you don't

Lost another client to the failing economy this past week. A small phone and laptop repair business that I used to host a few services for was unfortunately claimed by the current economic circus, namely the recent cost spike in rent and basic services and their inability to compete with global brands. At least the owner managed to find employment shortly after, if that can even be considered advantageous anymore.

Workloads are at an all-time high, yet the greatly increased productivity is not reflected in the quality of the employee's life, or even in their overall time on the job. Even worse, due to the miraculous scientific and technological leaps of the past century, we find ourselves in the ironic position of a deprecated asset on its way out. One that isn't even worth replacing with dignity and whose loss, ultimately, has little to no repercussions for a monopolized and corrupt market.

At this point I can't tell if this purge of independent efforts is part of the price to pay for genuine progress or just another example of predatory corporations and mindless consumers.

[#110] [Thu, 28 Jan 2021 18:26:39 CST][news]

Starting today, all our projects are being locally hosted for direct download in their respective sections, and their source code is also available in our new, public gitea instance. To keep the codebase clean, the repo trees have been reset to remove deprecated files from versions that are no longer supported, and both SkeletonGL example programs were updated to V2.0 as well.

This git host migration from third-party to locally served isn't necessarily a permanent departure from the various public git service providers, but rather a preventive backup to ensure our projects remain online, and a safer, more private way for our clients to keep track of their requests' development. As for the previous public git accounts, they will remain inactive until the local repos are ready; once that's done, they'll be cloned into a public git host for distribution purposes just as before.

It's been a busy winter so far, with many unrelated setbacks and sleepless nights. Fortunately, the development goals that were set the past year have been successfully met, clearing the way for future endeavors that would otherwise be postponed. This will be one interesting year.
[#109] [Thu, 05 Jan 2021 22:12:31 CST][tech]
// Hiding in plain sight

Deep within the more inquisitive internet communities, beyond the reach of censorship where the net's original anarchy still reigns supreme, a group of anonymous freedom enthusiasts have been nurturing a very interesting rumor concerning the bitcoin blockchain that has recently gained traction in the mainstream cyberspace. I'm well aware of the countless conspiracies already circulating but unlike most, this one has enough objective merit to warrant suspicion.

Some basic knowledge of how bitcoin operates is required to fully appreciate the gossip. For those unaware of how the btc blockchain technology works: it's basically a program that maintains a log file of transactions across a network. This oversimplification ignores a lot of details critical to the bitcoin environment, but for the context of this blog post what matters is that:

1. The bitcoin blockchain contains the entire BTC transaction history
2. This blockchain is preserved across all participant nodes
3. It is possible to send bitcoins to an "invalid", personalized address

Bitcoins sent to these fake addresses will be lost; nevertheless, they still constitute a transaction that must be logged, appending the transaction details (including the fake output addresses) to the blockchain and distributing it across the entire mesh. Valid bitcoin addresses are derived from 256-bit private keys: the corresponding public key is hashed down to a 160-bit (20 byte) string that is stored in the blockchain as hex, and the coins sent to it can only be redeemed by the original private key.

This also means the blockchain isn't responsible for ensuring that a certain output address can indeed be accessed; it's the user's responsibility to provide an address he holds the private key to. Providing any unique combination of 20 bytes as a destination address is perfectly valid, though it would lock away any bitcoins sent to it, since the blockchain requires the private key used to generate the address to redeem said coins. Such is the price for storing 20 bytes of (unique) data in the BTC blockchain.

In a typical bitcoin transaction, decoding the resulting output addresses back into raw bytes shouldn't amount to anything but a string of seemingly random characters. That is, unless said address was generated by treating a custom, 20-byte ASCII string as the hash payload and encoding it into an address; in that case, decoding the output reproduces the original ASCII text.
This means that, as long as you don't mind losing a few BTC credits, you can insert 20 bytes of custom (and unique) data into the blockchain, preserving it for posterity by synchronizing it across all participant nodes in a decentralized, uncensorable, nigh untamperable public database.

Decoding address "15gHNr4TCKmhHDEG31L2XFNvpnEcnPSQvd" reveals the first section of a hidden JPG

Taking the previous rundown into consideration, it's to be expected that some bitcoiners use the blockchain to store and secure small amounts of data in creative ways. A practice so aligned with the blockchain's principles that Satoshi Nakamoto himself famously embedded the following string into the very first bitcoin block, the "genesis block" (though he added it to the coinbase, since he mined the first block, rather than via the previously explained method).

Satoshi Nakamoto's message embedded in the genesis block

It's at least evident that he designed bitcoin to allow for arbitrary data, taking full advantage of the technology to serve both as an alternative to our current FIAT overlords and as a digital wall for the community to graffiti. So, when I learned that "someone" had supposedly hidden some very condemning blackmail on very important and influential people in the bitcoin blockchain, I was more than willing to believe it. In fact, Wikileaks had previously done something similar by embedding a 2.5MB zip file full of links to the actual data, though it was done in a rather obtuse way to prevent data scavengers from stumbling into a cache of leaked secrets.

Like this 'lucifer-1.0.tar.gz' file found in TX hash # aaf6773116f0d626b7e66d8191881704b5606ea72612b07905ce34f6c31f0887

Today, there is even an official way to add messages to the blockchain by using the OP_RETURN script opcode. Bitcoin was to be used as a broker of both credit and information from the very beginning.

If you'd like to 'mine' the blockchain for secrets, I wrote a simple script that decodes all the output addresses in a transaction; it queries a public block explorer, so you don't have to download the entire blockchain. As previously explained, this alone may not be enough to peel away all the potential layers of obfuscation possible, nevertheless it is enough to take a quick peek at what the bitcoin community has been sneaking into the blockchain.

Bitcoin logo added as a two part JPG in ceb1a7fb57ef8b75ac59b56dd859d5cb3ab5c31168aa55eb3819cd5ddbd3d806 & 9173744691ac25f3cd94f35d4fc0e0a2b9d1ab17b4fe562acc07660552f95518

ASCII tribute in TX 930a2114cdaa86e1fac46d15c74e81c09eee1d4150ff9d48e76cb0697d8e1d72

Just what could be hiding in the blockchain?
[#108] [Fri, 01 Jan 2021 16:21:03 CST][news]

The XENOBYTE.XYZ server has been upgraded to slightly beefier hardware to accommodate incoming changes, including the return to locally hosted copies of all our freeware source code, as well as the re-enabling of the TOR hidden service that has been offline for around three months now. TOR traffic is almost negligible in comparison to that of the clearnet, but the growing uncertainty around the internet's legal limits and the shameless censorship of independent websites leaves us with no choice.

The original server with a whopping 480MB of RAM
The upgraded server sporting 25G of storage and an entire GB of RAM
In the meantime, we've made some minor changes to the site's frontend, sorted the media gallery, and added new content to the scripts section. Version 4.3 will hopefully be ready before February.
[#107] [Fri, 01 Jan 2021 00:00:00 CST][misc]
// A new decade

Argos usually wakes me up before sunrise but it seems that even he felt last night's chill. Fortunately, the first morning of the new decade was a warm one despite being halfway through the damp winter. Can't complain about a comfy morning in these times.

[#106] [Thu, 31 Dec 2020 00:12:00 CST][misc]
// New Year, new pain

It's been a painful year, one of global misery and shattered hope that will leave a scar on society. Every aspect of our civilization was tested to an extreme by both the virus countermeasures and the hit our already dying economy inevitably took. All this unfolding as the world's leading nations duke it out in yet another invisible war and the opportunistic mega-corporations slowly tighten their grip on the last remnants of personal freedom left.

The future that awaits, though still shrouded by uncertainty, paints an even worse scenario. One where the current system can no longer uphold all the accumulated negligence and ignorance we've been responsible for and begins to fail when it is needed the most.

[#105] [Sat, 05 Dec 2020 11:44:37 CST][sgl_devlog]
// SkeletonGL ver 2.0 finally released

SkeletonGL v2.0 has been released. The first stable and feature-complete build of the library, it has slowly but surely grown into exactly what it was envisioned to be and then some. The engine is now apt for more than just hobby projects by merit of its stability, performance and simplicity, and it offers enough rendering capabilities to give the established C++ 2D rendering choices like allegro, SFML & SDL2 some competition.

This latest version has seen major changes to almost every section of the source to accommodate some of the more elaborate additions, but the interface has remained practically the same. Updating an SGL project to v2.0 should be almost as easy as pulling from the git and recompiling; however, the engine internals have been overhauled, so if your build relies on custom changes I'd advise taking a look at SGL_DataStructures.hpp, which now contains all the internal OpenGL resource names.

Internally managed OpenGL resources

    // GL_LINES uses this value to set the line's width, note that if AA is enabled it limits the line width
    // support to 1.0f
    const float MAX_LINE_WIDTH = 20.0f;
    const float MIN_LINE_WIDTH = 1.0f;

    const float MAX_PIXEL_SIZE = 20.0f;
    const float MIN_PIXEL_SIZE = 1.0f;

    const float MAX_CIRCLE_WIDTH = 1.0f;
    const float MIN_CIRCLE_WIDTH = 0.01f;

    // These rendering constants are the maximum amount of simultaneous instances to be rendered in a batch
    const std::uint32_t MAX_SPRITE_BATCH_INSTANCES = 10000;
    const std::uint32_t MAX_PIXEL_BATCH_INSTANCES = 10000;
    const std::uint32_t MAX_LINE_BATCH_INSTANCES = 10000;

    // Names assigned to the OpenGL objects used by the SGL_Renderer
    const std::string SGL_RENDERER_PIXEL_VAO                  = "SGL_Renderer_pixel_VAO";
    const std::string SGL_RENDERER_PIXEL_VBO                  = "SGL_Renderer_pixel_VBO";
    const std::string SGL_RENDERER_PIXEL_BATCH_INSTANCES_VBO  = "SGL_Renderer_pixel_batch_instances_VBO";
    const std::string SGL_RENDERER_PIXEL_BATCH_VAO            = "SGL_Renderer_pixel_batch_VAO";
    const std::string SGL_RENDERER_PIXEL_BATCH_VBO            = "SGL_Renderer_pixel_batch_VBO";
    const std::string SGL_RENDERER_LINE_VAO                   = "SGL_Renderer_line_VAO";
    const std::string SGL_RENDERER_LINE_VBO                   = "SGL_Renderer_line_VBO";
    const std::string SGL_RENDERER_LINE_BATCH_INSTANCES_VBO   = "SGL_Renderer_line_batch_instances_VBO";
    const std::string SGL_RENDERER_LINE_BATCH_VAO             = "SGL_Renderer_line_batch_VAO";
    const std::string SGL_RENDERER_LINE_BATCH_VBO             = "SGL_Renderer_line_batch_VBO";
    const std::string SGL_RENDERER_SPRITE_VAO                 = "SGL_Renderer_sprite_VAO";
    const std::string SGL_RENDERER_SPRITE_VBO                 = "SGL_Renderer_sprite_VBO";
    const std::string SGL_RENDERER_SPRITE_BATCH_INSTANCES_VBO = "SGL_Renderer_sprite_batch_instances_VBO";
    const std::string SGL_RENDERER_SPRITE_BATCH_VAO           = "SGL_Renderer_sprite_batch_VAO";
    const std::string SGL_RENDERER_SPRITE_BATCH_VBO           = "SGL_Renderer_sprite_batch_VBO";
    const std::string SGL_RENDERER_TEXT_VAO                   = "SGL_Renderer_text_VAO";
    const std::string SGL_RENDERER_TEXT_VBO                   = "SGL_Renderer_text_VBO";
    const std::string SGL_RENDERER_TEXTURE_UV_VBO             = "SGL_Renderer_texture_uv_VBO";

    const std::string SGL_POSTPROCESSOR_PRIMARY_FBO    = "SGL_PostProcessor_primary_FBO";
    const std::string SGL_POSTPROCESSOR_SECONDARY_FBO  = "SGL_PostProcessor_secondary_FBO";
    const std::string SGL_POSTPROCESSOR_TEXTURE_UV_VBO = "SGL_PostProcessor_UV_VBO";
    const std::string SGL_POSTPROCESSOR_VAO            = "SGL_PostProcessor_VAO";
    const std::string SGL_POSTPROCESSOR_VBO            = "SGL_PostProcessor_VBO";

    // Default shader uniform names, make sure they match your custom shaders.
    const std::string SHADER_UNIFORM_V4F_COLOR                  = "color";
    const std::string SHADER_UNIFORM_F_DELTA_TIME               = "deltaTime";
    const std::string SHADER_UNIFORM_F_TIME_ELAPSED             = "timeElapsed";
    const std::string SHADER_UNIFORM_V2F_WINDOW_DIMENSIONS      = "windowDimensions";
    const std::string SHADER_UNIFORM_M4F_MODEL                  = "model";
    const std::string SHADER_UNIFORM_M4F_PROJECTION             = "projection";
    const std::string SHADER_UNIFORM_F_CIRCLE_BORDER_WIDTH      = "circleBorder";

    const std::string SHADER_UNIFORM_I_SCENE                    = "scene";
    const std::string SHADER_UNIFORM_V2F_FBO_TEXTURE_DIMENSIONS = "fboTextureDimensions";
    const std::string SHADER_UNIFORM_V2F_MOUSE_POSITION         = "mousePosition";


Moving on to new features, the SGL_Renderer can now render GPU accelerated circles. Drawing circles in modern OpenGL is rather complicated since there is no native OpenGL function to do so; they must be manually computed and rendered using the available primitive types, which can be unnecessarily costly. After some experimentation, it became apparent that the fastest way to render circles is to form a square by joining two mirrored triangles and using the surface as a canvas to render the circle on with a special fragment shader. Basically, circles are just sprites using a shader that renders a circle on top.

To complement the other primitive renderers, the new SGL_Circle object is a straightforward representation of a circle and can be rendered by calling the renderCircle function, abstracting away the internal SGL_Sprite. Conversely, it's possible to draw a circle on top of a sprite by specifying a circle shader as the SGL_Sprite shader; the circle's border width is passed as renderDetails.circleBorder.

struct SGL_Circle
{
    glm::vec2 position;                          ///< Circle position
    SGL_Color color;                             ///< Circle color
    SGL_Shader shader;                           ///< Shader to process the circle (because why the fuck not)
    float radius;                                ///< Circle size
    BLENDING_TYPE blending;                      ///< Blending type
};

// Added to SGL_Renderer
void renderCircle(float x, float y, float radius, float width, SGL_Color color);
void renderCircle(const SGL_Circle &circle) const; // Circles are just invisible sprites used as canvas

Rendering a batch of circles is as easy as calling renderSpriteBatch() with an SGL_Sprite that has been assigned either a custom circle shader or the included SGL::DEFAULT_CIRCLE_BATCH_SHADER (which is in reality a modified SGL::DEFAULT_SPRITE_BATCH_SHADER).

Circle batch example

    SGL_Sprite avatar;
    avatar.texture = _upWindowManager->assetManager->getTexture("avatar");
    avatar.shader = _upWindowManager->assetManager->getShader(SGL::DEFAULT_CIRCLE_BATCH_SHADER);
    avatar.shader.renderDetails.timeElapsed = _fts._timeElapsed/1000;
    avatar.shader.renderDetails.circleBorder = 0.06;
    avatar.position.x = 40;
    avatar.position.y = 10;
    avatar.size.x = 32;
    avatar.size.y = 32;
    avatar.color = SGL_Color(1.0f,1.0f,1.0f,1.0f);

    // Note that a mismatch between the shader and the render call will default to rendering a sprite
    // with default settings

    // Generate 4000 sprites worth of model data
    std::vector<glm::mat4> sBatch;
    for (int i = 0; i < 4000; ++i)
    {
        // Prepare transformations
        glm::mat4 model(1.0f);
        float r2 = static_cast<float>(rand()) / (static_cast<float>(RAND_MAX / 3.12f));

        model = glm::translate(model, glm::vec3(rand() % 320, rand() % 180, 0.0f)); // move
        // rotate
        model = glm::translate(model, glm::vec3(avatar.rotationOrigin.x, avatar.rotationOrigin.y, 0.0f));
        model = glm::rotate(model, r2, glm::vec3(0.0f, 0.0f, 1.0f));
        model = glm::translate(model, glm::vec3(-avatar.rotationOrigin.x, -avatar.rotationOrigin.y, 0.0f));
        // scale
        model = glm::scale(model, glm::vec3(avatar.size, 1.0f));

        sBatch.push_back(model);
    }

    // In this case the sprite batch renderer matches the CIRCLE_BATCH_SHADER and will draw a circle
    // on top of each sprite instance
    _upWindowManager->renderer->renderSpriteBatch(avatar, &sBatch);

GPU accelerated primitives (including circles!)

Pixel, Line, Circle & Sprite batch rendering test

This flexibility allows circles to be rendered on top of any sprite and for the circle's border width to be specified, all in a single draw call.

Behind the scenes, circles are just custom shaders and can be applied to any sprite

It's also possible to fill the circle by simply specifying a bigger border width value.

Same circle, different border widths

To better showcase what the library is capable of, the original plan was to release the v2.0 update alongside an arcade game called Risk Vector. However, time constraints and the 2020 global fuckery in general left me with little time to develop the game. I opted instead to polish SkeletonGL as much as possible, then upgrade the software already using it, and only then move on to developing some games.

The few client projects using SGL as a means to render graphics have already been updated (check your email for notifications) and both CAS-SGL and Snake-SGL will hopefully soon follow.

[#104] [Tue, 01 Dec 2020 04:21:39 CST][news]

After months of short yet consistent bursts of effort, the Skeleton Graphics Library version 2.0 is finally here!

This marks a very important milestone for the library as it is now in its first stable release with all its promised features plus some more:

SkeletonGL VER 2.0 features
  ■ Updated to compile with C++20
  ■ Full .INI configuration file parser
  ■ Runs on both legacy and modern hardware
  ■ Fully compatible with legacy systems like WindowsXP
  ■ Integrated logging, and hardware profiling tools
  ■ Specialized shader and OpenGL debugging
  ■ Can be compiled for virtually any modern operating system
  ■ Compatible with all modern (version 3.3+) OpenGL versions
  ■ Simple yet fully featured 2D rendering
  ■ GPU accelerated primitive geometry rendering (including circles!)
  ■ Sprite and primitive batching
  ■ Full access to both custom and default shaders
  ■ Automated asset loading and management
  ■ Support for TTF and Bitmap font rendering
  ■ Texture atlas capabilities
  ■ Deterministic (FPS limited) and uncapped rendering
  ■ Included post-processor with custom shader support
  ■ Lag free input system for mice, keyboards, gamepads and joysticks
  ■ VSYNC options, window management and multi-monitor support
  ■ Project template and makefiles included
  ■ OpenGL blending modes for all renderers
  ■ Tiny codebase, has five dependencies (all free)
  ■ Doxygen commented, includes example programs
  ■ Fully exposed OpenGL rendering pipeline for customized builds

There were also many changes to the engine codebase to comply with better practices, as well as to allow for almost total customization of the rendering pipeline. It was also stripped of all the vestigial, legacy code that served as testing grounds for what are now full features. See the CHANGELOG for a thorough list of the updates and this devlog entry for more information on the ver 2.0 release.

For the time being the project will be prioritizing stability, bug fixing, updating the available example programs and finishing its third official game, Risk Vector.