LATEST ENTRIES

Click the post title for a direct link

[#131] [Sat, 01 Jan 2022 22:01:08 CST][tech]
// XENOBYTE.XYZ F.A.Q. #1

Q: What is this site all about?

The original concept for the website was that of a clearnet sandbox to host what used to live on third-party domains such as gitlab and twitter. Not just as a learning experience, but as a way to present myself to the development community from an independent platform, not unlike the oldschool hacker and programming blogs of the 90s that I so enjoyed surfing through.

As the site's content slowly grew, it also doubled as an informal curriculum and as a showcase of my work to potential clients, with the latter eventually formalizing into our anonymous engineering services as a means to score more interesting projects than those locally available.


Q: What kind of traffic does a niche site such as this see?

I don't keep close tabs on the traffic statistics since the site has never been promoted outside two or three underground messageboards, and there are no monetization schemes to look after. That being said, the total unique visitor count for 2021 stands at ~40K, an increase of around 300% over the previous year.

never made more than 20 facebook friends, but 40K cybernauts visited my site in 2021 alone


That's with bots, spiders, search engines and other miscellaneous requests filtered out.


Q: Is this an individual project or a team effort?

Everything published in xenobyte.xyz is authored by the admin.
However, in some rare instances where client requests call for extra manpower, I have a couple of longtime colleagues I like to team up with. All opinions, jokes, blog posts, media, grammar errors and so on are solely those of the author, etc. etc.


Q: Why does the site's name keep changing? Doesn't it hurt brand awareness?

Partially. As previously stated, the site hasn't been formally promoted in any way, and it wasn't until late 2018 or so that our engineering services were opened up to the public. Not to mention that, although there has been a clear increase in anonymous project requests, most of our business comes from local clients. I'd rather keep refreshing the site's name whenever the registration is about to expire to keep the project's aesthetic fresh.

However, the constant domain changes have definitely taken their toll on the site's visibility and overall placement in search engine rankings, which already punish xenobyte.xyz for clashing with other similar domains and products, not to mention that the absolute lack of data collection greatly lowers the site's position in most search engines (google).

I'm still undecided on whether to change the current domain name or keep it. Should it happen, it will certainly be for the last time.


Q: How profitable is larping as a 1337 hacker, if at all?

Not at all. That's not to say that the concept of providing engineering services "anonymously" couldn't be profitable; rather, the business model around a successful example of such services demands resources beyond my original and current means, not to mention that the deplorable state of the global economy makes navigating the ridiculous amount of bureaucracy and legal bullshit required to formalize the idea that much worse.

Thankfully, the demand for competent engineers is only increasing, and as the supply fails to meet the demand faster than Microsoft adds abstraction layers to its enterprise tools, many potential clients are starting to look beyond the mainstream sphere since, more often than not, their problems can be solved by a single, knowledgeable programmer. I'm also well aware of the changes required to try and steer the concept into something economically viable; it's just not currently feasible. Hopefully in the future.


Q: What is the cost of running a personal website such as this?

The cheapest plan from virtually any VPS provider will do. Currently, the site is hosted on a $5-per-month machine to accommodate the much more resource-intensive gitea instance. For a site as relatively simple as this one, however, even an alternative like neocities should do just fine.


Q: Why the anonymity? Does it hurt the business?

For a random, law-abiding, personal site such as this, it's not even close to being worth the hassle that ensuring actual anonymity requires.
As for the business, its purpose was to expand my potential client / project pool (regardless of profit), and with plenty of alternatives to the official payment and organization frameworks available, and the stigma associated with faceless, online business almost gone, it was worth a shot.


Q: Do I have any formal engineering education? What are my thoughts on going to college for this kind of work?

I have a Computer Engineering degree from my state's university and yes, it's absolutely useless and a total waste of life. I should probably write a blog post to thoroughly cover my reasoning; needless to say, the quality of modern academia is unacceptably low. The time, cost and effort commitment are beyond ridiculous just to ultimately receive what is already freely available. Autodidacticism and mentorship (in its archaic practice) are the last, true paths to knowledge for the average person in this age of ignorance and hyperrealism.


Q: What happened to the TOR access?

Haven't found the time to set it up. TOR activity has been rather sketchy as of late, too.


Q: What does xenobyte™ mean?

Ayylien code. It's a WH40K term (I think?).


Q: What happened to the tutorials section that was available back in the xenspace.net days?

It was supposed to be revamped into a proper tutorials section in a future update (it originally consisted of a list of links to blog posts), but it was eventually buried by actual priorities and I haven't had the chance to get around to it since.

The blog post format does have its advantages, and there are a couple of relevant entries in the works concerning a client request turned free software that will complement this blog post right here. I'll probably formalize the tutorials section and reintroduce the previously available entries after some revision.


Q: Is the RSS feed dead?

Not yet! Because this site is custom made using original software, I have to manually update every single relevant log file, be it the SQLite database, a README.md inside a repo, the project's HTML page, or the RSS feed. As the website's content grew, so did the amount of work it takes to keep everything synchronized. I think I've organized my workflow to minimize that redundancy now, so I'll be refreshing the RSS feed from now on.
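
To give an idea of the glue involved, here's a minimal sketch of regenerating a feed straight from the database (the schema, filenames and column names are hypothetical; the real generator is part of the site's unpublished tooling):

#include <sqlite3.h>
#include <cstdio>

int main()
{
    sqlite3 *db = nullptr;
    if (sqlite3_open("blog.db", &db) != SQLITE_OK)
        return 1;

    // Pull every post, newest first, and emit a bare-bones RSS 2.0 feed
    const char *sql = "SELECT title, date, url FROM posts ORDER BY date DESC;";
    sqlite3_stmt *stmt = nullptr;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
        return 1;

    std::printf("<rss version=\"2.0\"><channel>\n");
    while (sqlite3_step(stmt) == SQLITE_ROW)
        std::printf("<item><title>%s</title><pubDate>%s</pubDate><link>%s</link></item>\n",
                    reinterpret_cast<const char *>(sqlite3_column_text(stmt, 0)),
                    reinterpret_cast<const char *>(sqlite3_column_text(stmt, 1)),
                    reinterpret_cast<const char *>(sqlite3_column_text(stmt, 2)));
    std::printf("</channel></rss>\n");

    sqlite3_finalize(stmt);
    sqlite3_close(db);
}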


Q: Are any of the old social media accounts active? Can I follow you in any other platforms?

I used to manage an official xenobyte.xyz Twitter account, used solely as a backup news feed, that has since gone inactive. There just wasn't much of an incentive to keep the account alive beyond its purpose, and it was slowly left for dead. However, I occasionally receive emails asking whether some seemingly related online profile belongs to me or represents the site, a potentially dangerous confusion that can only be remedied by either emphasizing our lack of participation in such services or keeping an officially endorsed presence on said platforms. It'll most likely be the latter, stay tuned.
[#121] [Thu, 17 Jun 2021 16:12:36 CST][tech]
// The Lainchan Webring Saga

For the past year or so, I've been following a series of threads on the lainchan imageboard centered around creating an anonymous webring of sorts. The idea behind it is to unite the many personal web projects that remain scattered throughout cyberspace into a decentralized confederation of related sites: a graph where every node is its own independent realm with a section that links to other such places, removing the need for any centralization while retaining the common context of the projects. I've yet to add the more recent ones, but I'm confident that most of the participating sites have been added to the links section by now, alongside many other unrelated links.

Current websites in the webring

The chain of threads (currently on its fifth iteration) spans hundreds of posts and around 60 or so participating domains, each an individual entity somewhere in the vastness of cyberspace, linked to the rest of the mesh by will and interest alone.
Needless to say, these types of communities are very reminiscent of the pre-corporate internet, before "social media" giants held a monopoly on information exchange, when finding like-minded netizens required extra effort and technical skill. Each newcomer first had to come across a webring, usually by surfing around forums or imageboards of interest, then join the community and eventually the webring itself. This clever use of third-party platforms to strengthen individual domains, although not that common of an occurrence (mainly because setting up a personal website acts as a filter), has been bridging personal projects since the early days of the web.

It's no surprise that these same users contribute all sorts of interesting ideas to their respective circles; some anons in the lainchan webring even started experimenting with scripts that illustrate the current state of the webring's mesh as a node graph.

The original concept post

A second poster then uploaded a graph generated by his own script.

A more readable and hilarious graph (post)

A quick glance at the graph makes it apparent that the xenobyte.xyz node introduces many unrelated links that the script still counted as part of the lainchan webring. To facilitate anon's efforts, I created a separate page that hosts only the lainchan webring links, right here.
I'll make sure to contribute something cool of my own in the future.
[#118] [Sun, 18 Apr 2021 05:51:38 CST][tech]
// High accessibility with a discriminating filter

I constantly complain about the lack of website diversity on the modern internet. Sadly, the last remnants of the free, community-driven internet have mostly faded into a concept of the past. Simply put, corporate influence over the infrastructure runs deep enough that, were it not for the scattered efforts of dedicated enthusiasts, the internet would be completely devoid of its original potential, becoming indistinguishable from any other lifeless, monopolized endeavor.

Surviving in cyberspace as a legitimately free and autonomous community requires total commitment to the engineering, administration, maintenance, legality and moderation of the platform, such that only those that manage to turn a profit that justifies the effort and risk have any chance of survival. Even then, the myriad of ways that the all-powerful competition can undermine smaller platforms makes it implausible for anything but the most sophisticated, underground efforts (onion markets, private decentralized messageboards, etc.) to have a fighting chance. Digital platforms on the mainstream internet are, therefore, always at risk of being targeted and dismantled, further discouraging the rise of new online ventures and keeping the ones online in check. Step out of line, and you'll be silently culled.

Decimated as the autonomous domains have been, there has been a comparatively small yet symbolically victorious resurgence of anti-conformism with the current state of affairs, led by the programmers and engineers that the monopolization of the net has displaced. The few independent platforms that still hold some relevancy have made it this far partially thanks to the collective, organic efforts of their respective administration and user base in defiance of the dystopian alternatives.

However, finding such communities is becoming an increasingly difficult challenge. In particular for the newer generations that, despite being born in a post-internet world, rarely venture beyond their mainstream prisons and thus have been conditioned to rely on corporate solutions for what, in reality, are very basic services. To make matters worse for these unfortunate individuals, their mega-corporations of "choice" are most definitely, ruthlessly undermining their clientele by abusing their trust and ignorance, often through sophisticated means that most are not willing to even believe are real.

Nevertheless, the aforementioned digital realms where the last exchanges of unfiltered, untainted ideas still happen remain open, eagerly awaiting new participants to nurture their communities. A good example of such platforms is lainchan.

Lainchan is a cyberpunkish, anonymous imageboard, named after a surprisingly introspective if somewhat niche anime, that revolves around discussing related subjects. Not particularly known for the stereotypical shitposting inherent to anonymous communities, it's one of the last bastions of cyberpunk culture worth participating in, at least on the clearnet. It's publicly available and requires no registration to post, making it an easy target for hackers, trolls and shills; despite this, the board remains relatively clean, probably thanks to its moderate popularity and organized moderation. Regardless, responses on anything but the most active threads do take their time to pour in; it's no reddit or 4chan in terms of sheer population density, that's for sure. That's to be expected, since its pro-freedom / anti-censorship attitude gets invariably associated with some recently conceived social taboos (an excellent way of reinforcing mainstream, sterilized preferences) that filter out many, if not most, potential new users.

Those that do stay around tend to be more than just wandering cybernauts; a good deal of them can be considered aficionados of at least one of the discussed subjects. Such users are among the best additions to any community because they have the drive to participate in the current discussions, and it's the mix of attuned veterans, eager new members and the liberty to exchange ideas that solidifies the platform's culture, crucial for the long-term success of any collective. It's the unwritten rules of the game that ultimately serve as a catalyst for the personality that will be associated with the brand, hence why the mainstream alternatives all seem to complement the same superficial image of sterility and safety that encourages their users to stay within corporate influence. Lest they learn a wrong opinion or two.

Earlier this month, we agreed to help a video game studio with their upcoming MMO by developing a networking tool that aids the debugging and testing of their server's many features. In my opinion, the coolest request we've had yet.

Now, I've been meaning to write a few tutorials about some of the subjects that, in my experience, give aspiring programmers the most trouble. But 2021 has so far been a very busy year with no signs of slowing down. Since I can't really spare the time to start yet another secondary project, I settled on trying to be more thorough about how it is that we get actual engineering jobs done, documenting the relevant parts of the programming process in a tutorial format. Thankfully, the client agreed to release the finished software for free, making it a great opportunity to test out the idea.

Since pretty much all modern operating systems concern themselves with providing only the absolute minimum through their networking APIs, it falls to library developers to implement a more robust solution for user programs to use as an intermediary. Features like multithreading, OS API abstraction, data serialization, packet queuing, SSL / TLS encryption and connection lifetime management are some of the more common demands of a typical, modern networking app. As for the sockets themselves, the UNIX and Windows interfaces are very much the same, but with enough minor differences to merit some abstraction; ironically, the simplest problem to solve. The others? Not so trivial. Thus, we'll be employing the ASIO networking library as our foundation.

Like its name implies, the library focuses on providing asynchronous input & output that can be used in a variety of ways, networking and serial communication being among the more common. Socket-centric projects, in particular, don't really have much of a choice but to adopt an asynchronous approach to combat the aforementioned problems; not so easy in the 90s, but nowadays tools like ASIO and modern C++ optimize the process quite nicely. Hence, the birth and rise of independent, high-performance services, like private MMO servers (of which ASIO has powered many) and crypto trading bots, have helped cement ASIO as arguably the best networking library for C++. So much so that I wouldn't be surprised if it (eventually) gets incorporated into the official C++ standard, not unlike a few other Boost projects.

The main idea is to get around the unpredictability of sockets by calling the relevant functions from a different thread, so the rest of the program never stalls, and even though ASIO makes this (relatively) easy, there are still a few details to consider.
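
To give an idea of what that looks like in practice, here's a bare-bones sketch of the threading setup (my own illustrative code, not the tool's actual source, and it assumes the standalone ASIO headers): the io_context is handed to a pool of worker threads, and every socket operation is posted asynchronously so the calling thread never blocks.

#include <asio.hpp>
#include <algorithm>
#include <thread>
#include <vector>

int main()
{
    asio::io_context context;
    // Keep run() from returning while no async work is pending yet
    auto guard = asio::make_work_guard(context);

    // One worker per hardware thread; every async handler runs on one of these
    std::vector<std::thread> workers;
    const unsigned count = std::max(1u, std::thread::hardware_concurrency());
    for (unsigned i = 0; i < count; ++i)
        workers.emplace_back([&context] { context.run(); });

    // async_connect returns immediately; the handler fires on a worker thread
    asio::ip::tcp::socket socket(context);
    asio::ip::tcp::endpoint server(asio::ip::make_address("127.0.0.1"), 9000);
    socket.async_connect(server, [](const asio::error_code &error) {
        // We're on a secondary thread here: no ncurses calls allowed
    });

    guard.reset(); // Let run() exit once all pending work completes
    for (auto &worker : workers)
        worker.join();
}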

The client also specified that the program will be running on virtual machines without a GUI, and given the I/O (not to mention A5STH5TIC) limitations of a vanilla console, ncurses was added as a dependency. Even though it's older than me, it has yet to be trumped as the king of console UI libraries, and with good reason: it's very efficient, fast, tested, documented, easy to incorporate and likely already installed on whatever UNIX system the client will be using. However, it has absolutely no multithreading support; calling an ncurses function from anywhere other than its designated thread can trigger memory-related crashes. A critical limitation for a tool that requires multiple threads to work, but by no means an impossible one.

Consolidating the threading discrepancies between ASIO's internals and ncurses is all about discerning how ASIO will (asynchronously) communicate with ncurses, and making sure this only happens in a predictable, thread-safe manner.
This extends to the internal I/O buffer storage and packet queue as well: any thread pushing a packet into a queue for incoming data must do so in harmony with the rest of the threads processing the ASIO io_context, which may trigger an operation on the queue while it's already in use. So, for starters, we have to make note of where such events will happen.
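
In practice, that means wrapping every shared container in a mutex. A minimal sketch of the kind of guarded queue I'm describing (the names are mine, purely illustrative):

#include <deque>
#include <mutex>
#include <string>
#include <utility>

// Shared between the ASIO worker threads (producers) and the main
// ncurses thread (consumer); every access goes through the mutex
class SafeLogQueue
{
public:
    void push(const std::string &line)
    {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(line);
    }
    // Swap the contents out in one short critical section so the
    // consumer can process the lines without holding the lock
    std::deque<std::string> drain()
    {
        std::lock_guard<std::mutex> lock(mutex_);
        return std::exchange(queue_, {});
    }
private:
    std::mutex mutex_;
    std::deque<std::string> queue_;
};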

The graph pinpoints the critical sections of the executable and their relationships, starting with the beginning of the main thread (top left). As previously stated, all ncurses functions have to be called from the main thread or risk undefined behavior. Luckily, there are only three relevant functions, out of which only one should be accessed by secondary threads: the window output / system logging function. This is the only access to the window's output available, meaning its internal container must be protected against simultaneous writes by the rest of the threads.

Do note that even though the UI drawing function will be invoked solely by the main thread, the container storing the window's output could be modified by a secondary thread while the draw function is processing it. This container has to be protected during UI processing as well, or it could be updated as it's being iterated through. That makes two sections of the main thread that must be accounted for when writing the program. As for the rest of the threads, the relevant action will be happening in a handful of functions that interact with the sockets themselves: requesting or accepting connections, reading from and writing to the sockets, processing requests and logging through ncurses will all be done asynchronously from any of the secondary threads.
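
The consumer side of that queue then lives exclusively in the main thread: drain the pending lines in one short critical section, then render without holding the lock. A sketch, reusing the SafeLogQueue from above:

#include <ncurses.h>

// Main thread only: empty the shared queue, then render lock-free
void drawLog(SafeLogQueue &log, WINDOW *window)
{
    for (const auto &line : log.drain())
        wprintw(window, "%s\n", line.c_str());
    wrefresh(window); // Flush the updated window to the terminal
}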

Now, what exactly is to be done to protect these vulnerable data structures? The most common approach is to use locks and mutexes to limit access to the protected resources to one thread at a time. Despite not being the fastest spell in the book, it can still perform at a superb level as long as the locking period of the mutexes remains as short as possible. This is another key reason why performance-sensitive netcode should always be planned ahead of time: if we design a multi-threaded program around locking and waiting but fail to account for the workload of a given function, we risk bottlenecking the tool's performance with code that is often difficult to change.

In our case, the only procedure that risks such behavior is the request resolution function; the rest shouldn't be doing much more than adding or removing a small object (representing a packet) from the connection's internal packet queue, but resolving requests may entail more complex behavior that wouldn't play well with lock-based designs. Still, so far we've covered most of the expected pitfalls of working with high-octane netcode in the context of our project.

I'll cover the source code in the next post, this one has grown long enough.
[#112] [Sun, 07 Mar 2021 05:24:09 CST][tech]
// Project request: MMO server debugging tool

Got a project request from a game developer asking for a tool to thoroughly test custom, TCP-based network protocols to help debug an MMO server still in development. I was looking for an excuse to write a similar program to spice up the next SkeletonGL game, and the client was happy to open source the final product, so I accepted the request despite the difficulty and time constraints. After all, a good chunk of our business involves these kinds of specialized, performance-focused solutions. Few are this interesting, though.

Having worked with game servers in the past, I was well aware that projects involving sockets tend to be either very fast and straightforward, or hard-to-debug, convoluted puzzles that demand a solid foundation of software engineering principles from the programmer and an efficient design philosophy to overcome the many pitfalls of bad netcode.

Getting around the limitations of the operating system's socket interface is not as trivial as its simplicity may initially suggest, though much of the challenge lies in the dynamics of networking itself rather than interface shortcomings. A TCP socket, for example, may terminate a connection at any given moment by different means, suffer unpredictable time delays between endpoints in a synchronized system, or be slowed to a halt serving a client with a particularly shit connection that bottlenecks the server; maybe someone tries to connect using networking tools (like the one requested) to mess with the service. The list of potential critical scenarios goes on.

With that in mind, here's a quick rundown of what the client asked for:

  • Traffic simulation and analysis tool
  • Support for custom, TCP based networking protocols
  • IPv6 support
  • Each instance must be able to act as a server as well
  • Console based GUI, will be mostly run in a VM terminal
  • Support for as many threads as the CPU allows
  • SSL / TLS 1.2 support (see the sketch below)

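Regarding that last requirement, the TLS side maps pretty directly onto ASIO's SSL wrapper. A bare-bones sketch of a TLS 1.2 client handshake (illustrative only, the hostname is a placeholder and OpenSSL is assumed as the backend):

#include <asio.hpp>
#include <asio/ssl.hpp>
#include <iostream>

int main()
{
    asio::io_context context;
    // Restrict the context to TLS 1.2, per the client's spec
    asio::ssl::context tls(asio::ssl::context::tlsv12_client);
    tls.set_default_verify_paths();

    asio::ssl::stream<asio::ip::tcp::socket> stream(context, tls);
    asio::ip::tcp::resolver resolver(context);

    // Plain TCP connect first, then the TLS handshake on top
    asio::connect(stream.next_layer(), resolver.resolve("example.com", "443"));
    stream.handshake(asio::ssl::stream_base::client);
    std::cout << "TLS session established\n";
}
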
It's apparent that the idea behind the request is to essentially outsource the aforementioned netcode issues into a templated solution that can be easily modified to fit their testing needs.
Since this is one interesting project, I'll upload another blog post detailing the design and development process soon enough.
[#109] [Thu, 05 Jan 2021 22:12:31 CST][tech]
// Hiding in plain sight

Deep within the more inquisitive internet communities, beyond the reach of censorship where the net's original anarchy still reigns supreme, a group of anonymous freedom enthusiasts have been nurturing a very interesting rumor concerning the bitcoin blockchain that has recently gained traction in the mainstream cyberspace. I'm well aware of the countless conspiracies already circulating but unlike most, this one has enough objective merit to warrant suspicion.



Some basic knowledge of how bitcoin operates is required to fully appreciate the gossip, so for those unaware of how the btc blockchain technology works: it's basically a program that maintains a log file of transactions across a network. This oversimplification ignores a lot of details critical to the bitcoin environment, but for the context of this blog post what matters is that:

1. The bitcoin blockchain contains the entire BTC transaction history
2. This blockchain is preserved across all participant nodes
3. It is possible to send bitcoins to an "invalid", personalized address


Bitcoins sent to these fake addresses will be lost; nevertheless, they still form a transaction that must be logged, appending the transaction details (including the fake output addresses) to the blockchain and distributing them across the entire mesh. Valid bitcoin addresses are generated from 256-bit private keys: the corresponding public key is hashed down to a 160-bit (20-byte) string that is then stored in the blockchain as hex and can only be redeemed with the original private key.

This also means the blockchain isn't responsible for ensuring that a given output address can indeed be accessed; it's the user's responsibility to provide an address he holds the private key to. Providing any unique combination of 20 bytes as a destination address is perfectly valid, though it would lock away any bitcoins sent to it, since the blockchain requires the private key used to generate the address to redeem said coins. Such is the price for storing 20 bytes of (unique) data in the BTC blockchain.

In a typical bitcoin transaction, decoding the resulting output addresses from hex to ASCII shouldn't amount to anything but a string of seemingly random alphanumeric characters. That is, unless said address was generated by encoding a custom, 20-byte ASCII string with Base58; in that case, the decoded output reproduces the original ASCII text.
This means that, as long as you don't mind losing a few BTC credits, you can insert 20 bytes of custom (and unique) data into the blockchain, preserving it for posterity across all participant nodes as a decentralized, uncensorable, nigh-untamperable public database.
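
To make the trick concrete, here's a rough sketch of the decoding step (my own throwaway code, not the script linked further down): base58-decode the address, strip the version byte and the 4-byte checksum, and print the remaining 20-byte payload as ASCII.

#include <cstdint>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

// Decode a base58 string (like a bitcoin address) into raw bytes
std::vector<uint8_t> base58Decode(const std::string &input)
{
    const char *alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";
    std::vector<uint8_t> bytes;
    for (char c : input)
    {
        const char *digit = std::strchr(alphabet, c);
        if (!digit)
            return {}; // invalid base58 character
        // Multiply the running big number by 58 and add the new digit
        int carry = static_cast<int>(digit - alphabet);
        for (auto it = bytes.rbegin(); it != bytes.rend(); ++it)
        {
            carry += 58 * (*it);
            *it = carry & 0xFF;
            carry >>= 8;
        }
        while (carry)
        {
            bytes.insert(bytes.begin(), carry & 0xFF);
            carry >>= 8;
        }
    }
    // Every leading '1' in the input encodes a leading zero byte
    for (char c : input)
    {
        if (c != '1') break;
        bytes.insert(bytes.begin(), 0);
    }
    return bytes;
}

int main()
{
    // Layout: 1 version byte + 20 payload bytes + 4 checksum bytes
    auto bytes = base58Decode("15gHNr4TCKmhHDEG31L2XFNvpnEcnPSQvd");
    for (size_t i = 1; i + 4 < bytes.size(); ++i)
        std::cout << static_cast<char>(bytes[i]);
    std::cout << '\n';
}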

Decoding address "15gHNr4TCKmhHDEG31L2XFNvpnEcnPSQvd" reveals the first section of a hidden JPG



Taking the previous rundown into consideration, it's to be expected that some bitcoiners use the blockchain to store and secure small amounts of data in creative ways. A practice so aligned with the blockchain's principles that Satoshi Nakamoto himself famously embedded the following string into the very first bitcoin block, the "genesis block" (though he added it through the coinbase, since he mined the first block, rather than through the previously explained method).

Satoshi Nakamoto's message embedded in the genesis block

It's at least evident that he designed bitcoin to allow for arbitrary data, taking full advantage of the technology and serving both as an alternative to our current FIAT overlords and as a digital wall for the community to graffiti. So, when I learned that "someone" had supposedly hidden some very damning blackmail material on very important and influential people in the bitcoin blockchain, I was more than willing to believe it. In fact, Wikileaks had previously done something similar by embedding a 2.5MB zip file full of links to the actual data, though it was done in a rather obtuse way to prevent data scavengers from stumbling onto a cache of leaked secrets.

Like this 'lucifer-1.0.tar.gz' file found in TX hash # aaf6773116f0d626b7e66d8191881704b5606ea72612b07905ce34f6c31f0887

Today, there is even an official way to add messages to the blockchain by using the OP_RETURN script opcode. Bitcoin was to be used as a broker of both credit and information from the very beginning.

If you'd like to 'mine' the blockchain for secrets, I wrote a simple script that decodes all the output addresses in a transaction; it queries Blockchain.com, so you don't have to download the entire blockchain. As previously explained, this alone may not be enough to peel away all the potential layers of obfuscation, but it is enough to take a quick peek at what the bitcoin community has been sneaking into the blockchain.
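
For reference, the gist of such a script, re-sketched here in C++ with libcurl (treat the endpoint format as an assumption from memory of Blockchain.com's public API): fetch the transaction's JSON, crudely pull out every output address, then feed each one to a base58 decoder like the one sketched above.

#include <curl/curl.h>
#include <iostream>
#include <string>

// Append each chunk of the HTTP response body into a std::string
static size_t writeChunk(char *data, size_t size, size_t nmemb, void *userp)
{
    static_cast<std::string *>(userp)->append(data, size * nmemb);
    return size * nmemb;
}

int main(int argc, char **argv)
{
    if (argc < 2)
    {
        std::cerr << "usage: txdump <tx_hash>\n";
        return 1;
    }
    std::string url = "https://blockchain.info/rawtx/" + std::string(argv[1]);
    std::string body;

    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeChunk);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    CURLcode result = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    if (result != CURLE_OK)
        return 1;

    // Crude extraction of every "addr" field from the JSON reply
    for (size_t pos = body.find("\"addr\""); pos != std::string::npos;
         pos = body.find("\"addr\"", pos + 1))
    {
        size_t start = body.find('"', body.find(':', pos)) + 1;
        size_t end = body.find('"', start);
        std::cout << body.substr(start, end - start) << '\n';
    }
}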

Bitcoin logo added as a two part JPG in ceb1a7fb57ef8b75ac59b56dd859d5cb3ab5c31168aa55eb3819cd5ddbd3d806 & 9173744691ac25f3cd94f35d4fc0e0a2b9d1ab17b4fe562acc07660552f95518


ASCII tribute in TX 930a2114cdaa86e1fac46d15c74e81c09eee1d4150ff9d48e76cb0697d8e1d72

Just what could be hiding in the blockchain?
[#79] [Mon, 13 Apr 2020 21:00:41 CST][tech]
// On programming "culture"

During these trying times, one would expect those with so-called "higher education" to be among the few to manage through, yet most of my professional colleagues are having just as hard a time as those without degrees. Those outside this case are among the few that weren't completely and shamelessly abused by the academic system that promised them so much but delivered so little. This counterintuitive conclusion to their years of "sacrifice" shouldn't surprise them at all, for at some point during their training they inevitably realized an ugly truth about their environment (and by proxy, themselves) that the schooling system itself encouraged them to ignore.

It's truly incredible how relatively unskilled the newer generation of software engineers is compared to the previous one. It turns out that the limited capabilities of the computers available in the 80s indirectly promoted software design based around pragmatism and simplicity at the cost of counterproductive "user friendliness". The initial effort required to progress beyond the casual tier of programming skill was much better spent learning to code the most trivial of software on a Commodore64 than it is doing anything most college courses would have you do. These two paths have wildly different destinations despite their superficial similarities: he who understands an assembly language has nowhere to move but forward, while he who understands a scripting language has skipped through many essential, core concepts. A terrible tradeoff considering the long-term implications.

This ties into the newfound problem of wasting digital resources. With every leap in computational performance came a wider margin of error that eventually expanded into a pair of metaphorical training wheels, which have today become an engineering standard. After all, why bother with the (slightly) slower, more methodical ways of the past when modern solutions like javascript abstract so much away?

Even the highest tiers of programming jobs are disproportionately managed by the clueless with relative success, unaware that Moore's law acted as a safeguard for their terrible design decisions all along, and instead praising whatever corporate buzzword they were told to repeat to their team as the answer to their woes. All while ignoring the flaws in their ways that have caused the overall quality of AAA software to fall to hilarious lows.

20 billion dollar software company quality (source: @colincornaby)

Our lack of real standards has paved the way for incompetence in the name of "tolerance"; we have become obsessed with what's trendy for the worst reasons; we've grown to accept nepotism as the arbiter of our professional hierarchies. It's not just programming culture that has been twisted into a cult of fools by the terribly assessed needs of the market. Every step forward gets overshadowed by the three steps back our tainted ways tax us; we are no longer subject to cold rationality but rather to parasitic ideologies. As the coronavirus pandemic reaches its apex, how much longer will we be able to keep this charade up?
[#62] [Sat, 04 Jan 2020 18:04:32 CST][tech]
// The war for cyberspace

Hackers managed to claim fdlp.gov less than 24 hours after the death of Soleimani, a high-ranking Persian general who apparently had quite a following back home.

I was lucky enough to take a screenshot of the colorful changes the hackers made; a minute later, the page defaulted to a generic MySQL error and finally a Cloudflare 404.
fdlp.gov post pwn

The reactions I've seen so far have been surprising, as the matter has been mostly disregarded as a laughable response to America's direct attack. Vandalizing the Federal Depository Library Program website (regardless of the message, which was clear enough) may seem pointless to the uninitiated on such subjects, but the truth is that the security of federal servers is a governmental responsibility, and it failed. The real concern lies in how they broke into the server: if they used a zero-day exploit, they could've been using similar methods to penetrate federal cybersecurity for who knows how long.

The source code itself was quite straightforward, consisting of just two images and some text, but interestingly enough there was a Google Analytics API call embedded as an inline JS script.
thanks google

It's worth pointing out that the images the site used were locally referenced; in other words, although it's not clear how they defeated the system, it's evident they could at least dump files onto the server's filesystem.

This opens up a world of possibilities for some serious shenanigans...
[#49] [Sat, 24 Jul 2019 11:01:51 CST][tech]
// Super Mario 64 has been accurately decoded

Nintendo apparently left debugging symbols in the production ROM of the original Super Mario 64. On top of that, the ROM was compiled with no compiler optimizations whatsoever. Inevitably, someone took advantage of this and decompiled the ROM, then recompiled the resulting code, and surprisingly enough, the hashes of both ROMs match. This means that the decompiled code is an exact reproduction of the original Super Mario 64 source code written back in the early 90s, with the only distinction being the loss of all variable and function names, as the compiler replaced them with its own identifiers. This makes the source code a challenge to navigate.

Still, someone actually took the time to decipher all the gibberish, and by the time the team was around 60% done (by their own estimates), someone leaked the project. I was lucky enough to download it just before the sole link I could find was taken down, and after a quick peek at the code I can confirm the team did a great job; the code has been restored enough to be mostly readable. The fact that so much has already been decoded means it's only a matter of time before the original SM64 source code is fully restored and released to the public.

Left: renamed code | Right: raw decompiler output
Games of this era were engineered for old, basic systems, leaving little room for bloat. There were no fancy third-party engines, no interpreted languages or scripting capabilities; even learning sources were extremely limited.

Super Mario 64 was written in a brutal mix of old-school C and MIPS assembly that scares modern programmers, hence the lack of a reaction to the news despite the game's popularity. I found out about the leak about two weeks after it happened, hoping that at least I wouldn't have to try and compile the ROM by myself, since cross-compiling for old, very specific CPUs can be a long, tedious process for even the most seasoned veterans. After much fucking around in my main arch installation, I finally got it to work by manually compiling binutils and gcc for the MIPS R4300i architecture, though truth be told, I had to experiment with the dozens of available build parameters until it finally compiled. I would later find out that the entire process is unnecessary: Ubuntu 18 LTS actually ships the entire MIPS build chain in its repos, and it just so happens to just werk with the ROM's build process. So, if you're interested in compiling your very own Super Mario 64 ROMs, all you have to do is:

1. Set up an Ubuntu 18.04 LTS virtual machine (or use your main machine if you're a weenie)

2. Install the build dependencies
sudo apt-get update
sudo apt-get install make git binutils-mips-linux-gnu python3 build-essential pkg-config zlib1g-dev libglib2.0-dev libpixman-1-dev libcapstone3

3. Download the qemu-irix binary from here, give it permission to execute and add an environmental variable called "QEMU_IRIX" pointing to it
mv ~/Downloads/qemu-irix ~/Documents/ && chmod +x ~/Documents/qemu-irix
echo "export QEMU_IRIX=$HOME/Documents/qemu-irix" >> ~/.bashrc

4. Restart the machine.
5. Download the source code from here and compile with make. The ROM can be found in build/us/sm64.u.z64
6. Get sued by Nintendo.
ROM successfully compiled
Freshly compiled ROM running on mupen
Expect some very cool projects in the future.
[#28] [Sat, 23 Jan 2019 17:43:29 CST][tech]
// The economics of NEOHEX.XYZ

EDIT 08/11/2020 : The site was rebranded from NEOHEX.XYZ to XENOBYTE.XYZ

I was asked by a curious visitor about the costs and effort required to run a small, simple web project such as this.
NEOHEX (previously xenspace) used to be hosted on an array of outdated and neglected computers that I had access to back in college. The infrastructure was there for students to host their projects but remained practically untouched for years. Since the cost of running the server was close to zero, I left the site to rot and focused on other projects, only to eventually come back to NEOHEX to host my newer work. With the site's codebase outdated and in need of an overhaul, I ended up rewriting it in its entirety, moving away from its boring, plain facade as well.

With its internals optimized, I was able to move the project to the cheapest hardware available and even gain a boost in performance, a major bonus that most modern frameworks and tools take away from the developer in exchange for a lower skill ceiling and overall control of the software.

The costliest part of it all is renewing the domain name, which comes out to around $3.30 (USD) a month.
The NEOHEX.XYZ server

The picture above shows the server in its usual idle state; should we require any of the other services (Tibia 7.2 server world, private git repos, VPN or proxy server, client APIs, etc.), they can be enabled simultaneously without taxing the site's performance. All thanks to the meticulous engineering behind the service.