It’s Easy to Break Promises

Context

I’m currently writing a Node.js package for a wrapper I’m calling a curfew-promise. It’s simpler than I thought, but it’s still worth doing in my opinion. The package exports a single function that returns a promise. The function takes three parameters, with the header being (curfew, func, ...args). The wrapper performs func asynchronously (so func itself can be sync or async) and passes ...args to it when it’s called. Lastly, the key idea here is that if func takes longer than curfew milliseconds to complete, the returned promise rejects.

At first, it seemed like a tough task, until I discovered the built-in Promise.race() function. It takes in multiple promises and creates a new promise, which we can refer to as the racePromise; whichever of the passed-in promises resolves or rejects first has its value passed on to racePromise. I can achieve my curfewPromise, then, by creating one promise that runs func and one that rejects after curfew has passed. Whichever finishes first becomes the value of the promise the function returned. The only way I’ve found to pause an async operation in JavaScript is via await new Promise((resolve) => { setTimeout(resolve, duration); }); When placed in any async function, this line pauses execution until duration has passed. You can also remove the redundant brackets and shrink it down to await new Promise(r => setTimeout(r, duration));, but I prefer to be more consistent.
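
Before diving into the package, here is a minimal sketch of how Promise.race() decides a winner (the names here are mine, not the package’s):

const sleep = (duration) => new Promise((resolve) => { setTimeout(resolve, duration); });

// The slower promise resolves with a value; the faster one rejects first,
// so the race settles as a rejection after roughly 100ms.
const slowValue = sleep(200).then(() => "finished in time");
const timeout = sleep(100).then(() => { throw new Error("Curfew elapsed."); });

Promise.race([slowValue, timeout])
    .then((value) => console.log(value))
    .catch((error) => console.log(error.message)); // logs "Curfew elapsed."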

JavaScript is NOT Multi-Threaded

JavaScript runs in a single thread. The way it pretends to be multi-threaded is by switching back and forth between tasks, giving each of them a little bit of time until they all finish. JavaScript is also an interpreted language. While trying to test my package, this single-threaded behavior gave me a lot of trouble. I spent a few hours trying to understand what was going on, and now that I have, I’m going to write about it to save you the trouble.

The Code

I was completely convinced that JavaScript itself was just broken. Here is my package, at least in its current state (if you’d like to use it or see the most recent version, check the GitHub page, soon to also be on npmjs.com):

module.exports = (curfew, func, ...args) => {
    let waitId;

    async function wait() {
        await new Promise((resolve) => { waitId = setTimeout(resolve, curfew); });
        return Promise.reject(new Error("Curfew of " + curfew + "ms has elapsed."));
    }

    async function perform() {
        const value = await func(...args);
        if(waitId) clearTimeout(waitId);
        return Promise.resolve(value);
    }
    
    return Promise.race([wait(), perform()]);
}

wait() creates a promise (using async notation, since from what I’ve read that is more ideal than direct promise notation) that waits until curfew has passed and then rejects with a useful stack trace message. You’ll notice I’m also storing waitId, the ID of the setTimeout call. This way, if func finishes before curfew, I can cancel the timeout and not waste resources. I’ll also be looking into ways to create cancellable promises. I’m aware there are already packages that do this, but I think I’ll benefit from doing it by hand. I could make wait() a synchronous function that simply returns a waiting promise that calls setTimeout with reject, but I chose this form because it matches the form of perform() (two async functions), and it lets me write two visibly neat lines rather than trying to force everything into one.

perform() creates a promise that waits until func is finished. Once it is, it attempts to stop the waiting promise: if the timeout is still pending, it gets cleared; if the waiting promise has already rejected, nothing happens. Then the function returns a resolved promise with the value from func. I chose to write return Promise.resolve(value) instead of return value, which I understand to be the same thing, for consistency once again, and I think it makes the code more readable overall.

Lastly, I create the promises from these two functions and make them race. The exported function returns a promise that resolves or rejects as soon as the first of the two, wait() or perform(), finishes.

The Problem

I was able to narrow down the problem to this line:

        const value = await func(...args);

All of my tests had worked until I did something like the following:

const curfewPromise = require("./index");

// Broken Promise 1
console.log(curfewPromise(10000, async () => { 
    for(let i = 0; i < 1000000; i += .001);
}));

// Broken Promise 2
console.log(curfewPromise(10000, async () => { while(true); }));

I found that for broken promise 1, despite the fact that curfewPromise returns a promise, the call wouldn’t even return until the for loop finished. Even if I set the curfew to 0, it would take until the for loop completed. Broken promise 2 would never end at all. I purposefully wanted to test these edge cases, where something actually takes a very long time or runs forever (a good example of why I want the ability to cancel a promise).

The problem that I so frustratingly realized last evening is, and say it with me, JavaScript is NOT multi-threaded. Yes, when you run an asynchronous function, you can do other things in the meantime while it finishes. But as far as I can tell, JavaScript only switches between tasks at certain points; it can’t interrupt a chunk of synchronous code in the middle. If you have a single line that is very slow or infinite, JavaScript has to run it to completion before anything else gets a turn. In the case of an infinite while loop, that never happens. In the case of a very slow one-line for loop, nothing else can run until the for loop is finished. Promise.race() doesn’t even get called until then, so curfewPromise can’t return its promise. And once the for loop actually ends, which of the two no-longer-neutral promises is chosen to have won the race comes down to timing details. (Also keep in mind that with very small curfews there is internal delay, and it may not behave how you expect.)
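
If you want to see the blocking for yourself, here is a rough illustration (not part of the package): the 0ms timer below can only fire once the event loop is free, and the tight synchronous loop never gives it a chance until it finishes.

setTimeout(() => console.log("timer fired"), 0);

(async () => {
    for (let i = 0; i < 1e8; i++) {
        // Uncommenting the next line yields back to the event loop periodically,
        // and "timer fired" prints long before the loop is done.
        // if (i % 1e6 === 0) await new Promise((resolve) => { setImmediate(resolve); });
    }
    console.log("loop finished"); // with the line above commented out, this prints first
})();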

Conclusion

JavaScript is not multi-threaded. Make sure you keep that in the back of your mind. I was inadvertently testing for a case that shouldn’t ever even happen. While it was frustrating trying to figure out what was wrong, I still think that this kind of struggle is necessary in both life and learning. I always learn the most in coding when I’m trying to find out how to do something and I have 20 tabs open.

While this is a simple package that many developers could probably write easily, I still want to spend time making it and even upload it to npm. Even though it’s simple, the function still helps clean up syntax, improve readability, and reduce lines. Making it an external package also helps reduce complexity in your projects and lets me use it across multiple projects. Maybe someone can even look at it for help when learning how promises work.

At the end of the day, promises and asynchronous functions are an incredibly valuable tool in JavaScript and the illusion of multiple threads helps in most situations. Just be careful to not break your promises.

Running Bash Commands in Node.js

Disclaimer:

This is not explicitly a “guide”. There are far better resources available on this topic; this is simply a discussion about the child_process module.

Anecdotal Background

I’ve been in the process of creating programs to automate running my Minecraft server for my friends and me. In that process, I’ve created a functional system using a node server, a C++ server, and a lot of bash files. While it works, I can’t be sure there aren’t bugs without proper testing. Trust me, nothing about this project has had “doing it properly” in mind.

When learning front-end JavaScript, I recall hearing that it can’t modify files for security reasons. That’s why I was using a C++ server to handle file interactions. Little did I realize node can easily interact with files and the system itself. Once I have the time, I’m going to recreate my setup in one node server. The final “total” setup will involve a Heroku node server, a local node server, and a Raspberry Pi with both a node server (for wake on LAN) and an nginx server acting as a proxy to add security for the local node servers.

Log

As a bit of a prerequisite, I’ve been using a basic module as a simple improvement on top of console.log. I create a log/index.js file (which could simply be log.js, but I prefer having app.js be the only JavaScript file in my parent directory; the downside of this approach is that you end up with many index.js files, which can be hard to edit at the same time).

Now, depending on what I need for my node project I might change up the actual function. Here’s one example:

module.exports = (label, message = "", middle = "") => {
    console.log(label + " -- " + new Date().toLocaleString());
    if(middle) console.log(middle);
    if(message) console.log(message);
    console.log();
}

Honestly, all my log function does is print out a message with the current date and time. I’m sure I could make it significantly fancier, but this has proved useful when debugging a program that takes minutes to complete. To use it, I do:

// This line is for both log/index.js and log.js
const log = require("./log"); 

log("Something");

Maybe that’ll be useful to someone. If not, it provides context for what follows…

Exec

I’ve created this as a basic test to see what it’s like to run a minecraft server from node. Similar to log, I created an exec/index.js. Firstly, I have:

const { execSync } = require("child_process");
const log = require("../log");

This uses the log I referenced before, as well as execSync from node’s built-in child_process module. This is the synchronous version of exec, which is ideal for my purposes. Next, I created two basic functions:

module.exports.exec = (command) => {
    return execSync(command, { shell: "/bin/bash" }).toString().trim();
}

module.exports.execLog = (command) => {
    const output = this.exec(command);
    log("Exec", output, `$ ${command}`);
    return output;
}

I create a shorthand version of execSync, which is very useful by itself. Then, I create a variant that also writes a log. From here, I found it tedious to enter multiple commands at a time and very hard to perform commands like cd, because every time execSync is run, it begins in the original directory. So you would have to do something along the lines of cd directory; command or cd directory && command, both of which become incredibly long commands when you have to do a handful of things in a directory. So, I created scripts:

function scriptToCommand(script, pre = "") {
    let command = "";

    script.forEach((line) => {
        if(pre) command += pre;
        command += line + "\n";
    });

    return command.trimEnd();
}

I created them as arrays of strings. This way, I can create scripts that look like this:

[
    "cd minecraft",
    "java -jar server.jar"
]

This seemed like a good compromise: scripts look almost syntactically the same as an actual bash file, while I can still handle each line as an individual line (which I wanted so that when I log each script, each line of the script begins with $ followed by the command). Then, I just have:

module.exports.execScript = (script) => {
    return this.exec(scriptToCommand(script));
}

module.exports.execScriptLog = (script) => {
    const output = this.execScript(script);
    log("Exec", output, scriptToCommand(script, "$ "));
    return output;
}

Key Note:

When using the module.exports.foo notation to add a function to a node module, you don’t need to create a separate variable just to reference that function from inside the module (without typing module.exports every time). At the top level of a CommonJS module, the this keyword refers to the same object as module.exports.
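
Here’s a tiny sketch of that note (the function names are just placeholders, not from my exec module):

// In a CommonJS module, the top-level `this` is the same object as module.exports,
// so the second export can call the first one through `this`.
console.log(this === module.exports); // true

module.exports.greet = () => "hello";
module.exports.greetLoudly = () => this.greet().toUpperCase(); // calls the export above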

Conclusion

Overall, running bash, shell, or other terminal commands from node isn’t that hard of a task. One thing I’m discovering about node is that it feels like every time I want to do something, if I just spend some time making a custom module, I can do it more efficiently. Even my basic log module could be made far more complex and save a lot of keystrokes. And that’s just a key idea in coding in general.

Oh, and for anyone wondering, I can create a minecraft folder and place the server.jar file inside it. Then, all I have to do is:

const { execScriptLog } = require("./exec");

execScriptLog([
    "cd minecraft",
    "java -jar server.jar"
]);

And, of course, set up the server files themselves after they generate.

Don’t be Overwhelmed by Refactoring

As I’ve mentioned previously, I taught myself how to code a few years ago. I’ve learned a lot more since then, but I’m always learning. The other day, I had an assignment for another course that involved going back and refactoring old code. Since that old code was so bad, I’d like to discuss it.

Concept

The code itself is a terminal-based Java calculator. I imagine it was really tricky for me at the time, considering the way I implemented it. I didn’t know about the static keyword, nor how to use methods. This serves as a great example of what not to do:

The Original Code

Now, the code itself is so horrendously bad that I have to get creative to even display it. So here are a few entertaining lines. Keep in mind the entire program is within the main method:

int load;
double num1, num2, ans;
boolean on, autoclose, autocontinue, loading, numchecking, divide, multiply, add, subtract, other, help;
String in, in2, in3, in4, in5, operation, fa, status1, status2, status3;
char op;

Now, a sane person might ask, “Why are there so many variables, and why are so many of them named so badly?” The reason is that I didn’t use methods; I used variables to separate control flow. I also find it funny that I had a String for every user input rather than just reusing one, and that I declared all the variables at the top for no reason. Here’s an example of that horrible control flow in action:

in = input.next();

if (in.equals("/")) {
	divide = true;
	numchecking = true;
	operation = "divide";
	fa = "by";
	op = '/';
}

else if (in.equals("*")) {
	multiply = true;
	numchecking = true;
	operation = "multiply";
	fa = "by";
	op = '*';
}

I would ask for input, set these variables, and then later on:

if (numchecking) {
	Thread.sleep(1000);
	System.out.println("Enter your first number.");
	num1 = input.nextDouble();
	System.out.println("Enter another number to " + operation + "  " + fa + " " + num1);
	num2 = input.nextDouble();

	if (divide) {
		ans = num1 / num2;
	}
	if (multiply) {
		ans = num1 * num2;
	}
	if (add) {
		ans = num1 + num2;
	}
	if (subtract) {
		ans = num1 - num2;
	}

	Thread.sleep(3000);
	System.out.println("Calculating...");
	Thread.sleep(1000);
	System.out.println(num1 + " " + op + " " + num2 + " = " + ans);

I have no idea why I used else if earlier but not here; I can only assume I didn’t know how to use a switch statement yet. Anyway, I used Thread.sleep() to add artificial delay to the program for some reason. The code I’ve shown is pretty tame, honestly. Notice the line counts. I recommend taking a look at the original source code so you can really appreciate how horrible it is. Unfortunately, I had to convert the .java file to a .docx file to upload it to WordPress:

The Refactoring Process

I find myself falling into the same hole over and over: when I realize I can do something in a better way but the change is large and intimidating, I prefer to start from the beginning rather than modify the current product. Sometimes that is extremely useful and you just need a solid clean start. Often, though, it’s overkill and wastes time. I tend to do the same thing even in video games.

As an example, one of my all-time favorite games is Factorio, an indie game about building factories and trying to automate everything; the goal is essentially to get the game to play itself. Anyway, I have over 500 hours in this game and I still haven’t reached the end goal of launching a rocket. It’s not because I don’t know how to do it or because I die too quickly. It’s because I’m never satisfied with my factory layout or my world generation settings. When it comes to world generation, I really do have to start over. When it comes to factory layout, I could take the time to manually replace the entire layout and keep my current research. Despite that, I almost always start over.

The saving grace with code is that, when its scale is manageable, refactoring can be incredibly fun and relaxing. Sometimes it’s tough to get started, but once you do, it’s a really good time. I honestly had less trouble with the refactoring than I did with the assignment itself. The assignment wanted me to make a change based on a guide: make the change, write it down, and continue. Instead, I fell into refactoring and made change after change extremely quickly. With code this bad, it was really easy to aim at one thing and make 40 changes that all fit into unique categories. So I prioritized refactoring the code well over documenting the process well.

Virtually all of those variables I had before are gone. Here is the refactored beginning of the code:

private static final String[] OPERATORS = {"+", "-", "*", "/"};
private static Scanner scanner;
private static boolean autoContinue;
private static boolean autoClose;
private static boolean continueOnce;

public static void main(String[] args) {
    boolean programIsOn = true;

You’ll notice I made use of static variables. I only have one variable in the main method, and it controls the program’s main loop. The other variables are now inside methods or simply don’t exist. I even created an array of operators to allow for easier expansion of functionality later on, despite the fact that I’ll almost certainly never come back to this project. I also have 3 booleans on 3 separate lines. This is my personal preference, but with only 3, I would understand simply writing private static boolean autoContinue, autoClose, continueOnce;. I just tend to lean toward keeping things on separate lines, even if the one-line version is tempting.

Before, I showed part of how I took in user input and managed arithmetic operations. I set a bunch of variables and handled it later. Here’s how I manage it now:

private static void handleOperations() {
    String operation = scanner.next();
    System.out.println();

    if(isArithmeticOperator(operation)) {
        arithmeticOperation(operation);
        return;
    }

    switch(operation) {
        case "help":
            help();
            continueOnce = true;
            break;
        case "exit":
            autoClose = true;
            break;
        default:
            System.out.println("Sorry, I don't understand that operation. Try again.");
            continueOnce = true;
    }
}

I created a method to handle it that is called in the main loop. It has good variable names, uses a switch statement, and calls methods that have descriptive names. It could be better, but it is leagues better than what it was before. Earlier in the refactoring process, I had:

private static boolean doUserRequestedMethod() {
    switch(scanner.next()) {
        case "/":
            divide();
            break;
        case "*":
            multiply();
            break;
        case "+":
            add();
            break;
        case "-":
            subtract();
            break;
        case "help":
            help();
            return true;
        case "exit":
            autoClose = true;
            break;
        default:
            System.out.println("Sorry, I don't understand. Try again.");
            return true;
    }

    return false;
}

It returned a boolean value to allow the loop to continue if it needed to, such as when the user entered an unknown command. I replaced that with global (static) variables. I renamed the method for clarity, and I created a single arithmeticOperation() method to avoid code repetition. However, that method itself needed a switch statement, so rather than check the operation twice, I split the arithmetic operations off from the original switch statement in a way that allows me to add more arithmetic operations in the future. I’m pretty happy with that solution.

Lastly, since I showed how the numbers are calculated originally, I should show that in the refactored version. First I have to check if the input was for an arithmetic operation:

private static boolean isArithmeticOperator(String potentialOperator) {
    for(String operator : OPERATORS) {
        if(potentialOperator.equals(operator))
            return true;
    }

    return false;
}

This was part of why I created the OPERATORS array: so I could avoid one long boolean expression chaining equality checks together. Then, for the actual calculation:

private static void arithmeticOperation(String operator) {
    System.out.println("Enter a number: ");
    double leftOperand = scanner.nextDouble();

    String partialEquation = leftOperand + " " + operator + " ";
    System.out.print(partialEquation);

    double rightOperand = scanner.nextDouble();
    double result = leftOperand;

    switch(operator) {
        case "/":
            result /= rightOperand;
            break;
        case "*":
            result *= rightOperand;
            break;
        case "-":
            result -= rightOperand;
            break;
        case "+":
            result += rightOperand;
            break;
    }

    System.out.println(partialEquation + rightOperand + " = " + result);
}

Again, you’ll find decent variable names and clearer control flow. Obviously this code isn’t flawless, but that’s not the real goal of refactoring. The point of refactoring is to create an improvement. You’ll never have perfect code, but you can always improve your code. This is pretty analogous to life itself: recognize the value and functionality that is currently there, recognize that you will never be perfect, but always aim to be better.

Here is the current state of the refactored code. It’s honestly amazing how much better it is:

Conclusion

Refactoring is not something to be feared; it should be enjoyed. There is something very relaxing about it if you enter it with the right mentality. Focus on small things you can easily tackle that would make a big improvement and do that. As you slowly cross off changes you need to make, it’ll become more manageable and more readable. In that program above, I cut the number of lines in half after refactoring. I can actually read it and understand what it’s doing. It should be satisfying to go through and make progress towards simplicity and organization. You just have to want it.

The Insatiable Quest for Prime Numbers

I have been coding at least since 2013. I started off self-taught, and now I’m majoring in CS while still constantly learning outside of classes. I went from language to language, but something that almost always followed me was the idea of a prime number generator.

I must’ve tried this in almost every language I’ve used. It’s a very simple idea: create a console-based program that can both check whether an integer is prime and print out a list of primes in order as quickly as possible. Usually I don’t get too far, for reasons I’ll get into, although I did mostly succeed with C++. I had an incredibly quick program (once I realized how terribly slow it was to print the results to the screen) that ran on multiple threads and could fill up a file with prime numbers. I had two main problems: the file it filled became so big that Notepad couldn’t open it, and the maximum size of an unsigned long.

Looking back at that code, it was a horrible mess, and I’ve come a long way in both C++ and general code design. However, ever since then, my quest has been to create a data structure from scratch that could let me handle enormous primes. I managed to create a class that used strings to hold values. I defined all the necessary arithmetic operations and it worked. Incredibly slowly. I should also mention that while I’m constantly coming back to this idea, my attention wanes as I run out of time between vacations.

Using strings was a horrible idea. Not only are they really slow when you try to treat them like numbers, but they are a huge waste of memory. Each character is 1 byte, and in base 10 I would only be using 10 values per character. Even using base Z (where I use 0-9, then a-z, and finally A-Z, for 62 symbols in total), it would be a very silly waste of space. So I started thinking about how I could store numbers more efficiently. In C++, it really isn’t that hard.

The solution I came up with involved std::size_t (size_t), which in practice is the largest native unsigned integer type the compiler offers. Since I have a decent background in number bases (see my blog post on duodecimal and why we should all switch to it), I was inspired. My design was a linked-list structure of size_t’s. I used a linked list rather than an array because I wanted “small” numbers to not take up a lot of space, I wanted to avoid reallocation, and I also wanted to prevent an upper limit on the size. In theory, this object can be as large as memory allows. If I used something like a std::vector, its size is itself stored as a size_t.

Each size_t spans the full range of the largest native unsigned integer in C++, and each node in the linked list acts like a digit of a base. Let n be the largest size_t value and suppose you have n + 1. That number would basically just be 1 <-> 0 in this form. Then you keep ticking up the 1s digit, and so on. This means the numbers are stored incredibly densely: we’re dealing with base (n + 1). In my case, size_t is 64 bits, so n = 2^64 − 1. To store n^2, you need 2 nodes. To store n^k, you only need 64·k bits.
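
To spell out that density claim in my own notation (assuming a 64-bit size_t): a list of digits d_(k-1) <-> … <-> d_1 <-> d_0 in base b = n + 1 = 2^64 represents

\sum_{i=0}^{k-1} d_i \, b^i = \sum_{i=0}^{k-1} d_i \, 2^{64 i},

and since n^k < (n + 1)^k = 2^{64k}, the value n^k always fits in k nodes, which is 64·k bits.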

This is insanely better than using strings. And since the digits are integers already, I get to avoid conversion. My problem recently has been finding the time and motivation to work on this. I try to write unit tests so I can guarantee it’s working as expected, and I get bored. I also need to find efficient algorithms for the arithmetic operations.

Despite my inability to complete this project, it does a good job of showing the benefits of Object Oriented Programming. I can take all of this complexity and shove it into a class. Once that class is done, I can create the functions to simply calculate primes the normal way.

Checking Primality

The most basic way to check whether an integer is prime is to check whether any positive integer less than it (other than 1) divides it. From there, you might realize that you can skip all of the even numbers after 2, because if 2 doesn’t divide it, then neither will any other even number. You can also skip all integers larger than its square root, since factors come in pairs. A few short proofs would easily show these results to be true, and you can keep refining from there. I’ve also come up with a method of my own.

How come after you check 2 you can skip all of the even numbers? Okay, I suppose I should do one simple proof here. In fact, let me do it in general:
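
Sketched in my own notation, the general claim is just the contrapositive of the transitivity of divisibility:

\textbf{Claim.} Let $a$, $b$, and $p$ be positive integers with $a \mid b$. If $a \nmid p$, then $b \nmid p$.

\textbf{Proof.} Suppose instead that $b \mid p$, so $p = bq$ for some integer $q$. Since $a \mid b$, we can write $b = ak$, and then $p = a(kq)$, so $a \mid p$, a contradiction. $\blacksquare$

Taking $a = 2$ gives the specific case: if 2 does not divide our potential prime, then no even number does.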

I enjoy a nice simple proof every now and then, even though I could’ve just cited the transitivity of divisibility. Anyway, what this is telling us applies to any factor: any time an integer does not divide our potential prime, we can ignore all of its multiples.

What I’m getting at is that the hypothetically most efficient method for verifying primality is to check all primes up to its square root. Since every number is either prime or a product of primes, checking a prime also checks every single multiple of that prime. The problem is that you need a list of primes to check. Hence, my program would store primes and use them to verify whether a number is prime.

You may be thinking: if I have a list of primes, why bother at all with arithmetic? And thinking about it, having an “exhaustive” list of primes (an exhaustive infinite list…) and simply checking that list could be very efficient for verifying that a number is prime. However, it would be pretty inefficient for verifying that the average number is not prime. The reason I’m not using that method is that my list of primes is small and grows slowly; recall that I only need to store primes up to the square root. That means to check 100, I’ll only have 2, 3, 5, and 7 stored. To check 10,000, I’ll only need the primes below 100.

So as I verify primes, the list grows and adds more factors in order. You also want to check the smaller primes first. While you can’t exactly say that more integers are even than are multiples of 5, due to the sets being infinite, any complete finite set of consecutive integers has this property: multiples of 2 make up about half of the elements, multiples of 3 about one third, and so on. So you’ll want to start with the smaller primes first.
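
Here’s a rough sketch of that idea in JavaScript (my own toy version, not the actual C++ project):

// Keep a growing, ordered list of primes and trial-divide only by stored
// primes up to sqrt(n), extending the list on demand.
const primes = [2, 3];

function extendPrimesTo(limit) {
    // Grow the stored list until it covers every prime <= limit.
    for (let candidate = primes[primes.length - 1] + 2; candidate <= limit; candidate += 2) {
        if (isPrime(candidate)) primes.push(candidate);
    }
}

function isPrime(n) {
    if (n < 2) return false;
    const limit = Math.floor(Math.sqrt(n));
    extendPrimesTo(limit);
    for (const p of primes) {
        if (p > limit) break;            // only primes up to the square root matter
        if (n % p === 0) return false;   // divisible by a smaller prime
    }
    return true;
}

console.log(isPrime(97));    // true
console.log(isPrime(10000)); // false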

How you determine divisibility is also up to you. For instance, in base 10 we can rule out any integer whose last digit is 0 or 5 (except the number 5 itself), since that rules out the multiples of 5 and 10. However, when working with duodecimal, I realized different number bases have different properties for divisibility. For example, in base 12 every multiple of 4 or 8 ends in 0, 4, or 8; every multiple of 3 or 9 ends in 0, 3, 6, or 9; every multiple of 6 ends in 0 or 6; and every multiple of 12 ends in 0.

You could potentially have an algorithm to convert the integer to a different number base that is more efficient than performing an arithmetic calculation, especially considering that for these tricks you only need to convert the first few digits of the number. I think it’s a really interesting idea; however, you risk circumventing the benefits of the machine code. Oftentimes you try to make something more efficient, and then it’s either unnecessary or makes performance worse because the compiler was able to do a better job “fixing” your code. Either way, I’ll have to experiment with these ideas in the future. Thinking about it, you only need to convert the number to the base of the prime and then check whether the 1s digit is 0. You can then adjust how many digits you actually convert based on how many digits the base-prime number will have. For larger numbers this might have potential.

Conclusion

While this project may not be the best example, I think it’s very useful for programmers to have a “go-to” project for practicing an unfamiliar language: something simple enough to allow you to write it from memory, while complex enough to give you a good grasp of the language. In this primes project, I have to handle the terminal, arithmetic, functions, objects, files, threads, etc. If I simplified it down and ignored my ambitions, it would serve as a very functional example to allow you to learn the syntax and libraries of a new language. Ideally, you wouldn’t even have the original code to look at; you would just program the functionality from memory, using your IDE and Google to figure out syntax. The best way to learn a language is to use it, and if you have something familiar in your mind, then you already have a great example project to work on. I find that struggling to implement functionality and spending a lot of time searching for a solution has taught me an invaluable amount about the languages I use.

A Strange Way to Use Get Requests

Strap in because this post is going to get very anecdotal.

Backstory

I have a history of running Minecraft servers for my friends and me. A few years ago, I learned how to port-forward a locally hosted server and use it to play on. While it’s technically not the most secure thing in the world, it works really well and it’s produced a lot of fun. For whatever reason, I was thinking about it again recently.

When I’m out of school is usually when we play. I’ve slowly been learning more and more, and I’m currently at the point where I can make intermediate datapacks for the game and use shell/bash/batch scripts to run the server. Over the summer, for instance, I created a shell script to run the server, back it up, and reload it if it crashes.

There’s still one major problem with my servers, however: I need to manually turn on one of my computers and run it, and that computer then has to stay on until I want the server off. It might waste power if no one is using it (also increasing the risk of the world becoming corrupted), and it’s just slowly going to wear out my hardware.

Recently, I remembered that I had two old laptops. Unfortunately, the newer one didn’t have a power cord. So I broke my way in and took out the RAM, hard drive, and wifi card, then swapped those components into the other laptop. Lastly, I installed Lubuntu onto it. It works surprisingly well. I then set up NoMachine so I can remote into the machine without having it in front of me. The BIOS supports wake on LAN, so I took a few hours to get that working. Now, I can leave that laptop plugged in next to my router and send a signal over the internet to wake it up. While the router blocks the magic packet required on ports 7 and 9, a simple port forward allows wake on LAN to work from anywhere. I would definitely not recommend that for most scenarios, but until I find myself the victim of attacks on my router, I think I’ll take my chances.

Now, my goal has been to create programs that run the whole time that laptop is on and can receive requests to launch Minecraft servers. I’ll work out a script later for handling crashes and such, as well as a datapack for automatically stopping the server when no one has been online for 30 minutes or so. So I started work on a node server.

Node.js

While I am learning about node in my CS-343 course, I have already taken the Web Developer Bootcamp by Colt Steele on Udemy and I highly recommend it. This is the basis of my server so far:

const express = require("express");
const app = express();

const server = app.listen(process.env.PORT, process.env.IP, function() {
    console.log("The server has started.");
});

var minecraft_server1_on = false;

app.get("/minecraft/server1/start", function(req, res) {
    res.send("Turning on the server...\n");
    minecraft_server1_on = true;
});

app.get("/minecraft/server1/ping", function(req, res) {
    res.send(minecraft_server1_on);
});

app.get("/minecraft/server1/declare_off", function(req, res) {
    res.send("The server is off.\n");
    minecraft_server1_on = false;
});

app.get("/exit", function(req, res) {
    res.send("Exiting...\n");
    server.close();
});

app.get("*", function(req, res) {
    res.send("The server is running.\n");
});

Now, I’m certain that I’m making mistakes and doing things in a strange way. It definitely is not in a state I would use for anything other than a private server, and even then I’m still working on it.

Here are the important parts: it’s an express app with a few GET routes. I’m running this server from a bash script using nohup so that it can simply run in the background while the PC runs. However, I want to be able to stop the server from another bash script without using process IDs, so I don’t risk closing the wrong thing. While I was considering my options, I thought of something really interesting: I can use the curl command to perform a basic GET request from a bash script. So I created a route, /exit, that shuts down the server when requested.

It’s incredibly simple and there are definitely ways around it, but I had never even thought of using GET requests in this way. There is information in the requesting of a web page beyond the normal stuff; just the fact that a request occurred can be useful. What this means is I can check whether the server is running by pinging most routes. I can also both send and receive boolean data without any real complexity. Once again, I’m sure this violates some design principle. However, for something like this, which is meant to be mostly casual and private, why should I need to set up proper POST routes and databases when I can use this?
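
For example, the consuming side can be a plain GET from anywhere. Here’s a sketch using node’s built-in http module (assuming the same PORT environment variable as above):

const http = require("http");

// Any response at all proves the node server is up; the response body carries
// the boolean state of the Minecraft server ("true" or "false").
http.get(`http://localhost:${process.env.PORT}/minecraft/server1/ping`, (res) => {
    let body = "";
    res.on("data", (chunk) => { body += chunk; });
    res.on("end", () => console.log("Server reachable, Minecraft state:", body));
}).on("error", () => console.log("No response, so the node server isn't running."));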

Here’s my method of turning on a Minecraft server now: I store the state in a variable in the node server. This is fine because when this server stops, all the other servers should have stopped too, so I don’t need a database. Then, by pinging certain routes, I can turn a Minecraft server on, check whether it’s on, and declare from somewhere else that I have turned it off. The node server can maintain the state of the Minecraft servers (I can possibly run multiple on different ports) as well as handle tracking from both inside and outside.

Keep in mind again, I’m looking for the easiest way I know to make this work right now, not the best way to do it. From here, I had no idea how to make the node server actually start a Minecraft server, but I do know how to run one from a bash command. So I created a C++ program that will also run all the time and can periodically check the status of the node server. For instance, if I send a request to the node server to turn on a Minecraft server, the C++ program can detect that change by running system("curl http://localhost:PORT/minecraft/server1/ping");. Once again, I’m using a UNIX command in C++ rather than a wrapper library for curl because it was an easier solution for me. The node server can then return true or false. In fact, the C++ program won’t directly run the command; it will run a bash script that runs the command and stores the output in a file, and the C++ program can then read the file to get the result.

I’m currently still in the process of making this work. After this, I’ll make another node server hosted on Heroku with a nice front end to allow myself and other people to request the laptop to wake on LAN, and then interact directly with those local node servers. I may even make a Discord bot to allow people to simply message in chat to request that the server turn on.

Conclusion

Once again, I do not recommend anyone actually do this the way I have. However, the whole point of coding is to make a computer do what you want it to. If you can hide the GET requests behind authorization (which I will probably do) and fix any other issues, this could be useful. It’s not even specifically about using GET requests in this way. Abstract out and realize that it’s possible to do something in a way you never thought of, and to use something in a way you never have. Consider what you know and explore what’s possible, then figure out whether or not it’s a good practice based on what problems you run into. I think that’s one of the best ways to learn, and if you can find a functional example to fixate on, the way I have, you can find yourself learning new things incredibly quickly.

Don’t Spend So Much Time Coding Cleanly!

There comes a point in every developer’s career when they transition from “make it work” to “oh my God, I need to make this look decent”. Initially, our goal is to create a functional Hello World program. However, as we develop, we slowly begin to learn proper code styles and naming conventions. After reading enough Stack Overflow threads, we become self-aware of how our code looks and begin to focus on code aesthetics. While there’s merit in that, we often focus on the wrong things.

This video does a great job of explaining just that: there isn’t really any such thing as clean code. Now, that isn’t to say there aren’t wrong ways to code; I’m sure we can all name plenty. The main point is that we shouldn’t spend our time trying to write perfect code on the first pass.

The best way to code is to do so as well as we can without spending too much time obsessing over getting everything right the first time. If you know you have the opportunity to do something right or to cut corners, do it right. However, if you’ve spent 10 minutes trying to name a variable, give it a placeholder name and worry about it later. The first priority is functionality. Later on, you can review and refactor your code.

The main goal of refactoring is readability. In a compiled language especially, there is only so much efficiency the programmer can add; compilers do a remarkable job of optimizing code by themselves, so your main goal should be readability. Make sure that other developers, as well as your future self, can read and understand your code. Add comments where necessary, but aim for self-commenting code: if your code is its own comment, that saves space too.

In conclusion, focus on simple and readable code. Don’t waste your time perfecting something that can simply be refactored later, within reason. With that being said, try to do things right the first time when it’s obvious how to do so.

Adapter Design Pattern

I’ve always found Derek Banas’ channel really useful. In late middle school, I started watching his tutorials to learn how to code, so it’s really interesting to go back and watch his videos now, to say the least.

I feel like the adapter design pattern is something that shouldn’t really need to be used. Don’t get me wrong, it’s very useful; but like real adapters, the underlying problem itself shouldn’t really exist. That’s especially true with code: it seems as though adapters become required when interfaces aren’t abstracted well enough. In his video, for example, he has an EnemyAttacker interface that represents some kind of enemy. However, the methods are very specific and presume certain characteristics about the attacker, so we then need an adapter to get around that specificity. It seems to me that writing the interface more generally to begin with would be more ideal. But given that it’s a bad idea to modify old working code, an adapter is a great solution.

In principle, an adapter does exactly what you expect it to: it adapts code. It takes one “interface” and connects it to another “interface” (the colloquial sense of interface, as opposed to a language-specific interface). We use adapters every day. The best example is a phone charger: it converts (or adapts) 120V AC power to a low DC voltage, and it also adapts the physical plug to USB-C. All of the functionality is hidden inside the charger, and from the perspective of a user, it’s literally plug and play.

An adapter in code acts the same way. I honestly like the example he gave, so if you want to see actual code, check out his video. Conceptually, one description of an adapter is the following: we have two interfaces A and B that have many differences but are conceptually similar, and we create an adapter so we can use any A as a B, or vice versa. That works because of their similarity (or perhaps they need not even be similar!). Really, it’s an incredibly basic pattern, so the specifics aren’t that important.
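
As a rough sketch of that description (a toy example of my own, not the code from the video), an adapter can be as small as a wrapper that exposes the method names the caller expects:

// The calling code expects an "attacker" with an attack() method (interface B),
// but all we have is a Robot that only knows smashWithHands() (interface A).
// The adapter translates one interface into the other.
class Robot {
    smashWithHands() { return "The robot smashes with its hands."; }
}

class RobotAttackerAdapter {
    constructor(robot) { this.robot = robot; }
    attack() { return this.robot.smashWithHands(); } // adapt A's method to B's
}

function battle(attacker) { console.log(attacker.attack()); }

battle(new RobotAttackerAdapter(new Robot())); // works even though Robot has no attack()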

Fundamentally, an adapter is just code that allows other code to work together. Again, I think the best way to conceptualize it is through the imagery of real-life adapters. However, it should ideally not be necessary in scenarios such as the one from the video. Abstraction, within reason of course, should be prioritized ahead of time.

Introduction to the Duodecimal System

Prerequisites

  • A basic understanding of base systems
  • A basic understanding of arithmetic

Terminology

  • I will be using the symbol Χ to mean the number dek. This is the duodecimal single digit for the number ten. I personally prefer the upside-down 2 symbol; however, this is easier to type.
  • I will be using the symbol Ɛ to mean the number el. This is the duodecimal single digit for the number eleven.
  • All plain numbers here will be decimal numbers.
  • I will write duodecimal numbers in brackets unless specified otherwise.
    • 12 = twelve = [10]
    • .25 = [.3]
    • ½ = “one half” = [½]
  • When I’m giving an example of another number base, I will write that number in parenthesis. I will specify the base in words.
  • I might ignore those last two rules when I’m very clearly writing a long example in a non-decimal base.
  • To represent repeating decimals, I will be using 3 notations for clarity because I’m apparently unable to overline text:
    • .453… means .45333333 repeating
    • .12r means .1212121212 repeating
    • .16’56’r means .16565656 repeating

What makes base n better than base m?

We use a base system for writing numbers because it allows for complex math in a way that something derived from a tally system wouldn’t. It’s easy to add 500 and 400 in a base system. If you had to actually count 500 and 400 lines to determine there are 900 in total, you probably wouldn’t even bother.

So, are some bases better than others? Yes. The number 500 in binary is (111110100), which requires far more space to express the same value. However, in base 500, 500 = (10). Does that make base 500 even better than base 10 because it takes up even less space? No. While it’s good to condense the size of a number for writing purposes, that’s only true to an extent. If you have to memorize 490 new symbols, it’s hardly any easier to write (10) than 500. The more digits you allow, the denser the base becomes, so we need to find a balance.

How should we then compare base 7 and base 6? Base 7 will only have 1 extra symbol, which isn’t even a new symbol to us in base 10, and it will allow for increased density. That doesn’t mean base 7 is better than base 6, however. 7 is a prime number, so the only factors of 7 are 1 and 7. It’s also odd, which means counting by 2 isn’t as easy as it is in an even base. Notice that the numbers that are easiest to count with, and to perform other arithmetic operations with, are the factors of the base you’re using.

Let’s count in base 7 until we hit a repeat 1s digit:

  1. 0 1 2 3 4 5 6 10
  2. 0 2 4 6 11 13 15 20
  3. 0 3 6 12 15 21 24 30
  4. 0 4 11 15 22 26 33 40
  5. 0 5 13 21 26 34 42 50
  6. 0 6 15 24 33 42 51 60
  7. 0 10

It takes a long time to hit a repeat digit. This means there aren’t simple patterns that are easy to remember. Now let’s count in base 6 until we hit a repeat 1s digit:

  1. 0 1 2 3 4 5 10
  2. 0 2 4 10
  3. 0 3 10
  4. 0 4 12 20
  5. 0 5 14 23 32 41 50
  6. 0 10

As you can see, it’s far easier to count in base 6 than it is in base 7. You may also notice some patterns.

  • Any time you count by a factor of base n, you will repeat after (10) = n.
  • Any time you count by a number a that shares no factor with n, you will repeat after a·n.
  • Any time you count by a multiple of a factor, say k·m where m is the largest factor of n that divides it, you will repeat after k·n.

You also might notice that certain patterns you’re familiar with in base 10 are present in these bases. For instance, the pattern for counting by 9 in base 10 works for counting by 6 in base 7. This is because, in general, the corresponding numbers in any base n share the same patterns. The reason 9 has that pattern of counting up by one 10 and down by one 1 is that 10 = 9 + 1; in a sense, 9 and 1 are inverses. You can also notice that their 1s patterns are the reverse of each other. This is true for all digits less than n. Look at the patterns above and notice how pairs of numbers that add up to the base have inverse patterns in the 1s place.

This isn’t limited to counting. Since addition is repeated counting, subtraction is the inverse of addition, multiplication is repeated addition, and division is the inverse of multiplication, the patterns that show up here apply to those arithmetic operations as well. Let’s look at fractions in base 7:

  1. 1/1 = 1
  2. ½ = .333…
  3. ⅓ = .222…
  4. ¼ = .1515r
  5. ⅕ = .1254r
  6. ⅙ = .111…
  7. 1/10 = .1

As you can see, basically all of them are infinite rational septenary-decimals. Now let’s look at base 6:

  1. 1/1 = 1
  2. ½ = .3
  3. ⅓ = .2
  4. ¼ = .13
  5. ⅕ = .1111…
  6. 1/10 = .1

As you can see, fractions of non-factors result in infinite rationals. It’s clear they’re rational by the fact that we began with a fraction; while there are infinitely many terms, the terms repeat. That being said, it’s still clearly simpler to use expansions that have finitely many digits, ideally one. Also notice that in base n, n − 1 is never a factor except when n = 2. So the fraction 1/(n−1) will always be .1111…, since x · .1111… is .xxxx… where x = n − 1 is the largest possible digit in base n, and that is simply equal to 1.
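
To see why that works in any base n, here is the one-line geometric series argument (my notation):

0.\overline{1}_{(n)} = \sum_{k=1}^{\infty} \frac{1}{n^k} = \frac{1/n}{1 - 1/n} = \frac{1}{n-1},
\qquad\text{so}\qquad (n-1) \cdot 0.\overline{1}_{(n)} = 0.\overline{(n-1)}_{(n)} = 1.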

Counting and fractions are only two of the ways these patterns manifest themselves. However, by now it should be clear that the more factors a base has, the easier it is to use on a day-to-day level. It’s important to know that at a high level, math won’t change; it remains the same to mathematicians regardless of what base we use. The benefits of one base over another come down primarily to ease of learning and ease of day-to-day use. So the following are the main ways a base can be better than another:

  • Less space required (numbers need fewer digits).
  • Fewer digit symbols to memorize.
  • More factors.
  • More prime factors.

I’ve yet to explicitly talk about prime factors. It is more useful to have 2 and 3 as factors than it is to have 3 and 5, because ½ of all integers are even and ⅓ are multiples of 3. I’m aware that those statements are not strictly rigorous when we’re dealing with infinite sets, but I believe you understand what I mean: there are a lot of multiples of small primes. So for an ideal base, we want as many factors as possible without having too many new symbols to memorize.


Duodecimal (Dozenal / Base 12)

So, what’s so good about 12? Well let’s start by comparing 10 and 12 in terms of the aforementioned properties:

  • Base 10 has 10 digits: 0,1,2,3,4,5,6,7,8,9.
  • Base 12 has 12 digits: 0,1,2,3,4,5,6,7,8,9,Χ,Ɛ
  • 10 has 4 factors: 1,2,5,10
  • 12 has 6 factors: 1,2,3,4,6,12
  • They both have 2 prime factors.

Now, how do they compare? In terms of number size, base 12 gives shorter numbers because it has 2 extra digits. Two extra digits isn’t much to remember, but it should still be considered. Notice that [10] = 12, [100] = 144, and [1000] = 1728. In base 12, you can represent 2 more 1-digit numbers, 44 more 2-digit numbers, and 728 more 3-digit numbers. This compounding effect continues to infinity: for just two more digits, you increase the density of numbers considerably.

Now let’s talk about the elephant in the room: the amazing properties of the factors of 12. 12 is divisible by 2 and 3, as well as 4. This means the most common fractions, ½, ⅓, and ¼, will all be one-digit duodecimals. Let’s compare the counting patterns of base 10 and base 12:

Base 10:

  1. 0 1 2 3 4 5 6 7 8 9 10
  2. 0 2 4 6 8 10
  3. 0 3 6 9 12 15 18 21 24 27 30
  4. 0 4 8 12 16 20
  5. 0 5 10
  6. 0 6 12 18 24 30
  7. 0 7 14 21 28 35 42 49 56 63 70
  8. 0 8 16 24 32 40
  9. 0 9 18 27 36 45 54 63 72 81 90
  10. 0 10

Base 12:

  1. 0 1 2 3 4 5 6 7 8 9 Χ Ɛ 10
  2. 0 2 4 6 8 Χ 10
  3. 0 3 6 9 10
  4. 0 4 8 10
  5. 0 5 Χ 13 18 21 26 2Ɛ 34 39 42 47 50
  6. 0 6 10
  7. 0 7 12 19 24 2Ɛ 36 41 48 53 5Χ 65 70
  8. 0 8 14 20
  9. 0 9 16 23 30
  10. 0 Χ 18 26 34 42 50 5Χ 68 76 84 92 Χ0
  11. 0 Ɛ 1Χ 29 38 47 56 65 74 83 92 Χ1 Ɛ0

Now, at a glance, duodecimal might appear more complicated, and I will admit that the patterns for 5 and 7 are more complex than any of the patterns in base 10. However, if you look closely, the rest of the patterns are incredibly simple. Let’s now look at fractions, again in base 10 first and then base 12:

Base 10:

  1. 1/1 = 1
  2. ½ = .5
  3. ⅓ = .333…
  4. ¼ = .25
  5. ⅕ = .2
  6. ⅙ = .1666…
  7. 1/7 = .142857r
  8. ⅛ = .125
  9. 1/9 = .111…
  10. 1/10 = .1

Base 12:

  1. 1/1 = 1
  2. ½ = .6
  3. ⅓ = .4
  4. ¼ = .3
  5. ⅕ = .2497r
  6. ⅙ = .2
  7. 1/7 = .186Χ35r
  8. ⅛ = .16
  9. 1/9 = .14
  10. 1/Χ = .1’2497’r
  11. 1/Ɛ = .111…
  12. 1/10 = .1

I’ll be the first to admit that when base 12 has repeating fractions, they are more complex than the repeating fractions in base 10. However, because of 12’s factors, this happens less often, and it means the most common fractions are simple one-digit duodecimals. Since education is such a big worry when it comes to number systems, learning fractions in base 12 would be easier, and we’d still have examples of repeating duodecimals to teach to kids.

A likely reason why we use base 10 is that we have 10 fingers, so counting by hand is easy. However, look at your hands: each finger is split into thirds by your knuckles, which means with 4 fingers you have 12 sections. So kids could be taught to count by tapping each section with their thumb, from the top of the index finger to the bottom of the pinky. And notice, this is superior to base 10 counting: you can count up to 24 on your hands in base 12, whereas in base 10 you can only count up to 10 before you have to get creative. That being said, if you were showing someone a number with your hands, it would be the same as base 10. So perhaps the fact that kids would have to learn two ways to count is a point against duodecimal.

One last thing I’d like to mention about duodecimal is that it’s basically the ideal base for humans. You might ask why hexadecimal isn’t ideal. After all, it’s incredibly easy to translate between hexadecimal and binary, and perhaps it would even make interfacing with our computers easier if we all used it. That being said, 6 new digits is a lot to learn. Also, 16’s factors are 1, 2, 4, 8, 16: only 5 factors and only 1 prime factor. Even numbers and powers of 2 are very convenient in hexadecimal, but odd numbers are not. I believe duodecimal is still the better base for daily use.

The main reason is simply that 12 = 3 · 4. Having a factor of 4 means you have two factors of 2, so you have 3 powers of 2 as factors: 1, 2, and 4. You also have 3, meaning you have both of the simplest primes. Another reason factors are so important comes from number theory. In duodecimal, every number that ends in a multiple of 3 (0, 3, 6, 9) is a multiple of 3. Every number that ends in a multiple of 4 (0, 4, 8) is a multiple of 4. Every number that ends in a multiple of 6 (0 or 6) is a multiple of 6. Every multiple of 8 ends in 0, 4, or 8, and every multiple of 9 ends in 0, 3, 6, or 9. Every multiple of both 3 and 4 ends in 0. Having factors of 3 and 4 is ideal, and 12 is the smallest number divisible by both, with a total of 6 factors.
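
Those last-digit rules for the factors of 12 all come from one small fact, sketched here in my own notation:

\text{In base } 12,\ m = 12q + d, \text{ where } d \text{ is the last digit. For any factor } f \text{ of } 12,\ f \mid 12q, \text{ so } f \mid m \iff f \mid d.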


Conclusion

If we lived in an ideal world, we would be using duodecimal with a version of the metric system based on 12s instead of 10s. Unfortunately, the world is already set on base 10. If you think it would be hard for the US to switch to the metric system, imagine trying to switch the entire world to the duodecimal system. It’s almost certainly never going to happen.

With that said, it doesn’t change the fact that duodecimal is objectively more convenient and useful than decimal in many ways. Hopefully, this has helped you understand that and maybe you’ll become one of the few of us who are wishing that maybe one day we can make the switch. If you’re interested in more, check out the duodecimal category for similar posts. Or leave a comment about what you want to hear more about!

If you find spelling errors or logic errors please let me know in the comments. I’ll be sure to come back and edit this post. My goal as a blogger is to be scientific and non-biased. I don’t want to add to the massive amount of misinformation online, so I’m willing to point out when I’m wrong and make changes later on. I usually write off the top of my head so I expect there to be a lot of errors, especially with the informal way I’m writing this.

Introductory Post for CS-343

Welcome! First, I would like to point out that the posts for this course will all be placed in roughly the same categories. However, they will all be tagged with CS-343, which should make them easy to find.

These posts will center around me searching for online materials other people have posted that relate to the core topics of CS-343. I will then share them, interpret them, and so on. It will be a collection of information primarily useful for the following topics (as taken from the course syllabus):

  • Design Principles
    • Object Oriented Programming
    • SOLID
    • DRY
    • YAGNI
    • GRASP
    • “Encapsulate what varies.”
    • “Program to an interface, not an implementation.”
    • “Favor composition over inheritance.”
    • “Strive for loosely coupled designs between objects that interact”
    • Principle of Least Knowledge
    • Inversion of Control
  • Design Patterns
    • Creational
    • Structural
    • Behavioral
    • Concurrency
  • Refactoring
  • Smells
    • Code Smells
    • Design Smells
  • Software Architectures
    • Architectural Patterns
    • Architectural Styles
  • REST API Design
  • Software Frameworks
  • Documentation
  • Modeling
    • Unified Modeling Language
    • C4 Model
  • Anti-Patterns
  • Implementation of Web Systems
    • Front End
    • Back End
    • Data Persistence Layer

Hopefully, these posts provide you with many resources to help you learn these topics for the first time or to help you recall them after a long time has passed! I wish you the best of luck in whatever you’re hoping to achieve!