Mirror of my unlimited GitHub Actions "VPS". NOTE: I am using an alternate GitHub account. The username is randomly generated. https://github.com/Frail7487Real/laughing-spork

Global

NOTE: this repo is meant for my use only. If you somehow stumbled upon it, be sure to change some values in runglobal.yml.

This repository turns GitHub Actions into a (12 * 2)/(3.5 * 2) free s36v36, no CC required. It originally grew out of Docker-VNC, but I got too addicted and turned it into a separate project with a lot more improvements.

How It Works

(totally not ChatGPT generated (with some modifications), original prompt)

During the first runtime, a data directory named globalData is created in /mnt/, which is Azure's temporary storage partition. This partition holds the swap file by default, but also offers plenty of space for large files.

You can put anything you don't need to keep across runtimes directly inside globalData. There's also a folder called toBackup inside globalData; files in this folder persist across runtimes. It also contains these two scripts:

  • postinstall.sh runs after the runtime finishes setting things up.
  • postruntime.sh executes 30 minutes before the runtime stops (which has a maximum limit of 6 hours).

You can configure tasks that you want to start or stop at each runtime.
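The setup above can be sketched like this. This is a hypothetical sketch, not the repo's actual code; the root directory is a parameter (defaulting to /mnt) purely so the sketch is easy to try out:

```shell
#!/bin/sh
# Hypothetical first-run setup. $1 is the storage root, defaulting to /mnt
# (Azure's temporary storage partition).
setup_global_data() {
  root="${1:-/mnt}"
  backup_dir="$root/globalData/toBackup"
  mkdir -p "$backup_dir"

  # Create empty hook scripts only if they don't exist yet,
  # so your edits to them persist across runtimes.
  for hook in postinstall.sh postruntime.sh; do
    [ -f "$backup_dir/$hook" ] || printf '#!/bin/sh\n' > "$backup_dir/$hook"
    chmod +x "$backup_dir/$hook"
  done
}
```

After setup the runtime would invoke `postinstall.sh`, and schedule `postruntime.sh` for roughly 30 minutes before shutdown.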

For the toBackup data bridge, the folder is first archived into the file archive.tar.gz. The old runtime serves it with serve on port 5000; the new runtime then downloads the archive in parallel with aria2 and finally extracts it.
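The bridge could look roughly like this — a sketch of the three steps described above, with the download URL as a placeholder:

```shell
#!/bin/sh
# Hypothetical sketch of the toBackup bridge between two runtimes.

pack_backup() {
  # Old runtime: archive the persistent folder.
  # $1 = globalData directory, $2 = output archive path.
  tar -czf "$2" -C "$1" toBackup
}

serve_backup() {
  # Old runtime: expose the archive's directory over HTTP on port 5000.
  npx serve -l 5000 "$1" &
}

fetch_backup() {
  # New runtime: download the archive in parallel with aria2,
  # then extract it into the new globalData directory ($2).
  aria2c -x 16 -s 16 -o archive.tar.gz "$1"
  tar -xzf archive.tar.gz -C "$2"
}
```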

On top of all this, there is Discord webhook integration so you get notified about new runtimes, and Tailscale is set up so you can actually access the runtime.
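A minimal notifier along those lines, assuming the WEBHOOK_URL secret is exported into the environment:

```shell
#!/bin/sh
# Minimal Discord webhook notifier (a sketch, not the repo's exact code).

build_payload() {
  # Discord webhooks accept a JSON body with a "content" field.
  printf '{"content": "%s"}' "$1"
}

notify() {
  curl -sS -H 'Content-Type: application/json' \
       -d "$(build_payload "$1")" "$WEBHOOK_URL"
}
```

Usage would be something like `notify "New runtime is up"`; the Tailscale side is just `tailscale up --authkey "$TAILSCALE_KEY"`.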

Why not just use actions' on.schedule.cron?

Scheduled triggers are not punctual (runs can be delayed, or even dropped, during periods of high load), which may leave a downtime gap between runtimes.
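One way to avoid that gap, sketched here as an assumption about the general approach rather than this repo's exact code, is to have each runtime dispatch the next one directly through the GitHub REST API's workflow_dispatch endpoint:

```shell
#!/bin/sh
# Hypothetical chaining of runtimes via the GitHub REST API.
# $1 = "owner/repo", GITHUB_TOKEN must hold a token with workflow scope.

dispatch_url() {
  printf 'https://api.github.com/repos/%s/actions/workflows/%s/dispatches' "$1" "$2"
}

trigger_next_runtime() {
  curl -sS -X POST \
    -H "Authorization: Bearer $GITHUB_TOKEN" \
    -H "Accept: application/vnd.github+json" \
    "$(dispatch_url "$1" runglobal.yml)" \
    -d '{"ref": "main"}'
}
```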

Secrets that NEED to be set

  • WEBHOOK_URL - Discord webhook URL
  • TAILSCALE_KEY - Tailscale auth key
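If you fork this, both secrets can be set with the GitHub CLI, for example (both values below are placeholders, not real credentials):

```shell
# Requires an authenticated `gh` session in the repo's directory.
gh secret set WEBHOOK_URL --body "https://discord.com/api/webhooks/<id>/<token>"
gh secret set TAILSCALE_KEY --body "tskey-auth-<key>"
```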