Mirror of my unlimited GitHub Actions "VPS". NOTE: I am using an alternate GitHub account. The username is randomly generated. https://github.com/Frail7487Real/laughing-spork

Global

This repository turns GitHub Actions into a 24/7 free server, no credit card required. It was originally based on Docker-VNC, but I got too addicted to it and turned it into a separate project with a lot more improvements.

How It Works

(totally not ChatGPT-generated, with some modifications; original prompt)

During the first runtime, a data directory named globalData is created in /mnt/, Azure's temporary storage partition. This partition holds the swap file by default, but it also offers plenty of space for large files. You can store anything you don't need to keep across runtimes inside that folder.
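A minimal sketch of that first-run setup (the path comes from the description above; the ownership handling is an assumption, since the actual commands live in loop.sh):

```sh
# /mnt is Azure's ephemeral disk: anything here vanishes when the
# runtime ends, so it is only good for scratch space and large files.
sudo mkdir -p /mnt/globalData
sudo chown "$USER:$USER" /mnt/globalData
```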

There's also another folder called toBackup inside globalData; files in this folder persist across runtimes. You can also create these two scripts inside that directory, and they will run at the following points:

  • postinstall.sh - runs after the runtime finishes setting things up.
  • postruntime.sh - executes 30 minutes before the runtime stops (a runtime has a maximum length of 6 hours).

You can use them to configure tasks that you want to start or stop at each runtime; a sketch of both hooks follows.
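As an illustration, here is a hypothetical pair of hooks (the file names come from the text above; everything inside them, including the use of muse-compose.yml, is made up):

```sh
#!/bin/bash
# postinstall.sh - runs after the runtime finishes setting up.
# Hypothetical example: bring the services back up.
docker compose -f muse-compose.yml up -d
```

```sh
#!/bin/bash
# postruntime.sh - runs 30 minutes before the runtime stops.
# Hypothetical example: shut the services down cleanly so their
# state lands in toBackup before the archive is made.
docker compose -f muse-compose.yml down
```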

For the toBackup data bridge, the script first archives the folder into archive.tar.gz (inside globalData). The old runtime then serves that file with serve on port 5000, and the new runtime downloads the archive in parallel with aria2 before extracting it.
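A minimal sketch of that handoff, assuming the old runtime is reachable over Tailscale under a hypothetical hostname old-runtime (the file names and port come from the description above; the aria2 flags are assumptions):

```sh
# Old runtime: archive the persistent folder and serve it on port 5000.
cd /mnt/globalData
tar -czf archive.tar.gz toBackup
npx serve -l 5000 .

# New runtime: download the archive in parallel, then unpack it.
aria2c -x 16 -s 16 -d /mnt/globalData -o archive.tar.gz \
  "http://old-runtime:5000/archive.tar.gz"
tar -xzf /mnt/globalData/archive.tar.gz -C /mnt/globalData
```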

On top of all this, there is Discord webhook integration so you get notified about new runtimes, and Tailscale is set up so you can actually access the runtime.
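Both pieces are one-liners at their core; this is a generic version using the two secrets listed at the bottom, not necessarily the exact calls the workflow makes:

```sh
# Join the tailnet so the runtime is reachable from your devices.
sudo tailscale up --authkey "$TAILSCALE_KEY"

# Notify the Discord channel that a new runtime is up.
curl -H "Content-Type: application/json" \
     -d '{"content": "New runtime started."}' \
     "$WEBHOOK_URL"
```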

Why not just use actions' on.schedule.cron?

Scheduled triggers are not accurate; runs often start late, which could leave a downtime gap between runtimes.
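This section doesn't spell out how the next run is actually started, but one pattern that avoids cron entirely (sketched here as an assumption, using GitHub's workflow_dispatch endpoint) is to have the ending runtime trigger its successor directly:

```sh
# OWNER/REPO and runtime.yml are placeholders; $GH_TOKEN is assumed
# to be a token with permission to dispatch workflows.
curl -X POST \
  -H "Authorization: Bearer $GH_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/OWNER/REPO/actions/workflows/runtime.yml/dispatches" \
  -d '{"ref": "main"}'
```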

Secrets that NEED to be set

  • WEBHOOK_URL - Discord webhook URL
  • TAILSCALE_KEY - Tailscale auth key
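If you use the GitHub CLI, both can be set from a terminal (the values here are placeholders):

```sh
gh secret set WEBHOOK_URL --body "https://discord.com/api/webhooks/..."
gh secret set TAILSCALE_KEY --body "tskey-auth-..."
```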