Archive for the ‘Uncategorized’ Category

Notes: Extracting Trailer Videos from Steam for Tweeting

Thursday, April 6th, 2017

The highest-quality trailer videos on Steam are typically found here:

Where 256677064 is the SteamID of the game.

When you right-click on a trailer video in Chrome, you can select “Open Video in New Tab”.

The t= part is probably some unique ID, such as your Steam UserID.

Edit the URL accordingly, changing movie480.webm to movie_max.webm. Alternatively, full-screen the video, wait a moment for it to switch to high quality, then right-click the video and open the high-quality video in a new tab.
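As a sketch, the rewrite is a simple substitution (the URL here is a made-up placeholder; the real one comes from the “Open Video in New Tab” step):

```shell
# Hypothetical trailer URL, for illustration only
url="http://example-cdn.invalid/steam/apps/256677064/movie480.webm?t=12345"

# Swap the 480p filename for the max-quality one
max_url="$(printf '%s' "$url" | sed 's/movie480\.webm/movie_max.webm/')"
echo "$max_url"
```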

Save the file.

The video file is in webm format, but Twitter requires an mp4.

Twitter makes these recommendations:

It’s worth noting that Twitter requires that videos be under 140 seconds (lol, I see what you did there) and under 512 MB. Fortunately the latter shouldn’t be a problem, but if a trailer is over 2 minutes it could be an issue.

Make sure you have a recent version of FFmpeg installed. If this fails, that’s probably why.

I stole the snippet from here:

The script is very simple. To use it, give it a file, and after a few minutes it spits out a file with an added .mp4 extension.
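A minimal sketch of such a script, assuming H.264 video and AAC audio (what Twitter accepts); the encoder settings are my assumptions, not the original snippet:

```shell
#!/bin/sh
# Hedged sketch: convert a .webm trailer to an .mp4 suitable for Twitter.
# Settings (libx264 + aac) are assumptions, not the linked snippet.
in="$1"
out="${in}.mp4"   # e.g. trailer.webm -> trailer.webm.mp4

if [ -f "$in" ] && command -v ffmpeg >/dev/null 2>&1; then
    ffmpeg -i "$in" -c:v libx264 -pix_fmt yuv420p -c:a aac "$out"
fi
```

Run it as `./convert.sh trailer.webm` and it writes trailer.webm.mp4 next to the input.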

You should now have a file suitable for tweeting.

Doing this has the added benefit of not wasting any of Twitter’s 140 characters per tweet, in addition to videos auto-playing.

Bonus: PSN Store

The PSN store uses regular MP4s that can be tweeted as-is.

Video URLs look like this:

Which as you can see is quite unsightly.

If you browse to a page on the PSN store website and open up the developer tools, the Network tab lets you filter by Media.

With this open, once you click the play button, the video file that’s referenced will appear under Media. Open in its own tab and save it.

Xbox One Store (No Video)

At the time of this writing, there are no videos on the Microsoft store.

Xbox Wire uses YouTube.

Nintendo Switch (HLS)

Find the game page on

Behind the scenes, unfortunately, it appears Nintendo is using a combination of Flash Player and HLS. If you dig into the Flash variables you can extract the HLS URL.

Through a few levels of HLS responses you can eventually find video, but my understanding of the protocol is limited. I was only able to find a short ~15-second clip without sound, when it should be a full few-minute HD trailer.

Notes: Creating an rsync jail

Saturday, April 1st, 2017

Configuring this properly required me to learn a few new things.

Where to store files

If you have files that should belong to a single user, place them in the user’s home folder.


If the files are shared across multiple users, place them in a folder under the service folder.


The main purpose of specifying this is so that users may find the location of the data files for a particular service, and so that services which require a single tree for read-only data, writable data, and scripts (such as CGI scripts) can be reasonably placed. Data that is only of interest to a specific user should go in that user’s home directory.

Depending on the purpose of the server, you need to decide if tasks are per-user or shared.

If you do decide to use the /srv/ folder, consider placing a symlink to the folder each user cares about inside the user’s home folder. This is simply to remind them that the data they care about is elsewhere.
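A sketch of that layout (all paths are examples; mktemp stands in for / so this can run unprivileged):

```shell
# Illustrative layout: shared rsync data under /srv, with a reminder
# symlink in the user's home pointing at the real location.
root="$(mktemp -d)"                  # stand-in for /
mkdir -p "$root/srv/rsync/data"      # shared, service-owned data
mkdir -p "$root/home/user"

# The symlink reminds the user their data lives elsewhere
ln -s "$root/srv/rsync/data" "$root/home/user/data"
```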

Hardlinks, Symlinks and Mounts

As a Linux user, you probably know symlinks.

TARGET is something we want to reference, and name_of_link is where we want to put it (if you omit name_of_link, it gets placed in the current folder).

Generally speaking, this is the preferred way to link things on Linux.
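The `ln -s TARGET name_of_link` shape looks like this in practice (throwaway paths for illustration):

```shell
# Create a file, then a symlink pointing at it
dir="$(mktemp -d)"
echo "hello" > "$dir/target.txt"
ln -s "$dir/target.txt" "$dir/link.txt"

cat "$dir/link.txt"    # reads through the link to target.txt
```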

Symlinks, however, do require that you have access to the file linked to. In other words, you must have permission to manually go to the location of the file and use it. Later on, when we start talking about jailing, this is access we won’t have.

Hardlinks are created the exact same way as Symlinks, but without the -s.

Internally, a hardlink creates a brand-new directory entry that references the same data (inode) used by another file.

Using -i with ls shows the inode number. Every file has one. This is how you spot a hardlink. When 2 or more files share the same inode number, it’s not that one is a link to the other, they ARE the same file.

With that in mind, you can’t actually detect hardlinks like you can detect symlinks. When you delete a file on Linux, it doesn’t necessarily delete the data. Only when an inode runs out of references to it is the data deleted.

Important: Hardlinks can only be files. They can’t link folders. For what I’m doing here, I don’t need this feature, but I’ve included it for completeness.

To get the equivalent of a hardlink on a folder (i.e. access to the original isn’t required), you’ll need a bind mount.

This makes /else/where appear to contain everything /some/where did. Beware of recursion when mounting!

A mount can be made read-only like so:

The -o option is used to pass alternative options to mount. --bind is actually a shorthand for -o bind.
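Putting the two together, a sketch (paths mirror the examples above; this needs root, and on older kernels a read-only bind takes two steps, bind first, then remount read-only):

```shell
# Hedged sketch: bind /some/where onto /else/where, then make it read-only.
src="/some/where"
dst="/else/where"

if [ "$(id -u)" -eq 0 ] && [ -d "$src" ] && [ -d "$dst" ]; then
    mount --bind "$src" "$dst"          # /else/where now mirrors /some/where
    mount -o remount,bind,ro "$dst"     # lock it down to read-only
fi
```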


Creating a jailed user

Before we make the jail, we need user(s) to put in the jail.

The disabled password is to disallow them from logging in via password authentication. Also for security’s sake, try to avoid making a jailed user a sudoer.

The above can be used to check the status of a user.

Display account status information. The status information consists of 7 fields. The first field is the user’s login name. The second field indicates if the user account has a locked password (L), has no password (NP), or has a usable password (P). The third field gives the date of the last password change. The next four fields are the minimum age, maximum age, warning period, and inactivity period for the password. These ages are expressed in days.

If we did things correctly, our user should have a locked password (L).
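For reference, the user-creation and status-check steps might look like this ("jailuser" is an example name, not from the post; both commands need root):

```shell
# Hedged sketch: create a user with no usable password, then check status.
user="jailuser"

if [ "$(id -u)" -eq 0 ] && command -v adduser >/dev/null 2>&1; then
    adduser --disabled-password --gecos "" "$user"
    passwd -S "$user"   # second field should read L (locked)
fi
```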


Setting up and generating RSA SSH keys for jailed users

On the client PC, you’ll need to generate a public+private key pair.

The default these days is RSA 2048. RSA 4096 is a bit safer, but it’s encryption, so who knows how long that will be the case. Elliptic-curve keys (specifically Ed25519) are on track to replace RSA, but the situation is a bit fishy right now (classic ECDSA has a potential weakness, which Ed25519 works around, but it’s new-ish).
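Key generation might look like this (the directory and comment are placeholders; a scratch dir stands in for ~/.ssh so the example is harmless to run, and `-N ""` means an empty passphrase, so use a real one in practice):

```shell
# Demo dir; on a real client this would be ~/.ssh
keydir="$(mktemp -d)"

if command -v ssh-keygen >/dev/null 2>&1; then
    # 4096-bit RSA key pair; -q keeps it quiet
    ssh-keygen -t rsa -b 4096 -N "" -C "you@client" -f "$keydir/id_rsa" -q
fi
```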

Then you’ll need to install the public key.

As root, you’d typically want to do this:

Then paste the contents of your file here.

Save the file, exit the user, and restart the SSH server.
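The install steps above can be sketched as follows (a scratch dir stands in for the jailed user's home; the key line is an obvious placeholder for the real contents of id_rsa.pub, and the permissions matter, since sshd refuses keys that are too open):

```shell
# Stand-in for /home/username; as root you'd do this in the real home.
home="$(mktemp -d)"

mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"

# Paste the contents of your id_rsa.pub here (placeholder shown)
echo "ssh-rsa AAAA...your-public-key... you@client" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
```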

You should now be able to connect to the server over ssh as the jailed user.

Addendum: There’s also a command ssh-copy-id that can be used to install the public key for you, but only if you have password access.

Without access, this command is useless (included here just for reference).


Setting up the Jail

Enabling the jail is actually really simple. The problem is the jail will have nothing in it.

Open up /etc/ssh/sshd_config

For simplicity, you should change the Subsystem line to the following:

Then at the end of the file you can add a tiny bit of configuration to immediately lock the user in the jail.

Where username is the user’s name, and /home/username/my-prison is any folder you decide to make into the root / folder. The folder should belong to the root user (even if it’s their home folder).
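For reference, the two pieces together might look like this in /etc/ssh/sshd_config (username and the prison path are placeholders, and the two forwarding lines are optional hardening I’ve added, not from the post):

```
Subsystem sftp internal-sftp

Match User username
    ChrootDirectory /home/username/my-prison
    AllowTcpForwarding no
    X11Forwarding no
```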

Save and restart the ssh server, and from now on, any time that user attempts to SSH in, they’ll get locked to that folder.

HOWEVER! The user is lacking some basic tools. Most importantly: /bin/bash. Without /bin/bash, the connection will close immediately after logging in.

Now you need to build a filesystem.


Building the jailed users file system

You should do this as the root user.

NB: When I first started writing this note, I expected I was going to use hardlinks to reference the currently installed version of all tools and libraries. While this does work, I realized there is an issue: dependency filenames. As long as dependencies don’t change filenames this is a non-issue, but there is a chance they may as the OS updates. The chance of changes might be lower on an LTS version of Ubuntu, but I’m using a derived version of Ubuntu that regularly switches out the Kernel. So instead, one should just cp the files, not hardlink them, and keep an ear out for known exploits of the tools you use.

Installing Bash (required to open an SSH connection and execute commands).

This will report to us what library dependencies bash needs to be run. The printout may look something like this.

What’s important is to pay attention to the lines with paths. All those /lib/’s.

Using cp here makes this process far simpler. Many of these files are actually symlinks, so using a hardlink would create a dependency on yet another file.

That is everything needed to use Bash.
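The ldd-and-copy steps can be sketched as a loop (a scratch dir stands in for the jail here; for real use, run as root with jail pointed at the chroot folder, e.g. /home/username/my-prison):

```shell
# Hedged sketch: copy bash plus every library ldd reports into the jail,
# preserving paths (cp --parents keeps the /lib/... layout).
jail="$(mktemp -d)"

mkdir -p "$jail/bin"
[ -f /bin/bash ] && cp /bin/bash "$jail/bin/"

if command -v ldd >/dev/null 2>&1; then
    # Pull out every /path that ldd prints
    for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
        cp --parents "$lib" "$jail"
    done
fi
```

Note that plain cp follows symlinks and copies the real file, which is exactly what we want here.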

Installing rsync:

The process is fairly similar for other tools.

With the above 2 tools installed, you should be able to rsync to this machine… and that’s it. Other commands like ls or cp won’t be available out-of-the-box, but an rsync-only user really shouldn’t need them anyway.

Notes: CORS, the thing you wish you could ignore

Saturday, November 19th, 2016

It’s 2016, and that means security… even if it’s just sandboxing.

Modern web browsers implement a protocol called CORS, i.e. Cross-Origin Resource Sharing. This is a fancy protocol that gives a web browser hints about whether a transaction should be allowed or not. It was a few years ago that for the sake of security, browsers switched from trusting every request to trusting no request. For the sake of compatibility, some requests are still honoured (HEAD, GET, POST with specific content-types), but some of the most useful ones are not.

Combined with Fetch, the modern/correct way to fetch data from the internet in current browsers (previously XmlHttpRequest), this can get messy. But hey, it’s for the greater good… I guess.

Fetch, Promises and Lambda Arrow Functions

JavaScript’s new Fetch method is the recommended way to handle what we used to call “XHR” requests (i.e. getting data by URL) for any new code that’s written. It’s supported by all the major current browsers, and can be polyfilled for backwards compatibility.

The old way (“XHR”) was inelegant, and poorly named (XML HTTP Request). Fetch has a much improved syntax.
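A minimal sketch of that syntax (the URL and the error handling are placeholders, not from the post):

```javascript
// Fetch a URL and parse the body as JSON, rejecting on HTTP errors.
function getJSON(url) {
  return fetch(url).then(response => {
    if (!response.ok) throw new Error("HTTP " + response.status);
    return response.json();
  });
}

// Usage: getJSON("https://example.com/data.json").then(data => { /* ... */ });
```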

Fetch relies on another modern JavaScript feature: Promises. Promises let you wire up code that can be run asynchronously immediately after (in this case) the Fetch completes, be it a success or failure. As with Fetch, this can be introduced in older browsers with a Polyfill.

Furthermore, Promises benefit from another modern JavaScript feature: Lambda Functions or Arrow Functions as they’re sometimes called. In essence, this is a new syntax for creating functions in JavaScript. Unlike Fetch and Promises, Lambda Functions cannot be added to JavaScript with a Polyfill. They require a modern JavaScript compiler (or transpiler) to add them in a compatible way.

Or any combination of the above.

And these can be further enhanced with some new features.

Rest parameters (i.e. “the rest of”), which let you write variadic functions.

As well as Destructuring, a new syntax that lets you expand or extract data from arrays.
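Both features can be sketched in a few lines (names and values here are made up for illustration):

```javascript
// Rest parameters: collect "the rest of" the arguments into an array
const sum = (...values) => values.reduce((total, v) => total + v, 0);

// Destructuring: extract data from arrays and objects by position/name
const [first, ...rest] = [10, 20, 30];
const { name, year } = { name: "Ludum Dare", year: 2016 };
```

So sum(1, 2, 3) returns 6, first is 10, and rest is [20, 30].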

And at the time of this writing, Rest Destructuring is starting to pop up as a feature (currently unsupported in Buble, without a patch… a patch that exists, and is one click away from being merged in, tee hee).

Legacy Fetch Support

We can do a number of things without worrying about Preflights or Cookies, but we still need a CORS header (Access-Control-Allow-Origin). These also work if the origin (protocol+domain+port) is the same; CORS is the whole mess when origins differ.

You can also do HTTP POST, but when we start talking HTTP POST, we need to start caring about content-type.

In legacy mode, HTTP POST only supports 3 different content types.

  • text/plain
  • multipart/form-data
  • application/x-www-form-urlencoded

That doesn’t mean you can’t use other content-types, but it introduces a new “feature” that we’ll get to soon.
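A POST that sticks to one of the three legacy content types (and so avoids the “feature” in question) might look like this; the URL and field names are placeholders:

```javascript
// application/x-www-form-urlencoded is one of the three legacy content
// types, so this POST won't trigger a preflight.
function simplePost(url, params) {
  return fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams(params).toString()
  });
}

// Usage: simplePost("https://example.com/api", { a: "1", b: "2" })
```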

Bypassing CORS

There is a mode you can set…

But this is effectively the same as a HEAD request. It will correctly pass (.then) or fail (.catch) depending on the response code, but you can’t look at data.

Not very useful, ‘eh?
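For reference, the mode in question looks like this (sketch only; the URL is a placeholder):

```javascript
// mode: "no-cors" yields an "opaque" response: you can tell success from
// failure, but you cannot read the body. Effectively a HEAD request.
function opaqueFetch(url) {
  return fetch(url, { mode: "no-cors" });
}
```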

Preflights (i.e. the HTTP OPTIONS request)

To make matters worse, if you want to be modern and use an alternative content type (such as application/json), you now need to handle OPTIONS requests (preflights).

That means JavaScript now does 2 HTTP requests per transaction. The first, an HTTP OPTIONS request, and if that succeeds, your actual requested request (HTTP GET, POST, PUT, etc).

This is the ideal case. If the server handles these, then you can write optimal Fetch code.

Unfortunately, if you use PHP, a body with the content type application/json is routed to php://input and not the $_POST variables you may be used to.
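Reading such a body in PHP might look like this (the decoding step is my assumption about typical use, not from the post):

```php
<?php
// application/json bodies skip $_POST; read the raw request body instead.
$raw  = file_get_contents("php://input");
$data = json_decode($raw, true);   // associative array, or null on bad JSON
```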

Server Side CORS

Somehow you need to include CORS headers on your server. You can do this with Apache.

Or as part of the code that emits stuff.

If you only need basic CORS support (no cookies), you can be simple with your headers.

If you require cookies, you NEED to be specific about the origin.

If you are not specific about the origin, it will fail.
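As a sketch, in Apache (with mod_headers enabled, e.g. in a .htaccess or vhost; the origin value is a placeholder):

```
# Basic, cookie-less CORS: any origin may read responses
Header set Access-Control-Allow-Origin "*"

# With cookies/credentials, the origin must be explicit:
# Header set Access-Control-Allow-Origin "https://app.example.com"
# Header set Access-Control-Allow-Credentials "true"
```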

In fact, this failure case is the reason this post exists. Gawd. I spent way too long trying to diagnose this, with no really good references. I had to dig through the spec to find this line:

If credentials mode is “include”, then Access-Control-Allow-Origin cannot be *.

In hindsight, now that I knew what I was looking for, I did find a PHP example of how to do it correctly.


Anyways, I think I’ve suffered through CORS enough now. Like always, this post is here so when I have to revisit the topic (uploads), I’ll know where to start (configure server to Allow-Origin: * (i.e. readonly GET requests), but get specific in the PHP upload script so that credentials matter (PUT/POST)). (PS: I could stop hot-linking if Allow-Origin was specific to Jammer sites).

Notes: Customizing Ubuntu

Saturday, October 29th, 2016

Yay more notes.

Changing the File Manager (Nautilus to Nemo)

So, I hate the default file manager in Ubuntu. Unity is fine (meh), but the file manager is dumb. Super dumb.

In this article, a dude did a comparison of file managers available for Linux.

Nautilus is the default, but dude liked Nemo (very much a Sea theme going on here).

His instructions for installing Nemo weren’t too useful (old), but these are totally fine.

Long story short:
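A hedged version of the usual recipe (the package name and the xdg-mime step are the commonly cited approach, not verbatim from the linked instructions; installing needs root):

```shell
# Install Nemo (needs root)
if [ "$(id -u)" -eq 0 ] && command -v apt-get >/dev/null 2>&1; then
    apt-get install -y nemo
fi

# Make Nemo the default handler for folders
if command -v xdg-mime >/dev/null 2>&1; then
    xdg-mime default nemo.desktop inode/directory
fi
```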

Keep in mind, this has changed the default. If you search applications, you should see one or more programs named “Files”. Click one and see if it starts the correct program.

Remember, you still have Nautilus installed, so if you have an icon on the Unity bar for Files, it links to the old program. Start Nemo, pin it, and unpin the old one.

UNFORTUNATELY this has no effect on the File->Open or Save dialogs. Those are rooted in a GTK 2+ vs 3+ issue, which is unclear. Bah.

A summary of Mike 2016

Tuesday, October 25th, 2016

Hello! If you’re here, you probably saw me mention that I’ll be looking for work in 2017.

I haven’t had to look for a job since 1999, so I don’t have a resume/portfolio handy. If you follow my work, you’ll know that I’m very busy right now working on Ludum Dare, and I will be for the rest of the year. Formal stuff is going to have to wait until then.

But thanks to Ludum Dare, I know that a number of people do follow my work. So I wrote this to confirm that yes, I am looking for work, and give folks an opportunity to reach-out early. My apologies if I don’t get back right away.

So here’s a brief post about myself. If you’re feeling nosy, you can browse my public GitHub repo, or even this blog, but this blog is an anomaly (some parts a decade out of date).

These days my blog is more of a notebook: collecting thoughts and details on topics I’ve researched, so that I can more easily repeat them or pick up where I left off.

About Me

I run Ludum Dare. I didn’t start it, but I am its caretaker. I have been a part of it since the beginning (2002).

Besides that, starting in 1999 I worked for several game companies over the years (Sandbox Studios, Digital Illusions Canada, and Big Blue Bubble). I’ve done contracting as well. I’ve shipped more than a dozen commercial games (mostly licensed games, including a few more Barbie games than I’d care to admit), written lots of low-level C and C++ code for Nintendo and Sony consoles, and written a few games entirely in Assembly. I’ve also written lots of OpenGL, ES, and SDL code, shaders, and ported code to dozens of popular and exotic mobile and embedded platforms (most don’t exist anymore). I can do 3D math, build engines, assemble a toolchain, and wrangle my way through physics. I’m formerly the Technical Director of a large Canadian game studio (Big Blue Bubble), and I ran a “financially underwhelming” indie studio for many years. That was until this side project of mine (Ludum Dare) became my focus.

I’m based in London Ontario Canada (yes, there’s a London in Canada).

I enjoy doing low-level, performance, and optimization work. I’m at-home on Linux, but spent many years on Windows using Cygwin. I’m not a fan of “black boxes”. I like to know how everything works, and know exactly what to expect. Thanks to Ludum Dare, I also know a lot about PHP, JavaScript, MySQL, Linux Servers, and all those trendy web technologies and standards that are all-the-rage. I like Vulkan and VR, but I haven’t done anything real with them.

In my spare time (ha) I toy with a bunch of other projects. I like to dabble with Arduinos, electronics, retro computers, exotic SBCs (Single Board Computers), and IoT devices. I get nerdy about getting the most out of the least expensive devices (the old cost vs power ratio), and I think eSports is cool (I used to be really into StarCraft 2 and Smash).

I’m a pretty good cook too. 😀

I’m looking for something interesting. Not necessarily gamedev, but who knows. Something compatible with me having my own projects and commitments, like Ludum Dare. I can’t relocate to the US, but I can visit (I have no diploma. I dropped out of college to take my first industry gig). I really don’t know what options I have, but I’ve done soooo much low level and backend work, that despite industry trends, taking a Unity gig just seems like a waste (I’ve barely touched it, and I have “opinions” of C# 😉 ).

To summarize, I’m looking for: “interesting”, “flexible”, and “not Unity”.

Still here? Want to tease me for being so picky? You can get in touch with me here: [email protected]

Notes: USB/IP

Sunday, September 25th, 2016

USB/IP is a Linux tool for sharing USB ports with other computers on your network.

It’s been available as part of the Kernel since 3.2, but thanks to the older package still being in the Ubuntu repository, it causes confusion. The following is the proper way to use it.