Archive for the ‘Uncategorized’ Category

Notes: Advanced OBS Stream Config

Friday, June 23rd, 2017

OBS Studio ships with a bunch of audio plugins (Gate, Compressor). On Windows you can use VST Plugins too.

Like most DAWs, the VSTs you use must match the host’s architecture (i.e. 32-bit vs 64-bit).

A good set of plugins for this is the ReaPlugs pack from the developer of REAPER. They are available in both 32-bit and 64-bit.

http://www.reaper.fm/reaplugs/

Configuring decent Audio

I’m using a 3-stage setup.

  • ReaFir (FFT)
  • ReaEQ (EQ)
  • ReaComp (Compressor)

ReaFir can be used to capture the noise profile of the room.

Simply select the SUBTRACT mode, and click the checkbox beside it to toggle capture mode. Also, you may want to up the FFT size for better fidelity (at the cost of more CPU).

You should redo this any time your noise conditions change, e.g. when you turn on a fan or such.

ReaEQ can be used to shape the tone and remove muddiness from the audio.

My current setup is a 5-band EQ.

  • High Pass: 50 Hz, 0 dB gain, 2 oct – Reducing the sound of thumps from tapping mic
  • Band: 80 Hz, 5 dB gain, 2 oct – Giving my voice more of a bassy boom (~100 Hz)
  • Band: 230 Hz, -3 dB gain, 1 oct – (theoretically) removing the mud (~300 Hz)
  • Band: 4000 Hz, 2 dB gain, 1 oct – (theoretically) raising my S, TH, F accents for more clarity
  • Low Pass: 21000 Hz, 0 dB gain, 2 oct – Something in my room is resonating at ~20k Hz, so it’s to hide that

ReaComp is the compressor.

The realtime graphs are extremely useful here (since they *cough* actually have numbers).

  • Drop the Threshold slider to where you want the compressor to kick in. Depending on your goals, this may only be once the audio gets loud. Alternatively, you can watch the level for where any talking (even quiet talking) registers, and adjust accordingly. I’m currently at -44.0 dB.
  • Pre-comp: 5 ms – Seems to stop some of the spikes I was causing.
  • Attack/Release: 3 ms/100 ms (default)
  • Ratio: 4:1 – I tried much higher values (32), but if you can get away with a lower ratio, the sound quality is nicer.
  • Knee: 8 dB – Typically, when the volume hits the threshold it is immediately divided by the Ratio. With a Knee, the ratio is instead interpolated smoothly over the knee range, so the compression eases in rather than kicking in abruptly.
  • Output Wet: +22 dB – My mic is set rather quiet. Yes I could tweak it.

The microphone is on an arm stand now, placed 6+ inches from my face, with the sock-top roughly at the same level as the bottom of my nose.

Audio Volumes

The above configuration puts my mic volume around -12 dB to -6 dB at 100%. Game audio needs to be adjusted accordingly.

Games with Chiptune music should be about 20% volume (-14 dB). i.e. Shovel Knight, Creepy Castle.

Games with more normal music should be between 30% (-10.5 dB) and 40% (-8 dB). Freedom Planet was a touch too loud at 40%, so I’d suggest 35% (-9.1 dB).

Games with pre-balanced Music and Sound FX might need more volume. Monster Hunter internally defaults to 80% Music Volume, and 100% SFX volume. I found playing with an OBS volume of 45% (-6.9 dB) worked fine.
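
For reference, the percent-to-dB figures above are just 20 * log10 of the volume fraction. A quick sanity check (a sketch using awk, which has a natural log but no log10):

    # 35% volume -> about -9.1 dB
    awk 'BEGIN { printf "%.1f dB\n", 20 * log(0.35) / log(10) }'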

Routing Windows Audio

Unless specifically supported, applications route their audio to the current default audio device. The default can be changed to any attached audio device, or, with the help of 3rd-party software, to a virtual device.

This can be done with software like Virtual Audio Cable. The software is shareware.

http://software.muzychenko.net/eng/vac.htm

A free alternative that gives you 1 virtual device is VB-Audio’s VB-CABLE.

http://vb-audio.pagesperso-orange.fr/Cable/index.htm

You can then use Audio Router to route the audio from an application to specific audio interfaces.

https://github.com/audiorouterdev/audio-router

As an example, on my setup my “LG TV” is my main audio output (Optical). Applications can be routed to the default device, or to 1 or more specific devices. For example, to both capture and listen to game audio, I have to make a route to the “LG TV” (not the Default), and another to the virtual device.

Last Minute VPN Notes

Friday, June 2nd, 2017

Just a short one. This is an excellent article on how to get OpenVPN running on Ubuntu 16, and how to use it from a variety of OSes.

https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-ubuntu-16-04

This article is simpler, though it doesn’t explain what’s going on as well. Notably, it does tell you how to get the VPN working on an OpenVZ VPS.

https://www.rosehosting.com/blog/install-and-configure-openvpn-on-ubuntu-16-04/

Though as of this writing I haven’t been able to get this to route traffic correctly.

EDIT: Okay, I figured it out.

It seems the iptables rules aren’t persistent across reboots, and one NAT line in particular is very important.
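
Roughly, it’s the masquerade rule. Assuming the default OpenVPN subnet (10.8.0.0/24) and the OpenVZ venet0 interface from the guide, it looks something like this (some OpenVZ hosts need SNAT with --to-source instead of MASQUERADE):

    iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j MASQUERADE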

You can check the status of the iptables as follows.
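
For instance, list the NAT table (and the filter table) to confirm the rules are actually loaded:

    sudo iptables -t nat -L -n -v
    sudo iptables -L -n -v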

Here is a recommended way to persist iptables:

https://askubuntu.com/a/373526
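
Roughly, per that answer (the installer offers to save your current rules; on newer Ubuntus the save command lives in netfilter-persistent):

    sudo apt-get install iptables-persistent
    sudo netfilter-persistent save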

Unfortunately BuyVM OpenVZ Ubuntu installs are misconfigured, so neither package will install.

EDIT2: looks like it was a DNS failure.

https://askubuntu.com/questions/91543/apt-get-update-fails-to-fetch-files-temporary-failure-resolving-error
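
The usual fix from that thread is to point the box at a working resolver, something like:

    cat /etc/resolv.conf
    echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolv.conf
    sudo apt-get update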

After doing that, I was able to successfully install the iptables-persistent package.

Notes: Extracting Trailer Videos from Steam for Tweeting

Thursday, April 6th, 2017

The highest quality trailer videos on Steam can typically be downloaded straight from Steam’s CDN.
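
Assuming the cdn.akamai.steamstatic.com layout Steam was using at the time, the URL looks something like:

    http://cdn.akamai.steamstatic.com/steam/apps/256677064/movie_max.webm?t=...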

Where 256677064 is the SteamID of the game.

When you right click on a trailer video in Chrome, you can select “Open Video in New Tab”.

The t= part is probably some unique ID, such as your Steam UserID.

Edit the URL accordingly, from movie480.webm to movie_max.webm. Alternatively, full-screen the video, wait a moment for it to switch to high quality, then right click on the video and open the high-quality version in a new tab.

Save the file.

The video file is in webm format, but Twitter requires an mp4.

Twitter makes these recommendations:

https://dev.twitter.com/rest/media/uploading-media#videorecs

It’s worth noting that Twitter requires that videos be under 140 seconds (lol, I see what you did there) and under 512 MB. Fortunately the latter shouldn’t be a problem, but if a trailer is over 2 minutes it could be an issue.

Make sure you have a recent version of FFmpeg installed. If the conversion below fails, that’s probably why.

I stole the snippet from here:

https://twittercommunity.com/t/ffmpeg-mp4-upload-to-twitter-unsupported-error/68602/2

The script is very simple. To use it, give it a file, and after a few minutes it spits out a file with an added .mp4 extension.
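
A sketch of such a script, assuming a reasonably recent FFmpeg (the exact flags in the linked thread may differ slightly):

    #!/bin/bash
    # Convert a trailer (e.g. the .webm from Steam) into a Twitter-friendly
    # H.264/AAC mp4. Usage: ./tweetify.sh movie_max.webm -> movie_max.webm.mp4
    ffmpeg -i "$1" \
      -c:v libx264 -pix_fmt yuv420p -profile:v high -r 30 \
      -c:a aac -ar 44100 -b:a 128k \
      -movflags +faststart \
      "$1.mp4"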

You should now have a file suitable for tweeting.

Doing this has the added benefit of not wasting any of Twitter’s 140 characters per tweet, in addition to videos auto-playing.

Bonus: PSN Store

The PSN store uses regular MP4s that can be tweeted as-is.

Video URLs are long, unsightly CDN strings, so it’s easiest not to type one by hand.

Browse to a game’s page on the PSN store website, open up the developer tools, and in the Network tab filter by Media.

With this open, once you click the play button, the video file that’s referenced will appear under Media. Open it in its own tab and save it.

Xbox One Store (No Video)

At the time of this writing, there are no videos on the Microsoft store.

https://www.microsoft.com/en-us/store/p/candleman/bs95882kbb4f

Xbox Wire uses YouTube.

http://news.xbox.com/2017/02/01/candleman-available-now-xbox-one/

Nintendo Switch (HLS)

Find the game page on Nintendo.com.

http://www.nintendo.com/games/detail/snipperclips-switch

Behind the scenes, unfortunately, it appears Nintendo is using a combination of Flash Player and HLS. If you dig into the Flash variables you can extract the HLS URL.

Through a few levels of HLS responses, you can eventually find video, but my understanding of the protocol is limited. I was able to find a short ~15 second clip without sound, when it should be a full-on few minute HD trailer.

Notes: Creating an rsync jail

Saturday, April 1st, 2017

Configuring this properly required me to learn a few new things.

Where to store files

If you have files that should belong to a single user, place them in the user’s home folder.

/home/username/

If the files are shared across multiple users, place them in a folder under the service folder.

/srv/my-project/

The main purpose of specifying this is so that users may find the location of the data files for a particular service, and so that services which require a single tree for readonly data, writable data and scripts (such as cgi scripts) can be reasonably placed. Data that is only of interest to a specific user should go in that user’s home directory.

http://www.pathname.com/fhs/pub/fhs-2.3.html#SRVDATAFORSERVICESPROVIDEDBYSYSTEM

Depending on the purpose of the server, you need to decide if tasks are per-user or shared.

If you do decide to use the /srv/ folder, consider placing a symlink to the folder each user cares about inside the user’s home folder. This is simply to remind them that the data they care about is elsewhere.

Hardlinks, Symlinks and Mounts

As a Linux user, you probably know symlinks.
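
The basic form looks like this:

    ln -s TARGET name_of_link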

TARGET is something we want to reference, and name_of_link is where we want to put it (if you omit name_of_link, it gets placed in the current folder).

Generally speaking, this is the preferred way to link things on Linux.

Symlinks, however, require that you have access to the file being linked to. In other words, you must have permission to go to the location of the file and use it directly. Later on, when we start talking about jailing, this is something we won’t have.

Hardlinks are created the exact same way as Symlinks, but without the -s.
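
That is:

    ln TARGET name_of_link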

Internally, a hardlink creates a brand new file that references the same data (inode) used by another file.

Using -i with ls shows the inode number. Every file has one. This is how you spot a hardlink. When 2 or more files share the same inode number, it’s not that one is a link to the other, they ARE the same file.
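
For example (the inode number below is illustrative; yours will differ):

    echo "hello" > original
    ln original hardlink
    ls -i original hardlink
    # 1054321 hardlink  1054321 original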

With that in mind, you can’t actually detect hardlinks the way you can detect symlinks. When you delete a file on Linux, the data isn’t necessarily deleted; not until an inode runs out of references to it is the data actually removed.

Hardlinks can only be files. They can’t link folders. For what I’m doing here, I don’t need this feature, but I’ve included it for completeness. To get the equivalent of a hardlink on a folder (i.e. access to the original isn’t required), you’ll need a bind mount.
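
A bind mount looks like this (the paths are placeholders):

    mount --bind /some/where /else/where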

This makes /else/where appear to contain everything /some/where did. Beware of recursion when mounting!

A mount can be made read-only like so:
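
For bind mounts specifically, older kernels ignore a read-only flag on the initial bind, so the reliable approach (per the references below) is to remount:

    mount --bind /some/where /else/where
    mount -o remount,ro,bind /else/where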

The -o option is used to pass alternative options to mount. --bind is actually a shorthand for -o bind.

Reference: http://askubuntu.com/a/801191
Reference: http://unix.stackexchange.com/a/198591

Creating a jailed user

Before we make the jail, we need user(s) to put in the jail.
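
A sketch, assuming Ubuntu’s adduser (username is a placeholder for the jailed user’s name):

    sudo adduser --disabled-password --gecos "" username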

The disabled password is to prevent them from logging in via password authentication. Also, for security’s sake, try to avoid making a jailed user a sudoer.
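
For example:

    sudo passwd -S username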

The above can be used to check the status of a user.

Display account status information. The status information consists of 7 fields. The first field is the user’s login name. The second field indicates if the user account has a locked password (L), has no password (NP), or has a usable password (P). The third field gives the date of the last password change. The next four fields are the minimum age, maximum age, warning period, and inactivity period for the password. These ages are expressed in days.

If we did things correctly, our user should have a locked password (L).

Reference: http://unix.stackexchange.com/a/184975

Setting up and generating RSA SSH keys for jailed users

On the client PC, you’ll need to generate a public+private key pair.
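
Something like (add -b 4096 if you want a larger RSA key):

    ssh-keygen -t rsa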

The default these days is RSA 2048. RSA 4096 is a bit safer, but it’s cryptography, so who knows how long that will be the case. Elliptic-curve keys (specifically Ed25519) are on track to replace RSA, but the situation is a bit fishy right now (ECDSA has a potential weakness, which Ed25519 works around, but it’s new-ish).

Then you’ll need to install the public key.

As root, you’d typically want to do this:
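
A sketch (username is the jailed user; swap nano for your editor of choice):

    su - username
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys
    nano ~/.ssh/authorized_keys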

Then paste the contents of your id_rsa.pub file here.

Save the file, exit the user, and restart the SSH server.

You should now be able to connect to the server over ssh as the jailed user.

Addendum: There’s also a command, ssh-copy-id, that can be used to install the public key for you, but only if you have password access.
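
For reference (your-server is a placeholder):

    ssh-copy-id -i ~/.ssh/id_rsa.pub username@your-server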

Without access, this command is useless (included here just for reference).

Reference: http://askubuntu.com/a/46935

Setting up the Jail

Enabling the jail is actually really simple. The problem is the jail will have nothing in it.

Open up /etc/ssh/sshd_config

For simplicity, you should change the Subsystem line to the following:
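
This swaps the external sftp-server binary for the in-process SFTP server, which works inside a chroot:

    Subsystem sftp internal-sftp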

Then at the end of the file you can add a tiny bit of config to immediately lock the user in the jail.
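
Something like the following; the ChrootDirectory line is the important one (the forwarding restrictions are an extra precaution):

    Match User username
        ChrootDirectory /home/username/my-prison
        AllowTcpForwarding no
        X11Forwarding no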

Where username is the user’s name, and /home/username/my-prison is whatever folder you decide to make into the root / folder. The folder should belong to the root user (even if it’s their home folder).

Save and restart the ssh server, and from now on, any time that user attempts to SSH in, they’ll get locked to that folder.

HOWEVER! The user is lacking some basic tools. Most important: /bin/bash. Without /bin/bash, the connection will close immediately after logging in.

Now you need to build a filesystem.

Reference: https://www.linode.com/docs/tools-reference/tools/limiting-access-with-sftp-jails-on-debian-and-ubuntu
Reference: https://www.marcus-povey.co.uk/2015/04/09/cross-server-ssh-rsync-backups-done-more-securely/

Building the jailed users file system

You should do this as the root user.

NB: When I first started writing this note article, I expected I was going to use hardlinks to reference the currently installed version of all tools and libraries. While this does work, I realized there is an issue: dependency filenames. As long as dependencies don’t change filenames this is a non-issue, but there is a chance they may as the OS updates. The chance of changes might be lower on an LTS version of Ubuntu, but I’m using a derived version of Ubuntu that regularly switches out the Kernel. So instead, one should just cp the files, not hardlink them, and keep an ear out for known exploits of the tools you use.

Installing Bash (required to open an SSH connection and execute commands).
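
Roughly (the jail root is the ChrootDirectory from above, so paths are placeholders):

    cd /home/username/my-prison
    mkdir -p bin
    cp /bin/bash bin/
    ldd /bin/bash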

This will report to us what library dependencies bash needs to be run. The printout may look something like this.
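
Approximate output from an Ubuntu 16.04 x86_64 box (exact libraries and addresses will differ):

    linux-vdso.so.1 =>  (0x00007ffc...)
    libtinfo.so.5 => /lib/x86_64-linux-gnu/libtinfo.so.5 (0x00007f...)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f...)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f...)
    /lib64/ld-linux-x86-64.so.2 (0x00007f...)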

What’s important is to pay attention to the lines with paths. All those /lib/‘s.
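
Still inside the jail root, recreate the library paths and copy each dependency that ldd listed (the exact set depends on your ldd output):

    mkdir -p lib/x86_64-linux-gnu lib64
    cp /lib/x86_64-linux-gnu/libtinfo.so.5 lib/x86_64-linux-gnu/
    cp /lib/x86_64-linux-gnu/libdl.so.2    lib/x86_64-linux-gnu/
    cp /lib/x86_64-linux-gnu/libc.so.6     lib/x86_64-linux-gnu/
    cp /lib64/ld-linux-x86-64.so.2         lib64/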

Using cp here makes this process far simpler. Many of these files are actually symlinks, so using a hardlink would create a dependency on yet another file.

That is everything needed to use Bash.

Installing rsync:
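
Same idea, still from the jail root:

    mkdir -p usr/bin
    cp /usr/bin/rsync usr/bin/
    ldd /usr/bin/rsync    # then copy each listed library into the jail, as above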

The process is fairly similar for other tools.

With the above 2 tools installed, you should be able to rsync to this machine… and that’s it. Other commands like ls or cp won’t be available out-of-the-box, but an rsync-only user really shouldn’t need them anyway.

Notes: CORS, the thing you wish you could ignore

Saturday, November 19th, 2016

It’s 2016, and that means security… even if it’s just sandboxing.

Modern web browsers implement a protocol called CORS, i.e. Cross-Origin Resource Sharing. This is a fancy protocol that gives a web browser hints about whether a cross-origin transaction should be allowed or not. A few years ago, for the sake of security, browsers switched from trusting every request to trusting no request. For the sake of compatibility, some requests are still honoured (HEAD, GET, and POST with specific content-types), but some of the most useful ones are not.

Combined with Fetch, the modern/correct way to fetch data from the internet in current browsers (previously XmlHttpRequest), this can get messy. But hey, it’s for the greater good… I guess.

Fetch, Promises and Lambda Arrow Functions

JavaScript’s new Fetch method is the recommended way to handle what we used to call “XHR” requests (i.e. getting data by URL) for any new code that’s written. It’s supported by all the major current browsers, and can be polyfilled for backwards compatibility.

https://developer.mozilla.org/en-US/docs/Web/API/GlobalFetch/fetch

The old way (“XHR”) was inelegant, and poorly named (XML HTTP Request). Fetch has a much improved syntax.
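
A minimal example (the URL is a placeholder):

    fetch('https://example.com/data.json')
      .then(response => response.json())
      .then(data => console.log(data))
      .catch(err => console.error('Request failed:', err));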

Fetch relies on another modern JavaScript feature: Promises. Promises let you wire up code that can be run asynchronously immediately after (in this case) the Fetch completes, be it a success or failure. As with Fetch, this can be introduced in older browsers with a Polyfill.

https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Promise

Furthermore, Promises benefit from another modern JavaScript feature: Lambda Functions, or Arrow Functions as they’re sometimes called. In essence, this is a new syntax for creating functions in JavaScript. Unlike Fetch and Promises, Arrow Functions cannot be added to JavaScript with a Polyfill. They require a modern JavaScript compiler (or transpiler) to add them in a compatible way.
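
A few equivalent ways of writing the same function:

    // Classic function expression:
    const square1 = function (x) { return x * x; };
    // Arrow function with a block body:
    const square2 = (x) => { return x * x; };
    // Arrow function with an implicit return:
    const square3 = x => x * x;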

Or any combination of the above.

https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Functions/Arrow_functions

And these can be further enhanced with some new features.

Rest parameters (i.e. “the rest of”), which let you write variadic functions.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/rest_parameters
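
For example:

    // A variadic sum: "nums" collects the rest of the arguments into an array.
    const sum = (...nums) => nums.reduce((total, n) => total + n, 0);
    sum(1, 2, 3, 4); // 10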

As well as Destructuring, a new syntax that lets you expand or extract data from arrays and objects.

https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment
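
For example:

    // Array destructuring (with a rest element):
    const [first, ...others] = [1, 2, 3, 4];        // first = 1, others = [2, 3, 4]
    // Object destructuring:
    const { name, id } = { name: 'Jammer', id: 7 }; // extract fields by name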

And at the time of this writing, Rest Destructuring is starting to pop up as a feature (currently unsupported in Buble, without a patch… a patch that exists, and is one click away from being merged in, tee hee).

Legacy Fetch Support

We can do a number of things without worrying about Preflights or Cookies, but we still need a CORS header (Access-Control-Allow-Origin). These also work if the origin (protocol+domain+port) is the same; CORS is the whole mess for when origins differ.

You can also do HTTP POST, but when we start talking HTTP POST, we need to start caring about content-type.

In legacy mode, HTTP POST only supports 3 different content types.

  • text/plain
  • multipart/form-data
  • application/x-www-form-urlencoded
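
A sketch of such a “simple” POST (URL and fields are placeholders). Because form-urlencoded is one of the three legacy content types, no preflight is required, though the server still needs to send Access-Control-Allow-Origin on its response:

    fetch('https://example.com/api/submit', {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: 'name=Jammer&score=9001'
    })
    .then(response => response.text())
    .then(text => console.log(text));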

That doesn’t mean you can’t use other content-types, but it introduces a new “feature” that we’ll get to soon.

Bypassing CORS

There is a mode you can set…
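
With Fetch, that’s the no-cors request mode (a sketch; the URL is a placeholder):

    fetch('https://other-site.example/data.json', { mode: 'no-cors' })
      .then(response => {
        // The response is "opaque": you can't read the body or headers.
        console.log(response.type); // "opaque"
      })
      .catch(err => console.error(err));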

But this is effectively the same as a HEAD request. It will correctly pass (.then) or fail (.catch) depending on the response code, but you can’t look at data.

Not very useful, ‘eh?

https://jakearchibald.com/2015/thats-so-fetch/

Preflights (i.e. the HTTP OPTIONS request)

To make matters worse, if you want to be modern and use an alternative content type (such as application/json), you now need to handle preflight OPTIONS requests.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS

That means JavaScript now does 2 HTTP requests per transaction. The first, an HTTP OPTIONS request, and if that succeeds, your actual requested request (HTTP GET, POST, PUT, etc).

This is the ideal case. If the server handles these, then you can write optimal Fetch code.
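
For example, a JSON POST (URL and payload are placeholders) that will trigger the preflight:

    // The application/json content type triggers an OPTIONS preflight
    // before the real request is sent.
    fetch('https://api.example.com/save', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ score: 9001 })
    })
    .then(response => response.json())
    .then(data => console.log(data));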

Unfortunately, if you use PHP, the content type for the above is application/json, which ends up routed to php://input and not the $_POST variables you may be used to.
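
So on the PHP side you end up doing something like:

    <?php
    // application/json bodies don't populate $_POST; read the raw body instead.
    $data = json_decode(file_get_contents('php://input'), true);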

https://davidwalsh.name/fetch

Server Side CORS

Somehow you need to include CORS headers on your server. You can do this with Apache.
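
For example, with mod_headers enabled (a minimal sketch; it can go in the vhost, server config, or .htaccess):

    Header set Access-Control-Allow-Origin "*"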

Or as part of the code that emits stuff.
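
In PHP, for instance, something like:

    <?php
    // Emit the CORS header before any output.
    header('Access-Control-Allow-Origin: *');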

If you only need basic CORS support (no cookies), you can be simple with your headers.

If you require cookies, you NEED to be specific about the origin.
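
A sketch in PHP (the whitelist is a placeholder; the idea is to echo back the caller’s Origin only if you trust it):

    <?php
    // With credentials (cookies), "*" is rejected by the browser; the
    // Access-Control-Allow-Origin value must name the exact origin.
    $origin  = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';
    $allowed = array('https://example.com');
    if (in_array($origin, $allowed, true)) {
        header('Access-Control-Allow-Origin: ' . $origin);
        header('Access-Control-Allow-Credentials: true');
    }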

If you are not specific about the origin, it will fail.

https://fetch.spec.whatwg.org/#cors-protocol-and-credentials

In fact, this fail case is the reason this post exists. Gawd. I spent way too long trying to diagnose this, with no really good references. I had to dig through the spec to find this line:

If credentials mode is “include”, then Access-Control-Allow-Origin cannot be *.

In hindsight, now that I knew what I was looking for, I did find a PHP example of how to do it correctly.

http://stackoverflow.com/a/9866124/5678759

LOL.

https://www.html5rocks.com/en/tutorials/cors/

Anyways, I think I’ve suffered through CORS enough now. Like always, this post is here so when I have to revisit the topic (uploads), I’ll know where to start (configure server to Allow-Origin: * (i.e. readonly GET requests), but get specific in the PHP upload script so that credentials matter (PUT/POST)). (PS: I could stop hot-linking if Allow-Origin was specific to Jammer sites).

Notes: Customizing Ubuntu

Saturday, October 29th, 2016

Yay more notes.

Changing the File Manager (Nautilus to Nemo)

So, I hate the default file manager in Ubuntu. Unity is fine (meh), but the file manager is dumb. Super dumb.

In this article, a dude did a comparison of file managers available for Linux.

https://artfulrobot.uk/blog/whats-best-file-manager-ubuntu-gnome-1404-trusty

Nautilus is the default, but dude liked Nemo (very much a Sea theme going on here).

His instructions for installing Nemo weren’t too useful (old), but these are totally fine.

http://www.webupd8.org/2013/10/install-nemo-with-unity-patches-and.html

Long story short:
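
From memory of the linked article (double-check the PPA name and the xdg-mime defaults against it):

    sudo add-apt-repository ppa:webupd8team/nemo
    sudo apt-get update
    sudo apt-get install nemo
    # Make Nemo the default handler for folders:
    xdg-mime default nemo.desktop inode/directory application/x-gnome-saved-search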

Keep in mind, this has changed the default. If you search your applications, you should see 1 or more programs named “Files”. Click on it and see if it starts the correct program.

Remember, you still have Nautilus installed, so if you have an icon on the Unity bar for Files, it links to the old program. Start Nemo, pin it, and unpin the old one.

UNFORTUNATELY this has no effect on the File->Open or Save dialogs. Those are rooted in a GTK 2+ vs 3+ issue, which is unclear. Bah.