Catch-22 Oct 27, 2022

After a long time, I found myself in a Catch-22 situation today. I was trying to close two cabinets together quietly. One had latched onto the other, and as they unexpectedly swung closed, I had to pick which one to stop with my free hand. Of course, when I picked the latter, the former banged hard on closing.

Sigh.

Remind me to get door softener thingies off Amazon.

Running Compass on Vultr

Intro

Recently, I came across a tweet by Aaron Parecki, where he talked about a lifelogging app he built (and recently released) which constantly tracks your location.

I’ve been using Moves on-and-off over the years and I’ve never been satisfied with it, partly because it’s now owned by Facebook, and partly because it’s a very crashy app (the first run works fine, but after that it doesn’t open and soon stops tracking properly; I assume the developer is now busy building darker features for Facebook’s apps and doesn’t spend as much time on his own creation).

So, I downloaded Aaron’s Overland GPS Tracker app (free!) and set it up. The app is rather bare and the functionality is not well explained within it. But it’s free, open source, a one-man job, and in line with the indie dev vision, so it’s on us to figure things out. I asked a few questions and got pointed to the settings explainer here. Well worth a read if you download the app.

The next step was to install a remote server which ingests the data and makes it human-readable and useful. As Aaron explains, the quest is to answer the question – “where was I at blah date at blah time?” The app’s official homepage recommends one of two servers to send the data to – a service called Icecondor and a server Aaron wrote called Compass. Compass looks nicer than Icecondor, is self-hosted, and I’ve been itching to play with Vultr.com‘s SSD Cloud, which competes with DigitalOcean on pricing and resources. So, here’s a walk-through for getting yourself set up with Vultr, installing Compass, and pairing it with Overland GPS to start tracking your location as creepily as Facebook and Google do it! 🙂

Vultr

Vultr is a nice competitor to DigitalOcean. At $2.50/mo for their cheapest VPS, it’s half the price of DigitalOcean’s cheapest offering ($5/mo for the same RAM, storage, and CPU, though DO offers twice the bandwidth and, well, is trusted more). There had to be a caveat, right?

I signed up, and the first thing I was told to do was add money to the account. I had the option of not adding any cash and just attaching my credit card, but I’m going to end up using Vultr for something or other, so I threw $10 at them (shut-up-and-take-my-money style!).

Then, they told me I could deploy a new server! I picked Seattle as my server location, Ubuntu 17.10 as my poison (which was probably a bad idea; more on that later), and scrolled down to the server pricing. The $10/mo server was pre-selected for me and the $2.50 option was grayed out! (Seriously though, they should give names to these tiers. It’s silly to keep referring to them by price.)

I googled around a bit and found out that they keep disabling the cheapest tier (they call it “Temporarily Sold Out”) as a sort of bait-and-switch to drive new users to the more expensive options. But that sounds like bullshit. If that were truly the behavior, I’d want my money back. But, and I’m glad I did this, I went back and started clicking around to look for solutions. The solution came in the form of New York! Turns out, they try to drive users to less-used data centers, while everyone who’s trying to set things up goes straight for the “Silicon Valley” data center (seriously? Who the heck put a data center there???)

New York and Miami currently have the $2.50/mo tier open (ugh, that naming is so needed! I guess I’ll call it the Micro tier and the next one Mini), and networking is not a problem for me (who cares about a little extra latency getting this non-time-sensitive data to New York and back), so I picked New York and threw my hat in the ring.

The server came up within… minutes? (Seriously, it was fast!) and I had an IP address to point to! Yay! But what’s the password? The usual Ubuntu password didn’t work, and their docs didn’t have much to go by (Vultr’s docs aren’t as awesome as DigitalOcean’s. They’re good, just not there yet. They have a documentation bounty program if you’re interested, dear reader.) Then I checked the email I’d received on server activation. It said that the password is on the dashboard (silly me!).

As I said before, Vultr’s documentation isn’t great, so I followed a mix of Vultr’s LEMP install guide here and DO’s LEMP stack installation instructions here. I installed PHP 7.1 with FPM (which, I must admit, was a little leap of faith because I wasn’t sure Aaron’s code would run on it without throwing up legacy issues; thankfully, it did) and skipped most of the tweaking that Vultr recommends (YMMV).
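For reference, the installs boiled down to something like this (a sketch, not gospel; on Ubuntu 17.10, PHP 7.1 is what the default repos carry, so adjust for your setup) –

# nginx + MySQL + PHP 7.1 (FPM flavor) and the MySQL driver for PHP
apt update
apt install nginx mysql-server php7.1-fpm php7.1-mysql
# lock down the MySQL install (root password, remove test tables, etc.)
mysql_secure_installation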

Compass

Then, I copied over the Compass files (from here) and started following the Setup instructions. The first issue was the .env file. There are a few settings in there which are confusing, so here’s what I did (a sample .env sketch follows the list) –

BASE_URL -> This is your website’s URL. It uses HTTPS. More on that below.

STORAGE_DIR -> This is the data directory which is supposed to store your incoming data. Oddly enough, it doesn’t. When you use the application, the GUI prompts you to make a ‘database’ (it should be called a ‘project’, Aaron). This database makes its own folder in the Compass directory, so this variable apparently doesn’t get used. Set it anyway.

APP_KEY -> This confused me a bit. I don’t think this is a password, but I set it to something like a password. It’s a 32-character string (Laravel uses it for encryption under the hood, so make it random), so have fun setting it up.

DB_CONNECTION -> Set this up as you would for any other MySQL application. Use DigitalOcean’s WordPress tutorial as a hint of what to do.

DEFAULT_AUTH_ENDPOINT -> This was one of the more confusing things I saw. Was the idea that this was some generic authorization? To figure it out, I found Aaron’s own Compass website and tried to log in. Turns out Aaron uses a very neat authorization process: there’s no password. All you do is specify which IndieAuth-enabled website you want to use to authenticate who you are, and it’ll let you log in. Specifying this URL means that if you can log in to that other website, you can log in to this one. The default is set to ‘https://indieauth.com/auth’. If you leave it as-is, anyone with an IndieAuth login anywhere will be able to create an account on your Compass server and potentially use it for their own data. So, I authenticated myself into Aaron’s server and now I have an account there! Of course, I don’t recommend this. I changed this endpoint to my withKnown.com site. That way, only people who can log in to my withKnown site can log in to my Compass server. Who can log in to my withKnown server? Only me. 🙂
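Putting it together, here’s what a filled-out .env might look like (a sketch – all values are placeholders; the DB_* keys follow the usual Laravel convention, so check Compass’s .env.example for the exact names) –

BASE_URL=https://compass.example.com/
STORAGE_DIR=/var/www/nitinkhanna/html/compass/data
# exactly 32 characters, random
APP_KEY=ChangeMeToARandom32CharString!!!
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_DATABASE=compass
DB_USERNAME=compass
DB_PASSWORD=use-a-strong-password-here
# change this to your own site to lock sign-in down to yourself
DEFAULT_AUTH_ENDPOINT=https://indieauth.com/auth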

There’s one more piece of the puzzle which needs addressing. APP_DEBUG is set to true right now. So whenever there’s an error, Compass spits out the entire MySQL connection string, including the password, as well as sensitive system information, for anyone to see. I suspect that once you’re done setting up this server and you trust it, you should follow the Laravel process of moving the application from dev mode to production. This will help secure your application.
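A minimal sketch of that hardening, assuming Compass reads the standard Laravel keys in .env –

# stop leaking connection strings and stack traces on error pages
APP_ENV=production
APP_DEBUG=false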


After this, I moved on to running Composer to install all the dependencies Compass needs. Here are all the issues I faced there –

“Composer not installed” – Install using

apt install composer

“danielstjules/stringy 1.10.0 requires ext-mbstring” –

apt install php7.1-mbstring

“phpunit/phpunit 4.8.21 requires ext-dom” –

apt install php7.1-xml

(apt install phpunit also works, but only because it drags the PHP XML extension in as a dependency; php7.1-xml is the lighter fix.)

“zip extension and unzip command are both missing” –

apt install zip unzip

Now, you can run ‘composer install’ and it’ll work.


nginx

I recommend using nginx. You’ve got a small server and you don’t want Apache eating all the memory, so just use nginx.

Aaron’s nginx config was clear, but not that helpful, because it doesn’t match the usual nginx configs floating around in tutorials. So here’s mine (relevant portions only) –

# serve Compass from its public/ folder, not the repo root (see below)
index index.php index.html index.htm;
root /var/www/nitinkhanna/html/compass/public;

# send everything that isn't a real file to the front controller
location / {
    try_files $uri /index.php?$args;
}
location /index.php {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
    fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
# catch-all for any other .php file
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
}

At this point, I thought I was done. But when I tried to open the site, I ran into some very nice errors in the application. First of all, notice the root. The root of the application is not the compass folder itself, but the public folder inside it. This is not mentioned anywhere in the documentation and was well worth twenty minutes of “what the heck?” and then some. But you have it on good authority that this is what you’re supposed to do.

Secondly, the application wasn’t done making me install stuff. I also had to install PHP’s curl extension –

apt install php-curl

Then, I wanted to digress a little and make my life a little more difficult (or easier, depending on who you ask). Aaron’s own Compass server uses Let’s Encrypt SSL. I’ve always wanted to secure my own sites with SSL, but I’m lazy. For this, I thought, why not!

I found the CertBot instructions for installing with nginx and Ubuntu here. They’re pretty straightforward, with one small snag that I ran into – Cloudflare. I use Cloudflare as my DNS, security, load balancer, God of Small Things. Cloudflare provides SSL. It’s literally one click: when you add a new A record to your domain (such as compass.p3k.io), it adds DNS and security itself by routing traffic through Cloudflare’s network. CertBot doesn’t work with that; it needs direct access to the server. So, I had to disable Cloudflare’s lovely protection for my subdomain and let CertBot do its job. It did so. It automatically modified the nginx config to accept HTTPS-only connections and to route all traffic to HTTPS.
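For the record, the dance was roughly this (a sketch; the package names match the Ubuntu instructions of the era, and compass.example.com is a placeholder for your own domain) –

# install CertBot with its nginx plugin, then fetch and wire up a cert
apt install certbot python-certbot-nginx
certbot --nginx -d compass.example.com

I was even able to set up a crontab entry to auto-renew the certs –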

43 6 * * * certbot renew --post-hook "service nginx restart"

After this, you run the job queue commands as listed by Aaron and you should technically have a running website. But there’s a catch, as there always is. This server that I’ve got is not a ‘mini’. It’s a ‘micro’. 512 MB RAM is not enough to run MySQL, Ubuntu 17.10, nginx, php-fpm, and actually run an application on top of that. So, I ran into a very cryptic error –

[PDOException]                                    
SQLSTATE[HY000] [2002] No such file or directory 

At this point, I had the application running and I was able to visit the site and all, but as soon as I tried to log in, it threw this error. The php artisan command also started throwing it. (By the way, you’re supposed to run the ‘php artisan queue:listen’ command in the background for this server. Follow the instructions here to set up supervisord to do so; there’s a sketch of my config at the end of this section.) Most people on StackOverflow seemed to think that if you replace ‘localhost’ with ‘127.0.0.1’ in the app’s settings, it’ll start working again. That didn’t help. Finally, someone recommended restarting MySQL (not in real-time; I’ve only once in my life used StackOverflow in real-time to get a question answered). Well duh.

Oh? MySQL won’t restart. Why???

It was this community question on DigitalOcean that gave me the answer I was looking for – I had run out of RAM. Turns out, 512 MB is just enough to play with a server, but not enough to run it for reals. Nonsense. Let’s just add swap!
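If you want to confirm the diagnosis first, something like this shows the damage (the kernel’s out-of-memory killer usually leaves a trail in the syslog) –

# how much memory is left, and whether any swap exists at all
free -m
# look for the OOM killer taking out mysqld
grep -i 'killed process' /var/log/syslog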

I used this excellent and very easy DO tutorial to add swap to my VPS. Notice the shade it throws at you for trying to use swap on SSDs. They specifically say that they don’t recommend using swap on DO “or any other provider that utilizes SSD storage”, and that it degrades hardware performance for you and “your neighbors”. DO recommends upgrading your instance so it has more RAM instead of using swap. We don’t listen.
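The tutorial boils down to roughly this (a 1 GB swap file; the size is a judgment call) –

fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# make the swap file survive reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab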

Added swap and voila! It’s working! MySQL fires up and the app stops throwing silly errors! I ran htop on the instance all night to monitor memory and swap use, and it works just fine! At last, we can log in!
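Oh, and that supervisord sketch I promised, for keeping ‘php artisan queue:listen’ alive in the background (paths and names here are mine; adjust for your setup) –

[program:compass-queue]
; keep the Compass job queue listener running
command=php /var/www/nitinkhanna/html/compass/artisan queue:listen
directory=/var/www/nitinkhanna/html/compass
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/compass-queue.log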


Overland

OK, we logged in using our designated IndieAuth website! Now what? You’re staring at a blank screen that recommends you create a database. Do it. You give it a fancy name and it spits out a bunch of configuration. Now what? First of all, change the timezone in the settings to where you are. It’s set to UTC right now, but for me, it’s PST. Also, use

dpkg-reconfigure tzdata

in your Ubuntu command line to change the timezone of your server to where you are. Remember, my server is in New York. But I told it that its timezone is America/Los_Angeles. Because.
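(If you’d rather skip dpkg-reconfigure’s interactive menus, the one-shot equivalent on systemd-based Ubuntu should be –)

timedatectl set-timezone America/Los_Angeles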

OK! You’re good to go! You can throw some data at this server! Head over to the Overland GPS app and add this endpoint to it. Only, what’s the endpoint? I added just my Compass server’s URL and that didn’t seem to work. Then I looked at the app screenshots and there it was –

https://compass.p3k.io/api/input?token=E6ncEYWxT...

That’s your Receiver endpoint! But where do you find it? In your Compass ‘database’ settings, you’ve got a read token and a write token. Next to the write token is a link which says “show API endpoint”. Click it and out pops another line which shows you the above. Simply copy this and magically move it to your phone (I WhatsApp myself these things) and you can plug it into the app and start sending data! The first time you plug it in, the app will collect all the data you’ve accumulated till then (I had some 25,000 points of data to transmit) and smoothly move everything to the server (Aaron really has done a great job with the app). After that, it’ll move the data in batches whose size you can specify (God knows why).

But. You’ll see some odd things. For example, in the afternoon, the server’s map changed the date over to the next date (I suspect this is because my server was still on UTC time; running the tzdata command above should solve this). Also, whenever there’s no data (or the data hasn’t loaded yet), the map points to Portland. I get that Aaron is from there, but I think we should be able to configure this (Seattle, woooo!) because it’s a little jarring. Finally, this will teach you how bad your GPS data is anyway. Most of the time, the map has me squarely in the water, or swimming out and coming back, or has me cross the I-90 bridge by, well, not crossing the bridge but swimming alongside it. But that’s just the world we live in.


Questions/Issues
  1. Why does this server need MySQL? The Compass documentation says that the data is stored in flat files. Then is the MySQL database only used for temporary storage of data before it’s processed and saved to flat files?
  2. Is HTTPS a requirement of the server or a nice-to-have? I am not sure about this and I just took the safer route.
  3. The app, in debug mode, spits out way too much information which it shouldn’t. I’d like clear instructions on migrating it off debug mode.
  4. Did I decipher the meaning of DEFAULT_AUTH_ENDPOINT correctly? Not sure. Also, Aaron, if you’re reading this – what do I do with my login on your Compass server? Could you allow people to store their data on there just for visualization (wiped every night so as not to flood your server)?
  5. I still don’t know what the best configuration is for the app (battery-use to tracking). If you’ve got pointers, throw them in the comments below!

Unraveling the future of Day One Sync

Thursday was an important milestone for Day One and its users. The launch of the Day One browser extensions marks a time when the Day One team is ready to launch API-based products outside of their default apps, something of a return to the time when Day One 1.0 was a beautiful, open garden of apps and services that could plug in (and out) without much trouble. Day One 2.0 robbed a lot of people of those options, and these browser extensions allow us to come back into the fold.

I’m not under the impression that this means that Day One will suddenly be as open and accepting as it once was. No, the walled garden that the team has created will remain. Their promises to end-to-end encrypt all data (while allowing complete access through their API), their wish to remain free of third-party sync services such as Dropbox, and their interest in keeping their company growing, mean that Day One is never headed back to the old days.

But that doesn’t mean things can’t move forward to a good place. Of course, with the launch of Day One Premium, what that good place is, is a little unclear. Yesterday, while launching the extensions, the Day One team answered a few questions on Instagram, and that gives us a hint of how things are going to work from now on.

Let’s first summarize what we understand of the customer ‘levels’ for the Day One service –

  1. Basic – This is a new tier. If you download the Day One app today (or are a Day One Classic user updating to Day One 2.0 today), on iOS or Mac, you’ll be a free user. All your data will be saved locally on the device you use, and any time you want to a. create multiple journals, or b. sync your data to the Day One Sync service, you’ll be prompted to pony up and become a Premium member.
  2. Premium – This too is a new tier. If you want to sync your data across devices, get access to the encrypted journals feature, support Day One in their awesome venture, and get 25% off print book orders, you get to buy into the Day One subscription service. It’s currently $35/year for new users and $25/year for older users, as explained in the FAQ.
  3. Plus – This is the new name for the old tier. If you downloaded Day One 2.0 on any platform before the Premium tier was introduced, this is where you stand. You get access to Day One Sync, get to make up to 10 journals, get access to data encryption, use cloud services such as IFTTT, etc.

Here’s what you don’t get with the Plus subscription –

  1. If you bought Day One on one platform (iOS or Mac) before Premium was launched, and downloaded it (for free) on the other platform after, you don’t get Sync between the devices. You can still export your Day One journal and import it at the other end, but that’s just too cumbersome.
  2. Similarly, you don’t get access to more than 10 journals, and can have no more than 10 images per post.

But yesterday’s release taught me something interesting –

Day One is still a company that cares for its users. So, it seems that if you’re a Plus member, many future features and launches will work for you. The Day One browser extensions currently work only with unencrypted journals. However, since Plus members do get access to encryption and Sync, it’s possible that support for end-to-end encryption will be added to the extensions in the future, and as a Plus member, you’ll still be able to use them.

Similarly, right now IFTTT is the only third-party sync service allowed to plug into Day One. You can use it in a lot of ways – saving your Instagram posts to Day One, emailing an entry into Day One, stashing away your tweets, your weight (using Withings), the day’s weather, your Instapaper Likes, and your Evernote entries.

But I suspect that when Day One launches their API, Plus members will definitely get access to it. They’ll get access for both encrypted and unencrypted journals, and will be able to use a lot of the tools and services they were using with Day One 1.0, updated to work with the API, of course. This seems not only likely, but definitive, given the way the Day One team launched the browser extensions.

Why am I even talking about Plus? It would seem that most future users of Day One will be either Basic or Premium members, right? But most of their current users are Plus members. On top of that, I believe that a large percentage of Day One users fall into one of two categories – they either have only one Apple device (iOS or Mac) and so don’t care for a lot of Premium features, or they went ahead and bought both the Mac and iOS apps, and so weren’t affected by being pushed into the Plus tier either.

However, I am part of a significant minority which wants Day One access on Windows or Android. This is why understanding the Day One team’s motives behind every move they make is important to me. From what I’ve understood, they’ve got nothing but good intentions when it comes to treating Plus users fairly, even if it comes at the cost of Premium subscriptions in the short term.

Future Day One apps (for Windows and Android) will be free and siloed the way the new versions of the iOS and Mac apps are. You’ll have to be a Premium user to sync between these devices. But the devices on which you’re a Plus member right now will give you a pretty premium experience, and any third-party tie-ins and API-based features should be available to Plus members without having to move to the subscription model.

Of course, with the launch of the browser extensions, the Day One team has solved a very big problem – getting journal entries in on Windows (and on Mac, for iOS-only users) for Plus users. That saves people like me a lot of time and effort!

p.s. According to the Day One team, Day One Classic is still around, just not under active development. Most of us (especially if you’ve read till the end) have moved on to Day One 2.0, but if you download the Day One Classic app (or still have it installed on your system), Day One Sync still works with it and syncs to it. So if you have that, keep using it!