27 Jun 2018, 22:24

Did X in Y, without Z

Many people reflect positively on things like glide years (a year-long-ish trip between jobs, schools, or opportunities), saying: “I made great memories in France, Italy, and Croatia during my 6-month trip”.

What they often leave out is what they had to forsake: training, mentorship, income, building community, networking, and so forth.

It’s critical to understand that everything has an opportunity cost. That includes generating fun memories in foreign countries, which often, upon reflection, come to seem like pure positives, but which do have concrete costs. Of course, practical things like language learning are not exempt: becoming a master of golang, or of Spanish, means that time is no longer available for trying out functional programming, or Portuguese.

When deciding on how to spend the next day, month, year, or other amount of time, try to answer this question before starting:

What will I do (x) in y time? What will I forsake (z)?

This is a good way to frame questions like: Should I do a “glide year”? If so, what should I actually do? Should I quit my job to start my own business?

Or for reflection on the past:

I did x in y [time units]. I had to forsake z.

This provides context for big and small decisions. Watching a 7-hour YouTube movie, for example, carries a significant cost - roughly $1,050 in foregone billings - if you’re able to bill $150 per hour to a client.

Of course, this doesn’t mean the decision should be vetoed; just keep opportunity costs in mind both when making decisions and when reflecting on them.

25 Jun 2018, 03:17

Your Character Sheet

Imagine for a moment that you’re writing a small document about a fantasy character for a complete stranger. Its purpose is to show how this character you created fits into a fantasy world.

This is largely what I imagine happens in Dungeons and Dragons. The leader of the game, the dungeon master, needs information about the players’ characters in order to craft a realistic story. The players must offer up some information, hopefully interesting, about their characters. What makes them unique? Why are they here?

The leader of the game would have trouble adjusting his plot around a character who lacks purpose. Similarly, without a brief historical background, it’d be difficult to morph the story around that character.

Creating a real-life character sheet seems like a great exercise. It’s like a resume, but for your life rather than just your career.

You should be able to write down:

  • Purpose
  • Strengths
  • Weaknesses
  • Events that have significantly shaped your history

All of these seem like reasonable things to both be aware of and frequently remind yourself of. Not only that, but they’re necessary to communicate to others in order to have a meaningful relationship with them. An investor looking for his first engineer likely would not accept “I’m not sure” as the purpose on a candidate’s character sheet (resume).

Having trouble? Identify what you can do to create your purpose, or find the courage to share your weaknesses and imperfect past. Remind yourself frequently of your strengths. Most importantly, update the document or align yourself when your behavior diverges from it.

25 Jun 2018, 01:59

Writing more

I’m going to try to write more for the blog.

Looks like it’s been about a year since the last update. I think I’m going to focus on writing short snippets and less on long-form, niche technical articles.

I’m also looking for a strong, focused description for the blog. For now, it’s empty.

It’ll be interesting to see what shakes out.

03 Apr 2017, 23:31

Additional layers of https and ssh security

After moving to the new blog host and scoring only a B on the SSL report, I wanted to make sure I was doing everything I could to secure the server from exploits. Part of the reason for moving was exactly that - increased security! On my journey to improve the score, I learned a fair bit about SSL, server- and browser-based attacks, and how to mitigate them.

I also did a bit of digging into how to secure my personal connection to the VPS over ssh. I’ll cover some information on that at the end of this post.

Weakness is unacceptable

It turns out that, by default, nginx allows weaker cryptographic settings that can be more easily broken by nefarious users. Specifically, an attacker can eventually decrypt traffic between a browser and the server. More reading, including a technical paper and mitigation, can be found here.

For now, we’ll focus on mitigation. Luckily enough, the same site has some information for sysadmins. Let’s get started.

Generating a better DH group

The group behind the research above is convinced that state-level actors are capable of breaking 1024-bit groups. It wasn’t clear whether that’s through sheer computing power or through exploits. Nevertheless, the group recommends using a group of at least 2048 bits.

I figured that doubling wouldn’t hurt anything, so I chose 4096 bits as the length. Running this command generates the .pem file we need to hand to nginx.

openssl dhparam -out dhparams.pem 4096
sudo mv dhparams.pem /path/inside/your/jail

While we’re editing the nginx configuration, I’ll also suggest that you enable HTTP Strict Transport Security (HSTS). It tells browsers to only ever connect over https, which helps prevent downgrade attacks.

I added both HSTS and dhparams info to the nginx configuration file in the jail I created before. Mostly taken straight from their sysadmin suggestion page:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
add_header Strict-Transport-Security "max-age=31536000";

ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

ssl_prefer_server_ciphers on;
ssl_dhparam /path/inside/your/jail/dhparams.pem;

Afterwards, reload nginx:

sudo service nginx reload

This was enough to get me to an A+ on the SSL test. Skeptical readers can run it themselves. Again, I’m not entirely sure how well qualified they are to give that grade - but as long as the grade is improving, things can’t be getting worse… right?
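
If you’d rather not take the grader’s word for it, you can poke at the server yourself with the openssl command-line tool. A quick sanity check, assuming a reasonably recent openssl on your local machine (yourdomain.com is a placeholder; the -ssl3 probe only works if your local openssl was built with SSLv3 support):

# should fail to negotiate, since the config above only offers TLSv1+
openssl s_client -connect yourdomain.com:443 -ssl3

# should connect and print the negotiated protocol and cipher
openssl s_client -connect yourdomain.com:443 -tls1_2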

Hardening ssh

Now that the https side of the server was hardened against the most common attacks, I wanted to see if there was anything I was missing on the ssh side of things. Knowing that OpenSSL is basically full of vulnerabilities and odd nuggets (here’s one of the latest ones I’ve seen), I figured the defaults weren’t good enough. And, according to at least one guy on the ’net, I was right!

The information there is great. There’s also a very brief justification for most of the changes, enough to satisfy a sysadmin without needing a ton of security background. Those paying attention will notice the same information about weaknesses in DH that was covered earlier, in the https part of the post.
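
To give a flavor of the changes involved, here’s a minimal sketch of an sshd_config excerpt in the spirit of that guide. These are my assumptions, not a copy of the guide’s exact recommendations - check that your OpenSSH version supports these options before applying anything:

# /etc/ssh/sshd_config (excerpt)
Protocol 2
# prefer the newer ED25519 host key, keeping RSA as a fallback
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
# avoid small DH groups, per the weaknesses discussed above
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
# basic hygiene for any hardened setup
PermitRootLogin no
PasswordAuthentication no

Restart sshd afterwards (sudo service sshd restart on FreeBSD), and keep a second session open while testing so you don’t lock yourself out.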

A small note: in the section about enabling Protocol 2, you’ll be asked to accept a new fingerprint (as the blog says). Even if you skip that step, you’ll likely be asked to accept a new fingerprint anyway. The scary warning that pops up might make you think your server has been compromised, but it’s just the server presenting its new ED25519 key instead of the old, typically RSA, key.

That’s no guarantee your server wasn’t compromised, of course, but if the warning showed up right after you updated your configuration, you’re probably fine.

Other avenues

Right now I can’t add much to the linked resources, due to time constraints. A few things they leave unsaid, such as port knocking, are also worth considering. I’m thinking about adding that to my setup - expect a future blog post on its pros and cons.

26 Mar 2017, 16:05

Moving the blog host to DigitalOcean, FreeBSD, and nginx

Important: This blog post contains an affiliate link for DigitalOcean. To try to prove I’m not fishing for your dollars, you’ll have to find it yourself!

While I’m not in it for the money, I stand to gain a small credit (as do you) if you click the affiliate link located on this page. If you choose to get started with DigitalOcean through that link, I’d certainly be very appreciative.

TL;DR link to how-to

Little Green Locks

Today I decided to investigate what it would take to add an SSL certificate to the blog.

I had been using GitHub Pages to host the blog. This is a great service offered by GitHub that lets an individual commit files to a git repository and have them served out to the web. Because this blog is powered by Hugo, which generates static .html files (essentially the native file format of a web browser), it is naturally very easy to generate these files and have GitHub Pages serve them up.

Normally, when hosted in the github.io domain (the default), GitHub Pages provides an https connection automagically - here’s an example. Unfortunately, this blog is hosted under my custom domain, which prevents me from using GitHub’s SSL certificate. Obviously, this is completely unacceptable for a security-conscious web site. More on why in a later post.

As far as I could tell, this could be remedied in at least two ways:

  1. Purchase an SSL certificate through my registrar, Gandi (or via many other companies)
  2. Move to a server under my (remote) control, e.g. a VPS on DigitalOcean, allowing me to set up Let’s Encrypt for free

Option 1                                 | Option 2
-----------------------------------------|------------------------------------------------------------------
Can be done easily through Gandi’s site  | Requires setting up Let’s Encrypt
No control/management of server          | Full control of, management of, and responsibility for the server
DNS still handled through Gandi          | Need to switch DNS to DigitalOcean, including domain mail
$18 USD / year (Gandi)                   | $60 USD / year (DigitalOcean)
Not that much work                       | Way more work
Nothing new to learn                     | Way more new things to learn

I chose option 2 because it would let me do something I’ve always wanted an excuse to work on: learning how to use FreeBSD, jails, and nginx. Plus, it’ll be way more fun.

FreeBSD and its philosophy

I’ve always been intrigued by the BSD family of operating systems. For the less technical audience, you can think of BSD as yet another operating system, similar to Windows, OS X, or GNU/Linux. In fact, Apple’s OS X shares a lot of code with BSD.

More technical readers might appreciate a core difference between BSD and GNU/Linux, which are often lumped together in the *nix-like domain. As far as I can tell, there are two main philosophical differences between GNU/Linux and BSD, plus several quasi-myths that I could find little or no evidence for:

  1. Scope: BSD seeks to provide both the kernel and the initial application layer that actually makes the kernel useful. Hence the name GNU/Linux: one cannot really have “just Linux” or “just GNU”, but one can clearly have, say, just FreeBSD or just OpenBSD.
  2. Licensing. There’s a lot of e-ink spilled on this topic, and there may be more from me about this in future posts; for now, the TL;DR is that GNU/Linux has chosen the GPL, and FreeBSD has generally not.

Less clear differences include:

  1. BSD is faster, more performant, etc.: Seems impossible to test this claim on its own merits - how would one set up such a test? - but many anecdotes of less memory usage have cropped up around the tubes.
  2. BSD does releases better: given that BSD is both the kernel and a core set of tools, it behooves the project to ship polished releases - the viability of the OS depends on it. Again, though, these broad claims require a bit of careful analysis and thought.
  3. Security: Many will claim BSD is more secure. Again, a somewhat wild claim, not really empirically testable apples to apples without some serious thought. What is clear, however, is that OpenBSD really prioritizes security. It’s less clear that FreeBSD does the same, though they may benefit from OpenBSD’s attitude (and thus, security updates).

Throw your apps in jail

One major similarity between the two exists in their approach to virtualization. FreeBSD uses a concept of jails to provide virtualization to applications. GNU/Linux has a similar feature, LXC, utilized by tools like Docker. Many think these two are distinct, but in reality, the mechanisms are quite similar. More on this in a future post as well.

The FreeBSD approach to this problem is the use of jails. You can think of jails as Docker-containers built inside of the FreeBSD operating system itself. It’s a bit more nuanced, obviously, but good enough for now.

In any event, software/systems engineers at large have started to appreciate virtualization at the level of applications. Docker’s popularity has absolutely exploded; its ecosystem now contains a massive collection of images, applications, and support from higher-level tools like Marathon. And for good reason: isolated environments prevent one bad application from taking an entire server down, add a layer of security between application exploits and root access on the OS, and make for modular application ecosystems.

The awesome benefit of added security is one of the main selling points for jails. If, say, there’s a security hole in the program you’re using to serve up web pages to browsers, the attacker will also have to penetrate the virtual environment that you’ve (hopefully) set up. While this is not impossible, it is far superior to the non-partitioned alternative, where compromising the web server would result in an attacker owning the box.

nginx, the new King of the Hill?

In a bygone era - about 10 years ago - the Apache web server was the silent workhorse of the internet. Serving up LAMP stacks (and even WAMP stacks!!) all over the place, it was arguably one of a few flagship projects that helped power some of the world’s most popular web pages.

Nowadays, there’s a new kid on the block: nginx. One of the project’s main goals was to beat the Apache web server’s apparently poor performance numbers, especially with regard to memory usage.

As someone interested in new technologies - especially more performant ones - I decided trying out nginx would be something worth doing. Due to its popularity, nginx is now becoming the target of many security researchers and hackers alike. It’ll be interesting and useful to learn a bit more about this super-popular web server.

Free SSL from Let’s Encrypt

To obtain the actual SSL certificate for the server, I’ll be using Let’s Encrypt. It has emerged as an organization with great interest in promoting https, aka “the green lock”, across much of the internet: essentially anyone with their own server can register and get an SSL certificate for their web site - for free! They are also making openness a priority. As someone interested in security and privacy, that is something I’m absolutely on board with. Interested readers can get more information here.

Action

In this section I’ll document the process I used to go from GitHub Pages to a FreeBSD Digital Ocean droplet, running nginx inside of a jail.

I’ll treat this as a dumb server serving up .html files. Therefore, the server won’t have hugo, go, or anything else used to actually generate the site. Think of this droplet/server as a Docker container that serves up static files from a directory via nginx. Less software on the server means less opportunity for exploits. Simple and clean.

I’m doing this all from Arch Linux. I’ll assume you’ve got SSH keys and are comfortable with the command line. If you aren’t, just read through the docs on DigitalOcean - which, by the way, has excellent documentation (so far, anyway).

Start the FreeBSD droplet

First, you need to sign up for DigitalOcean. If you want to support me, you can use this link, or you can use their web site directly.

Start a droplet with the FreeBSD operating system. You’ll need to add your SSH public key, since password authentication is disabled for the root user. All of the extra options are optional; the only one I checked was Enable IPv6. Put blog in the name and/or tag it appropriately if you expect to use DigitalOcean a lot.

Once running, you can test your setup like so:

ssh freebsd@your-droplet-ip

For now, close the connection after you ensure it works.

Handling DNS

Next up is transferring DNS from your current provider to D.O. If you’re using Gandi, or a host of other registrars, you’ll certainly find this link useful. Gandi allows you to postpone the transfer to a date in the future, which is probably wise - otherwise, you might have a brief outage in email delivery, an unavailable web site/blog, and things like that. I recommend saving this step until after completing the DigitalOcean setup, but because the next document suggests doing it now, I thought I’d cover it here.

In any event, take a look at this document to map the proper DNS entries to DigitalOcean. Make sure you’ve gotten all of the ones from your current DNS provider into the droplet.

Do not forget to add your mail records if you use a custom mail server for your domain’s email address! Forgetting this can seriously hose your email, and you may have delayed/bounced emails from others sending mail to your domain. For anyone running a business from their domain and relying on email, this is crucial to get right. Double check the MX records, especially before making the switch.
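
One way to double check is to query DigitalOcean’s nameservers directly, before making the switch, and compare the answers against your current provider’s. A quick sketch using dig (ns1.digitalocean.com is one of DigitalOcean’s nameservers):

dig +short MX yourdomain.com @ns1.digitalocean.com   # what D.O. will serve
dig +short MX yourdomain.com                         # what’s being served right now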

Setup FreeBSD with a new user, jails, and a firewall

We’ll now switch gears again and head back to the command line. ssh into your droplet.

Optional: Install modern shell

One interesting note is that FreeBSD defaults to csh. That is absolutely insane! So I installed zsh (yes, I’m a hypocrite about installing stuff):

sudo pkg install zsh

I then made this the default shell of my own user (see below).
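
For reference, changing the default shell is a one-liner once the user exists (replace max with your username; pkg puts zsh in /usr/local/bin on FreeBSD):

sudo chsh -s /usr/local/bin/zsh max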

Optional: non-default user

Using the freebsd user is OK, but it’s a small security issue, as it gives an attacker a (perhaps) guaranteed username to target. I recommend creating a new user that you’ll use to interact with your droplet. Whether you decide to do this is up to you.

Copy-pasta for doing this:

sudo adduser
# follow prompts
# when prompted, add user to wheel group, OR:
# wait until after and run:
sudo pw groupmod wheel -m max      # replace 'max' with your user in next commands
sudo mkdir /home/max/.ssh
sudo cp ~/.ssh/authorized_keys /home/max/.ssh/
sudo chmod 700 /home/max/.ssh
sudo chmod 600 /home/max/.ssh/authorized_keys
sudo chown -R max:max /home/max
sudo vim /usr/local/etc/sudoers    # ugh, vim (visudo is the safer way to edit this)

After opening that file, uncomment this line towards the bottom:

%wheel ALL=(ALL) NOPASSWD: ALL

If your vim is rusty, remember: j/k (or the arrow keys) to navigate, hit i to start editing (??!?!?), then esc followed by :wq to save and exit.

Set up firewall, jail, nginx

Follow this excellent post to set up this stack. Huge props to that author for that post.
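
I won’t duplicate that post, but to give a sense of its shape, here’s a minimal sketch of the jail setup assuming the ezjail route. The jail name and loopback IP are placeholders of mine; the linked post covers the firewall rules and other specifics:

sudo pkg install ezjail
sudo sysrc ezjail_enable=YES
# fetch the base jail (the userland all jails share)
sudo ezjail-admin install
# create and start a jail for the web server on a loopback alias
sudo ezjail-admin create blogjail 'lo1|10.0.0.2'
sudo ezjail-admin start blogjail
# get a shell inside the jail to install nginx
sudo ezjail-admin console blogjail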

Get the SSL cert set up

Let’s Encrypt recommends this tool for getting up and running. I gave it a shot since, of course, I have shell access.

This site is the version we’ll want to use. There’s no plug and play for FreeBSD. Bummer. For some reason I’m not surprised.

I chose to install the python stuff via pkg as the ports thing didn’t work for me. I think it was because I didn’t have ports configured - or something like that - or maybe it’s straight up broken. Unfortunately, this installed a bunch of python2.7 gunk. I’ll be looking into how to do this better soon.

Note: I’m running these commands outside the jail. The firewall prevents running these commands inside the jail, I think; the software will time out, leaving you wondering WTF is going on (it did for me, anyway).

Here we go:

sudo pkg install py27-certbot
# -w points at the webroot inside your jail - or wherever yours lives
# use --staging first; you can easily get rate limited on 'prod'
sudo certbot certonly --webroot \
    -w /usr/jails/YOUR_JAIL/usr/local/www/blog \
    -d maxthomas.io \
    -m 'your@email.com' \
    --staging \
    --config-dir /usr/jails/YOUR_JAIL/usr/local/etc/letsencrypt

If that works, run it again without the --staging flag. Note the locations of your files; you’ll need them later when editing the nginx config.

Editing nginx config

After heading back into your jail:

vim /usr/local/etc/nginx/nginx.conf

This post claims that the best way to do redirects is to have two server blocks. I followed along and created the two blocks, one for the redirect and the other for my actual site.
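
As a sketch, the two-block layout ends up looking something like this (my.domain.com and the webroot are placeholders for your own values):

server {
    # port 80: do nothing but redirect to https
    listen 80;
    server_name my.domain.com;
    return 301 https://$server_name$request_uri;
}

server {
    # port 443: serve the actual site
    listen 443 ssl;
    server_name my.domain.com;
    root /usr/local/www/blog;
    # certificate lines go here - see below
}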

Add the following lines for your ssl cert in the second block; obviously, your domain path will differ, so update it accordingly:

  ssl_certificate /usr/local/etc/letsencrypt/live/my.domain.com/fullchain.pem;
  ssl_certificate_key /usr/local/etc/letsencrypt/live/my.domain.com/privkey.pem;

I also changed the location/root from /usr/local/www/nginx to something else as I was not sure if the former was potentially going to go away with package updates. There is, hilariously, a file called EXAMPLE_DIRECTORY-DONT_ADD_OR_TOUCH_ANYTHING, so I followed its instructions and didn’t do anything in there.

Back to the shell to copy error pages and the index.

mkdir /usr/local/www/blog
cp /usr/local/www/nginx/*.html /usr/local/www/blog/
service nginx restart          # if all went well, no config errors

You should now be able to hit http://yourdomain and get auto-redirected to the https version. Slick! Unfortunately, this only got my website a B grade on the SSL report. I have no idea if that grade is worth the disk it’s stored on; more on that in the future as I try to improve the grade.

I didn’t yet try to automate the renewal with cron. I’ll likely update this post with information about how to do that, but at present, I’m ready to get this thing over with.
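
For the curious, the sort of crontab entry I have in mind looks like this - untested on my setup, so treat it as a sketch. certbot renew only re-issues certificates that are close to expiry, and the post-hook reloads nginx inside the jail:

# host's root crontab: check weekly, reload nginx in the jail on renewal
30 4 * * 1 certbot renew --config-dir /usr/jails/YOUR_JAIL/usr/local/etc/letsencrypt --post-hook "jexec YOUR_JAIL service nginx reload"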

Move, Move, Move

Well, we’ve now got the server up and running, but our .html files are conspicuously absent.

As stated earlier, I want this to be a dumb server that just serves up .html files. As a result, I’ll be rsyncing the generated site over to my droplet, then moving the files into the jail. I’ll probably automate this in the future, but for now:

# on the computer with hugo installed
hugo new post/blahfooblah.md
# write something half-decent for all of our sakes
hugo
# now have a bunch of .html files and so forth
rsync -raz --progress doc/ server:blog/                                   # pick your own directories
ssh server
sudo rsync -raz ~/blog /usr/jails/YOURS/usr/local/www/yours/blog/blah     # fix paths yourself

Assuming all your paths line up, you should now be able to hit the link to your blog and have it served up. Now mix yourself a Black Russian (or a cup of ginger tea) and enjoy your new droplet, SSL certificate, and blog host.

Well, there you have it: about a day’s work learning about DigitalOcean, DNS, nginx, FreeBSD, jails, firewalls, Let’s Encrypt, and a bunch of other stuff. I’m pretty satisfied with the setup, in all honesty: it’s exactly what I want, a jailed nginx that serves up content I’m producing on another machine. Look out for future updates about automating the certificate renewal as well as automating content movement.

11 Mar 2017, 15:58

From CyanogenMod to LineageOS on an Honor 5x

In case you hadn’t heard, the Android-based operating system CyanogenMod is no longer around. I had been using CyanogenMod on both my phones (Nexus 5 and Honor 5X) with great success: I’m a big fan of the project, even though at least one important person would contend it’s less secure than stock Android, at least for the plebs. Like your parents.

Personally, I’ve chosen to flash CyanogenMod on my devices to avoid some of the bloatware that comes packaged with Android phones upon purchase. Not only that, but the Honor 5X must have been shipped with some absolutely horrible software choices, because it was very sluggish, even with at-the-time impressive hardware specs.

I decided I’d take a crack at “upgrading” to LineageOS 14.1 from CyanogenMod 13.1 on the Honor 5X, mostly because I was having trouble maintaining a WiFi connection to a 2.4 GHz network, which may have been resolved in a nightly update. I also figured that I’d have to update eventually, given the absence of future CyanogenMod updates.

These notes are here to help others who may undertake the same task. For those following along, I’m using Arch Linux on my laptop, so commands in terminal form represent things that were run on Linux.

TLDR

  • Find your phone here
  • If yours is also an Honor 5X: keep reading. If not: hope the docs are correct and follow them.

Kiwi users

  1. Download the build you want
  2. If you want the Google Play Store / Android goodies, download OpenGApps. Read what each version offers before choosing; kiwi uses arm64, so pick that.
  3. Get adb installed and set up
  4. If needed, enable developer mode on the phone: Settings, About phone, tap Build number 7 times
  5. Enable USB debugging, advanced restart options in developer options
  6. Plug in phone via USB to machine with adb on it
  7. Run these commands, noting possible file name differences:
    1. adb push lineage-14.1-20170308-nightly-kiwi-signed.zip /sdcard/
    2. adb push open_gapps-arm64-7.1-pico-20170311.zip /sdcard/
  8. Reboot phone into recovery: hold power button, restart, look for Recovery Mode in options
  9. Once TWRP loads, go to Wipe, then individually check each option, and swipe to wipe. E.g., Dalvik then swipe, cache then swipe, … until finished. WARNING: you may want to save the contents of your microSD card, so don’t wipe that if you do!
  10. Still in TWRP, click install, queue up the LineageOS zip, then if you downloaded GApps, the GApps zip.
  11. Flash
  12. Reboot
  13. Win

For those interested in the journey, read on.

Downloading the appropriate files

It was easy to find the right files on the LineageOS site. I’m using the kiwi device, which has a nice guide that walks a user through installation instructions.

I had to download two files: the update itself, and Open GApps.

I chose the pico version because I do want the app store, but I have no interest in using “OK Google” or face unlock.

I then used adb to push these files to my phone’s /sdcard/ folder:

adb push lineage-14.1-20170308-nightly-kiwi-signed.zip /sdcard/
adb push open_gapps-arm64-7.1-pico-20170311.zip /sdcard/

For those running Linux without adb set up, the Arch wiki has a nice guide. Those using other OSes may appreciate the info but will need other software.

Backing up

The first thing I did was back up all my data to my laptop. I did this the “old-fashioned way”: mounting the device and simply rsyncing files to my laptop’s hard disk. I used simple-mtpfs from the AUR. All I had to do was enable file transfer mode in the USB options on my phone.
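
Roughly, that process looked like this (the device index and mount point will vary; simple-mtpfs is FUSE-based, hence the fusermount unmount at the end):

# list connected MTP devices, then mount the phone
simple-mtpfs -l
mkdir -p ~/mnt/phone
simple-mtpfs --device 1 ~/mnt/phone
# pull everything worth keeping over to the laptop
rsync -av ~/mnt/phone/ ~/backups/honor5x/
# unmount when finished
fusermount -u ~/mnt/phone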

I then thought it’d be a good idea to take a more complete, whole-device backup, perhaps done via the phone itself. That’s where I hit the first of many issues.

Device encryption

As someone concerned about my privacy, I’ve chosen to encrypt my device. If you have an Android phone and are not sure if your device is encrypted, you may want to get educated about it. In the part of the world I’m currently in, phone theft is not uncommon, so it’s important that the data on my device cannot be easily accessed by potential thieves.

In CyanogenMod, it was very easy to encrypt the device. Unfortunately, there seems to be no way to decrypt it afterwards. More annoying still, the recovery image I’m using doesn’t even offer an option to decrypt the device, and produces several errors when attempting to access the encrypted partitions; this makes sense, as it can’t read them.

That meant there was no way to back up the device without a microSD card (which somehow I decided against purchasing before leaving the US - yuuge mistake!) or a special cable that allows microSD-to-USB storage transfers.

Not only that, it was impossible to even wipe the device due to the protected partitions! Looking back, I should probably have upgraded the recovery software to the latest version first, but a few open issues didn’t inspire confidence.

I then decided to take a crack at sideloading the LineageOS update.

Sideloading via adb

I’m not an expert in adb but apparently it is possible to “side load” things into the phone using this method. I’m still not exactly sure what that technically means, but I foolishly convinced myself it was worth a try after reading a few light-on-info blog posts.

It was easy enough to sideload the update:

ll2 :: ~ » adb sideload lineage-14.1-20170308-nightly-kiwi-signed.zip
Total xfer: 1.00x

Was that all I needed for a slick new phone operating system?

Of course not. But I wanted to see if it’d work at all, so I eagerly rebooted the phone.

com.android.phone has stopped working

The phone began to restart, and after some churning, it even prompted me for my encryption password. I was actually thinking that this turkey would run right out of the box.

Then it occurred to me that I had forgotten to sideload OpenGApps, so I was quite pessimistic that actual apps would work - defeating the point of having a “smart phone”.

What I didn’t expect, however, was the “black screen of death” that awaited me about 30 seconds after decrypting the device. After that, a cascade of processes stopped working, including very important-sounding ones, like com.android.phone and the system UI (!). Looks like I was not successful on my first pass.

At this point I realized I was going to be Having Fun, so I got a snack and put on a playlist, because this was not going to be a quick project.

Recovery mode

It took me a while to figure out that booting into recovery mode on the Honor 5X involves holding volume-UP and power (not volume-down, as the wiki indicates). After figuring that out, I figured I’d first try updating the recovery software, in case a newer version enabled some feature to decrypt the device during recovery.

They have some instructions, which I followed. I chose to install via TWRP itself: look towards the middle of that page, under “TWRP Install”.

I downloaded the latest version for my phone and pushed it:

max@ll2:~ $ adb push $HOME/Downloads/twrp-3.1.0-0-kiwi.img /sdcard/
[100%] /sdcard/twrp-3.1.0-0-kiwi.img

Per their instructions: go into recovery, click install, select the image, click the pushed image, select recovery, and flash. Reboot the phone.

After doing this, you’ll be tempted to hold the recovery-mode keys during the reboot, but don’t do that - it just results in a boot loop. Letting the phone boot normally will take you right back to the newly updated TWRP recovery screen.

It looks like the software update didn’t help: I’m still stuck with an encrypted partition and no way to unlock it, and a currently unusable phone. Next up: try to sideload both the update and OpenGApps:

max@ll2:~ $ adb sideload lineage-14.1-20170308-nightly-kiwi-signed.zip
Total xfer: 1.00x
max@ll2:~ $ adb sideload Downloads/open_gapps-arm64-7.1-pico-20170311.zip
Total xfer: 1.43x

Side note: the percentages for adb don’t seem to match the progress bar on TWRP’s screen. I know progress bars are often just made up graphics to reassure users that “stuff is working” but that’s a little sketchy.

Anyway, more importantly, things seemed to work, other than the inability to access /data - presumably due to encryption. That doesn’t seem ideal.

Will it blend?

Progress?

After rebooting, I now got to see my beautiful phone’s home screen.

For about a second.

Then, the warnings about apps closing started appearing, and everything went black… again. Hardly ideal. I decided to reboot into recovery, which is about the only thing I’ve figured out how to do reliably at this stage.

But wait: when I try to boot into recovery, I’m now prompted by a Huawei eRecovery software splash screen. The hell? Did sideloading both apps somehow annihilate TWRP? Or did the recovery mode keys change yet again?

I was somewhat tempted to give up, plow ahead with the Huawei software, then just nuke everything once it had - hopefully - restored my phone to Working Condition. At least it was an option. But I figured I’d try to at least get TWRP to show up before folding.

Finally, I was able to get TWRP to show up by holding volume-UP and power for the entire boot, until TWRP appears. Letting go prematurely somehow invokes the Huawei recovery utility instead. Perhaps that will be useful info for someone.

Let’s try once more with TWRP to get something going.

Act with intent

I decided to use TWRP to wipe everything, but instead of checking all the boxes at once, I checked and wiped them one at a time. In other words:

  • Click wipe
  • Click advanced wipe
  • Select Dalvik / ART => Swipe
  • Select cache => Swipe
  • Repeat for everything related to the phone - except maybe the microSD card, if you value its contents

To my surprise, this actually worked. There were no issues, except an odd error about an invalid .zip file when wiping Dalvik. I’m not sure how or why it worked. But now I was very optimistic that I’d be able to install LineageOS and OpenGApps normally, per the original instructions, and hopefully end up with a usable phone.

I had to re-push the zips, as I had wiped internal storage - the same adb push commands from earlier.

I then clicked Install, selected the lineage zip, then added the open gapps zip to the queue. Somewhat eerily, the process failed when I checked zip file verification. So I unchecked it, queued up both zips again, waited, … and hoped.

Boom - outta here

And just like that, after some serious loading, I was greeted by a setup screen much like the first time I put CyanogenMod on the device.

I finally had a LineageOS-enabled phone with OpenGApps pico.

Wrapping up

Well, as one should expect with any major software transition, switching things around wasn’t the smoothest experience. But with a little patience, some music, a cup of decaf coffee with a hint of soy milk, and the Internet, I was able to upgrade to LineageOS - leaving CyanogenMod behind for good.

I’ll be forking and contributing to the LineageOS wiki with things I found that may help others. Here’s hoping that process is not as cumbersome as the software update was.

19 Feb 2017, 17:47

Now Entering an Analytics-Free Zone

Telling people you are starting a blog is one of those things that typically garners a reaction. Mostly, it involves questions: what will you discuss? How often will you post? Are you going to quit before you start (poignant!)?

Occasionally, however, you’ll also get advice on what to discuss, how often to post, and many other things. I’m always grateful to learn from people who have blogged more than me, so naturally I was excited to hear about others' blogging experiences. Some people even showed me their blogs.

At least two friends, whom I would consider non-technical, offered to show me their Wordpress setup, complete with statistics about which users from what countries had visited their site, what pages had been the most popular, and so forth. I believe this functionality is offered by WP out of the box as an analytics package.

I’m not sure if they knew, but they also had Google Analytics (GA) set up on their sites. I wasn’t sure if GA was powering the Wordpress statistics or was an independent thing. In any event, data was being collected via the ever-present web analytics on the sites.

A brief primer on web analytics

Web analytics essentially involves tracking the data generated by interacting with a web site. Much of this data concerns a user’s behavior: how they got to the site, how much time they spent on a page, whether the next page they went to was on the same domain, how long pages take to load on average, what city and country visitors are coming from, and other related measurements. Wordpress, Google Analytics, and others provide useful aggregations of this data and expose it to the owner of the particular blog or web site.

Generally speaking, this data can be very useful for improving a website. There’s certainly utility in knowing, for example, the most popular referrers (perhaps there’s a strategy to develop there when it comes to ad placement or identifying an audience), the most popular articles, the percentage of your audience who can afford to spend money on what you’re selling, and so forth. I believe there are even some social media components: for example, the number of users coming from the ’book and Twitter.

Of course, if all the numbers read 0, you’ve got some work to do. But don’t fret about that too much.

I can see how this data is both useful and fun to look at, for non-technical people as well as those of us who work with and make decisions based off data on a daily basis. There’s a certain positive reaction, according to my friend, when a new visitor from Bolivia stumbles upon one of his articles about India (I didn’t have the heart to explain that it may just be a bunch of bots).

I imagine the insights are even more relevant if your blog is at a point where significant traffic is driving referrals, products, or ad revenue - in other words, the money you get to take home. Pretty cool, huh?

It’s just a numbers game

But the data must come from somewhere, and in this case, it’s almost certainly from users unaware of, or unable to stop, the tracking. There’s no information, opt-out process, or any other indication that the users’ mouse clicks, location, and next site visited are being tracked.

Many users who are not tech savvy probably can’t even describe what web analytics are capable of, much less identify the sites (hint: virtually all) that use them. It’s not possible to know whether a site runs Google Analytics until it’s too late - GA has already loaded and presumably sent data back, all without your consent. And, as far as I can tell, Google Analytics does not respect do-not-track, which is itself already a somewhat technical feature.

More advanced users can protect themselves to some extent. Or, if you truly trust Google, you can install some plugin that allegedly tells Google Analytics not to send data back when visiting sites with GA. I won’t link to this as I don’t want to appear to encourage users to download this archive without seeing its contents, and I’m certainly not volunteering to scope it.

Piwik, the savior?

Before you think I’m putting Google on blast unfairly, I won’t spare other analytics software either. While I appreciate some of Piwik’s selling points - owning the data, “user privacy protection”, F/OSS - at the end of the day, it still collects user data that the users are generally unaware of. On top of that, it involves standing up MySQL (in 2017?) and PHP (?!), thus requiring a running server, not to mention a plethora of potential security issues. The average web site owner has absolutely no chance of fulfilling the advertised user privacy protection while administering a PHP-MySQL stack on their own. Postgres is not even supported, and given that the issue has been open for 8 years now, it probably won’t be.

Back to user space

Needless to say, the user gets a raw deal here. At minimum, they’re blissfully unaware of the massive amounts of data they leave behind when visiting web sites, how they are tracked leaving and entering other sites, and so forth.

I will cover these issues in more (and less, for those who don’t work in tech) detail in upcoming articles, but for now, it’s sufficient to say that doing this to users here is completely unacceptable. As someone interested primarily in helping users, as well as in maintaining and protecting privacy, I don’t find this sort of hidden data collection very attractive.

A matter of principles

I won’t be adding analytics to this blog, no matter how useful they might be for developing it. To be honest, it’s a principled (and therefore simple) decision: as someone whose intent is to help users, and who values privacy and has put effort into not being tracked online, I can’t possibly consider using privacy-compromising technology here.

At first, I thought this might be somewhat of a Stallman-esque stance: this blog would be the only site, or one of a few, that chose not to embed analytics or social media tracker buttons. Many would claim: what’s the point? Join in on the massive vacuuming of your users’ data for your benefit (as well as the benefit of a large advertising company)!

To my delight, this had been done by a couple of other blogs I had seen recently. Mike’s blog, from which I essentially taxed the entire setup of my own blog, omits both GA and the Facebook and Twitter buttons. He also went to the trouble of removing GA from the otherwise excellent purehugo theme, contributions which I gratefully used myself. He justifies not using analytics with a privacy argument. Another blog takes a different but also principled approach, reducing cruft and inefficiency, with discussion about how these things reflect the state of our society in general.

I personally appreciate both positions, but the privacy one rings closer to home.

Core purposes as guidance

One theme I want to be a part of this blog is the idea of helping others navigate a world in which data they emit is frequently collected, sold, bought, stolen, and battled over in courts, both normal and secret (among other things).

While I have some goals for the blog, I can’t share the specifics due to psychological reasons. None of the goals, however, involve monetizing users. As a result, web analytics won’t be a part of the experience around here.

With that said, enjoy a non-intrusive (and hopefully speedy) experience on this blog, now and into the future.

12 Feb 2017, 22:42

first

Welcome to my blog. I’m told that perfection is the enemy of the good - or something like that - so I’m getting on with starting this blog up…

… just as soon as I choose a theme…

As I see it, this blog will contain topics related to technology, software, security and privacy, and maybe a little financial and business information. That sounded super dry, but hopefully we’ll have a bit of fun along the way.

I’ve done a ton of work that has never seen the light of day. I hope this blog will be a way for me to give a little something back to the community.

Thanks for reading.