GraalVM (a bit) beyond Hello World

TL;DR

GraalVM native images are a real innovation for Java apps, providing much smaller images and faster startup, which is especially useful for apps that are horizontally autoscaled. However, they come at the cost of extra development effort, especially for existing apps. If taken into account from the start of a new app, this effort should be smaller; on the other hand, native images pin us to JDK8 for now.


After working as a developer with Java and Docker for many years (and more recently also as a trainer), it’s obvious to me that Docker and Java are not a perfect match. As with all interpreted languages, the interpreter (the JVM, in Java’s case) imposes a burden in both size and startup time on our Docker image/container.

Java shares this fate with other platforms such as .NET, Node.js and Ruby. To make a point, the following table shows a brief overview of the current “alpine” variants (i.e. the smallest) of those platforms’ base images:

Image | Size (MB)
adoptopenjdk/openjdk8:jre8u212-b04-alpine | 48
mcr.microsoft.com/dotnet/core/runtime:2.2.4-alpine | 36
node:12.0.0-alpine | 26
ruby:2.6.3-alpine3.9 | 26

(The sizes are the compressed sizes of all layers of the images within Docker Hub. The image sizes are calculated as described here.)

In contrast, natively compiled apps written in Go or C/C++ can be built from scratch (if statically compiled), starting off at 0 MB.

GraalVM to the rescue

About a year ago, Oracle announced the first release candidate of GraalVM, offering better interoperability for Java programs with non-JVM languages as well as precompiled native images with instant startup and low memory footprint (using ahead-of-time (AOT) compilation). In the announcement they stated that Twitter already uses GraalVM in production to efficiently run their Scala workloads.

From the start, this sounded like an innovation to me. As I’m always a bit low on free time to play around with technologies, I started following the topic by reading articles that crossed my way.

Great introductions, though they all have one thing in common: they compile a single “Hello World” Java file into a native image.

This made me wonder whether it’s also that easy for more real-life projects and what pitfalls there are, especially regarding the known limitations of native image generation, such as reflection.

Getting the hands dirty

My first approach was to naively adapt the Dockerfiles of two existing projects to produce native images. Containing the compilation in Dockerfiles has the advantage of not requiring anything to be installed locally and is also a kind of “infrastructure as code”, providing deterministic results and allowing for version control.

The projects I chose were fernflower and colander.

BTW – as they both failed in the beginning (with GraalVM 1.0.0-rc14), I had a look at Java frameworks that officially support GraalVM native images. See my article: Short comparison: Building Graal Native Images with Quarkus, Micronaut and Helidon.

When I started, the latest version was GraalVM 1.0.0-rc14 (the 14th release candidate for version 19.0.0 🤔). For both projects I stumbled upon a number of (to me) confusing errors (like NullPointerExceptions) that all magically vanished once I updated to the “ready for production use” version GraalVM 19.0.0, as soon as it was available.
Nice work by the GraalVM team 👍 (if anyone is interested in the errors, I carved them into the Dockerfiles or Git commit messages; see fernflower and colander).

Here are my most interesting findings (I will elaborate on them below):

  • the fernflower CLI app works with a Docker image of only 5.3 MB in size! This is really revolutionary for a Java app 🎉
    Note that it only compiles statically (which is what I wanted anyway, to be able to use a scratch Docker image).
  • the colander native image also compiles (it’s also only 5 MB). However, the app does not work because of its dependencies. These were the kind of real-world findings I was looking for.

Findings in detail

Happy days

As said, the GraalVM native image build for fernflower works like a charm. In fact, I was so charmed that I created an automated build at Docker Hub that regularly builds Docker images for the latest version of fernflower, using different base images: schnatterer/fernflower-docker.

So if you ever need a Java decompiler, just do a docker run --rm -v $(pwd):/src schnatterer/fernflower and 5 MB later you’ll have your decompiler ready.
These images also allow for a nice comparison of image sizes for different Java base images.

Image | Size (MB)
native/scratch image | 5
adoptopenjdk/openjdk8:jre | 45
adoptopenjdk/openjdk8:alpine-jre |
gcr.io/distroless/java:8 |

5 vs 45 MB comparing the native image to a regular JRE image. Stunning, isn’t it?

BTW – this is the final Dockerfile to build the native image.

Facing real world challenges

Of course, I love it when a plan comes together. But on the other hand, I suspected this wouldn’t always be the case for as complex a thing as GraalVM native images. Here are the issues I encountered with the colander app (as said before, these issues are well-documented known limitations of GraalVM):

  • When starting the native image, there’s not much output on the console.
    Reason: not surprisingly, unlike the jar, the native image does not contain a logback.xml. In order to fix this, we would have to copy the logback.xml manually into the final Docker image during the build.
  • The command line option --help does not show any options.
    Reason: these options are defined in annotations and are read at runtime using reflection via the JCommander framework. How to fix?
    Configure the reflection for native image generation (see the sketch after this list). Fortunately, some Java CLI frameworks like picocli support generating the config files out of the box, so migrating would also be an option.
  • Colander can’t show its own version name.
    Reason: the version name is read from the MANIFEST.mf file using the cloudogu/versionName library. The file is (again, not surprisingly) not contained in the native image. This made me wonder if it wouldn’t be much simpler to read the version name from a Java constant. A hard-coded value just feels more read-only than a text file such as the MANIFEST.mf. I added this feature to the library using annotation processors.
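To give an idea of what such a reflection fix can look like, here’s a minimal sketch using GraalVM’s Feature API. The options class name is made up for illustration; alternatively, a JSON config file passed via -H:ReflectionConfigurationFiles achieves the same:

import org.graalvm.nativeimage.hosted.Feature;
import org.graalvm.nativeimage.hosted.RuntimeReflection;

// Sketch: registers a (hypothetical) JCommander-annotated options class for
// reflection at image build time. Activate it when building, e.g. via
// native-image -H:Features=com.example.CliReflectionFeature ...
public class CliReflectionFeature implements Feature {

	@Override
	public void beforeAnalysis(BeforeAnalysisAccess access) {
		try {
			// made-up class name – replace with your app's annotated classes
			Class<?> options = Class.forName("com.example.CliOptions");
			RuntimeReflection.register(options);
			RuntimeReflection.register(options.getDeclaredConstructors());
			RuntimeReflection.register(options.getDeclaredFields());
		} catch (ClassNotFoundException e) {
			throw new RuntimeException(e);
		}
	}
}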

Conclusion

The issues encountered definitely prove my hypothesis that GraalVM native images, while a technological innovation with a lot of potential, cause extra effort during development. So we have to decide if this extra effort is justified by the advantages.

I presume that, if planned from the beginning, the effort is a lot smaller, because we can take GraalVM support into account when making our technical decisions. There are frameworks like Quarkus, Micronaut and Helidon that have native support for GraalVM, minimizing the extra effort. For large existing apps, though, this does not help. The colander example is really small (only about 1k net LOC) and already caused a lot of effort.

So would I use GraalVM native images in production?

I probably wouldn’t migrate existing applications, unless they run at a massive scale and the potential savings (faster horizontal scaling, smaller memory footprint) justify the effort of building the native image in the first place. For new projects, I would assess building GraalVM native images from the start, if sticking with JDK8 is OK. As of GraalVM 19.0.0, native images only support JDK8.
In the long run, most popular Java frameworks, not only Quarkus, Micronaut and Helidon, are likely to support native image generation. For now, Spring support is still a work in progress, as also mentioned in the GraalVM 19.0 release announcement. If I had a team with profound Spring experience, I would only switch to another framework for a good reason.
On the other hand, if developing with the microservices architecture pattern, first experience with GraalVM native images could be gained by implementing small services with it.


Short comparison: Building Graal Native Images with Quarkus, Micronaut and Helidon

The technological innovations of the last years, such as the adoption of containers, cloud-native technologies, the microservice architectural style, the inception of GraalVM and the end of JavaEE (as we know it), have energized the Java framework market.

As of May 2019, there are at least three frameworks supporting GraalVM native images out of the box, targeting cloud-native microservices:

  • Quarkus,
  • Micronaut and
  • Helidon.

As building GraalVM native images is a bit challenging, I was curious to find out how these three frameworks live up to their promises. I worked through the respective getting started guides and wrote down some similarities and differences, resulting in this short (and surely incomplete) comparison of the three frameworks. See the following table for an overview.

General comparison

(Columns: Quarkus | Micronaut | Helidon)

Core project source: quarkusio/quarkus | micronaut-projects/micronaut-core | oracle/helidon
Website: quarkus.io | micronaut.io | helidon.io
Started/backed by: RedHat | objectcomputing | Oracle
First commit: 2018-06-22 | 2017-03-06 | 2018-08-28
GitHub stars (05/2019): 1693 | 2283 | 1371
GitHub contributors (05/2019): 88 | 120 | 26
Number of commits (05/2019): 3970 | 5907 | 617
Supported languages: Java, Kotlin | Java, Groovy, Kotlin | Java
Supported build tools: mvn, Gradle | mvn, Gradle | mvn
Supported APIs for Graal native: Microprofile, vert.x | Micronaut, ReactiveX/RxJava | Helidon SE (Microprofile, without native image)
Programming paradigms for Graal native: imperative, reactive | reactive, imperative? | reactive (imperative, without native image)
Code generation via: mvn plugin | CLI (mn) | mvn archetype
Getting started (Graal native): Guide | Docs | Blog
Resulting src of getting started: schnatterer/quarkus-getting-started | schnatterer/micronaut-getting-started | schnatterer/helidon-getting-started
Size of getting started Docker image: 44 MB | 32 MB | 8 MB
Getting started base Docker image: fedora-minimal | alpine-glibc | scratch
Getting started Dockerfile builds native image (self-contained)? N | N | Y
Size of getting started in scratch Docker image: 7 MB | 27 MB | 8 MB

All three projects are rather young (grandpa Micronaut is about 2 years old as of 05/2019) but have what looks like extensive documentation at first glance. The only thing that made me stumble a bit was that Helidon’s docs don’t return a result for “graal”. I later found a brand-new getting started with Graal on Oracle’s developer blog. Hopefully, this will be added to the docs soon.

There are a couple of notable differences between the three frameworks:

    • Programming style (reactive vs. imperative)
      • Quarkus explicitly supports both (reactive as an extension),
      • Helidon claims to support both, but right now only reactive in conjunction with native images,
      • Micronaut focuses on reactive, but blocking approaches are supported as well (see Graeme Rocher’s comment).
    • Language
      • Micronaut and Quarkus both support Java and Kotlin.
        Micronaut also supports Groovy 🎉 (having Graeme Rocher, the creator of Grails, on board it’s probably a must).
      • Helidon only supports Java.
    • Build tool / code generation
      • Micronaut and Quarkus support Maven and Gradle.
        • Quarkus uses a Maven plugin for code generation (bad luck for Gradle users), whereas
        • Micronaut brings its own CLI tool that thankfully can easily be installed using sdkman.
      • Helidon supports only Maven and has only initial code generation support via a Maven archetype.
    • Kubernetes
      Quarkus supports Kubernetes resource generation out of the box (see edits below).
    • Community
      Hard to tell. The amount of discussion on my tweet about Quarkus makes me think they’re the ones most interested in feedback and getting people involved.

GraalVM native Image / Docker Image

  • The Dockerfiles provided by the getting started guides of Quarkus and Micronaut each require an external Maven build.
    The images are based on fedora-minimal (resulting in a 44 MB compressed image) and alpine-glibc (resulting in a 32 MB compressed image), respectively.
    A base image containing a libc is required because the native image is linked dynamically.
  • Helidon provides a proper self-contained Dockerfile that can be built by simply calling docker build, not requiring anything locally (except Docker, of course).
    Here, the native image is linked statically. Therefore, the binary can run in an empty scratch image (resulting in an 8 MB compressed image).

Bearing in mind that a Java 8 JRE image requires about 100 MB (Debian) or 50 MB (Alpine), 44 MB or even 32 MB for a small webapp is not so bad. OTOH, the 8 MB for the statically linked image are a real revelation, leaving me stunned.

The fact that Helidon plays well with GraalVM shouldn’t be too surprising, as they both are official Oracle products.

Beyond getting started

As Quarkus was the first framework I tried, I wondered why they rely on Fedora and don’t just compile a static binary (later, I learned about some of their reasons on Twitter). So I tried a couple of other images, eventually setting the switch for creating a static binary and using a scratch image. Voilà: it results in a 7 MB image, even a wee bit smaller than the Helidon one. See the table below for an overview of images and their features and sizes (taken from the README of my getting started repo).

Base Image | Shell | Package Manager | libc | Basic Linux Folders | Static Binary | Dockerfile
fedora | ☒ | ☒ | ☒ | ☒ | ☐ | 📄
debian | ☒ | ☒ | ☒ | ☒ | ☐ | 📄
alpine-glibc | ☒ | ☒ | ☒ | ☒ | ☐ | 📄
distroless-base | ☐ | ☐ | ☒ | ☒ | ☒ | 📄
busybox | ☒ | ☐ | ☒ | ☒ | ☒ | 📄
distroless-static | ☐ | ☐ | ☐ | ☒ | ☒ | 📄
scratch | ☐ | ☐ | ☐ | ☐ | ☒ | 📄

I applied more or less the same to Micronaut. Here, the scratch image is only 5 MB smaller than the Alpine one – 27 MB. This is not too surprising, because the plain alpine-glibc image is only about 6 MB. It also felt like the native image generation took longer and needed more memory (observed with docker stats).

As for Helidon’s self-contained scratch image containing only a static binary, there was not much to be done. I only extended the Dockerfile with a Maven cache stage for faster Docker builds.

There’s one last thing I changed in all Dockerfiles: don’t run as root. I used the USER statement in the Dockerfile; docker run -u ... would also be fine. This way, it’s much less likely that potential vulnerabilities (such as CVE-2019-5736 in runc) can be exploited.

So, summing up: Quarkus and Helidon can be used to create really small Docker images, Micronaut’s are “only” small 😉. It’s worth mentioning that I didn’t look at what features are included in those images, so maybe it’s a bit naive to just compare the minimal sizes resulting from the individual getting started guides.

Going even further

If I were to continue my comparison at this point (which I won’t because it’s only a short comparison) I would look into the following features of each framework:

  • integration and unit testing,
  • extensions (e.g. Cloud Native features, Tracing, Monitoring, etc.)

Summary

So, for a new green-field project, which one of the frameworks would I use?

As far as I can tell after completing the getting started guides, all three look promising. As for all architectural decisions, I’d definitely try to build a walking skeleton (technical roundtrip) before finally deciding, in order to gain more field experience and find out what’s beyond getting started.

I’d base this decision on the experience and preferences of the team:

  • reactive vs. imperative
  • Maven vs. Gradle
  • Java vs. Kotlin (or even groovy)
  • APIs – Microprofile, vert.x, RxJava

Personally, I like the fact that Quarkus builds on existing APIs such as Microprofile, so existing experience can be reused for faster results. It also seems to me the most flexible of the three, supporting Java, Kotlin, Maven, Gradle, reactive and imperative.

As for native images, I’d definitely either try them from the beginning or stick to a regular JRE. I suppose switching from plain JRE-based to native could be complicated for an existing app, due to the native image limitations. If the app under development does not have the requirement to be scaled horizontally, this could be an argument for skipping the native image part. But this is beyond the scope of this article.

As for the docker image – it’s obviously not only the size that matters. An image without shell and package manager is always more secure but harder to debug.

 

Edits:

  • 2019/05/17: John Clingan pointed out that Quarkus supports Kubernetes resource generation and multiple reactive extensions
  • 2019/05/19: As commented by Graeme Rocher, Micronaut also supports blocking workloads

Coding Continuous Delivery with Jenkins Pipelines

Starting in their 01/2018 issue, Java aktuell published my four-part article series Coding Continuous Delivery in German. I’m happy to announce that all parts are now available in English, courtesy of Cloudogu.

The series takes you from zero to continuously delivering your software through a sophisticated Jenkins pipeline. It starts with the fundamentals, heading on to advanced topics such as nightly builds, parallel execution, Docker, shared libraries, unit testing, static code analysis with SonarQube and deployment to Kubernetes. All topics are described hands-on, with examples comparing the scripted and the declarative syntax provided by the Jenkins Pipeline plugin.

  1. Jenkins pipeline plugin basics | original article PDF (German)
  2. Performance optimization for the Jenkins Pipeline | original article PDF (German)
  3. Helpful Tools for the Jenkins Pipeline | original article PDF (German)
  4. Static Code Analysis with SonarQube and Deployment on Kubernetes et al. with the Jenkins Pipeline Plugin | original article PDF (German)

The examples to all articles are contained in this GitHub repository: triologygmbh/jenkinsfile and the builds can be seen in action on this Jenkins server: opensource.triology.de.

My awesome colleagues at Cloudogu GmbH and Triology GmbH – thank you so much for your support. Especially my co-author of the first article, Daniel Behrwind, who got this whole thing started.

The pragmatic migration to JUnit 5

This article shows how to get from JUnit 3.x / 4.x to JUnit 5.x as fast as possible.

Just a short clarification of the term “JUnit 5” (from the user guide) before we take off:

JUnit 5 = JUnit Platform + JUnit Jupiter + JUnit Vintage

where

  • Platform provides the Maven and Gradle Plugins and is the extension point for IDE integration,
  • Vintage contains legacy JUnit 4 API and engine,
  • Jupiter contains the new JUnit 5 API and engine.

Step 1 – Run existing tests with JUnit 5 vintage

The first thing we do is replace the existing junit:junit dependency with the following:

<dependency>
	<groupId>org.junit.vintage</groupId>
	<artifactId>junit-vintage-engine</artifactId>
	<version>5.1.0</version>
</dependency>

For Gradle see this article.

For a real world example see this commit.

Note: after upgrading from junit:junit:4.12 to org.junit.vintage:junit-vintage-engine:5.1.0, the execution order of @Rules seems to have changed: they now seem to be executed sequentially (from top to bottom, as defined in the test class).

Step 2 – Getting started with JUnit Jupiter and the Platform

Now let’s go from vintage to the fancy new stuff. Just add the Jupiter dependency and empower surefire to use the JUnit platform:

<dependency>
	<groupId>org.junit.jupiter</groupId>
	<artifactId>junit-jupiter-engine</artifactId>
	<version>5.1.0</version>
</dependency>

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-surefire-plugin</artifactId>
	<!-- ... -->
	<dependencies>
		<dependency>
			<groupId>org.junit.platform</groupId>
			<artifactId>junit-platform-surefire-provider</artifactId>
			<version>1.1.0</version>
		</dependency>
	</dependencies>
</plugin>

Make sure to use either surefire 2.19.1 or 2.21.0+, as there seems to be a bug in the versions in between.

As above, for Gradle see this article.

For a real world example see this commit.

As of now, we’re ready to write new tests with JUnit Jupiter.

Here’s a pragmatic approach to introducing JUnit 5 from here:

  • Use the new API and all the new features for new test classes.
  • Don’t try to migrate all existing tests. It causes a lot of effort with no direct business value.
  • Instead, apply the boy scout rule by gradually migrating existing tests before they need to be changed.

When getting started with JUnit Jupiter, you will notice that some familiar features of JUnit now have a new API or can be achieved using different concepts. Beyond that, there are some new features to explore.

One way to get accustomed to the new API and concepts is to migrate some (not all) existing tests, preferably the most complex ones. This way, you will find out how to use the new concepts and which limitations JUnit Jupiter still has (e.g. JUnit 4 rules that have not been ported to extensions).

Step 3 – Get accustomed to the API changes in JUnit Jupiter

There are some simple API changes but also two major concept changes: Rules and Runners are gone.

Simple API changes

  • the public modifier can be removed (classes and methods)
  • org.junit.Test ➡️ org.junit.jupiter.api.Test
  • org.junit.Assert.assertX ➡️ org.junit.jupiter.api.Assertions.assertX
    (except assertThat)
  • The order of parameters changed in assert methods. The message parameter now comes after the expected and actual parameters! This can be a pitfall when migrating, because a message String might silently become the expected value if you just change the import.
  • assertThat is no longer part of the JUnit API. Instead, just use your favorite assertion library, such as AssertJ, Google Truth or even Hamcrest.
  • @Before ➡️ @BeforeEach
  • @After ➡️ @AfterEach
  • @BeforeClass ➡️ @BeforeAll
  • @AfterClass ➡️ @AfterAll
  • @Ignore ➡️ @Disabled
  • @Category ➡️ @Tag

For a real world example see this commit.
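To illustrate, here’s a minimal sketch of a migrated test (class and method names are made up for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

// Hypothetical test class after migration, highlighting the API changes
class CalculatorTest { // public modifier no longer needed

	@BeforeEach // was: @Before
	void setUp() {
		// ...
	}

	@Test // now: org.junit.jupiter.api.Test instead of org.junit.Test
	void addsNumbers() {
		// Pitfall: the optional message moved from the first to the last parameter
		assertEquals(3, 1 + 2, "1 + 2 should be 3");
	}
}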

Make sure not to mix the APIs, because the tests are run by either the Jupiter or the Vintage engine, which will ignore unknown annotations.

Note that IntelliJ IDEA has a quick fix for migrating JUnit 4 to JUnit Jupiter. However, as of version 2018.1 this seems to only affect @Test, not asserts, exceptions, rules or runners.

Advanced API changes

Basically, Runners and Rules are replaced by Extensions, where one test class can have more than one extension.
However, some Runners have not been ported to Extensions yet. For those, you can try to use @EnableRuleMigrationSupport (see Temporary Folders). If this does not work, you will have to stick with the JUnit 4 API and the Vintage engine for now.

Exceptions & Timeouts

Exceptions no longer need a Rule or the expected parameter in @Test. Instead, the API now provides an assert mechanism.

ExpectedException and @Test(expected = Exception.class) ➡️ assertThrows(Exception.class, () -> method());

For a real world example see this commit.

The same applies to timeouts:

@Test(timeout = 1) ➡️ assertTimeout(Duration.ofMillis(1), () -> method());
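Putting both together, here’s a minimal, self-contained sketch (the validate() method is made up for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;

import org.junit.jupiter.api.Test;

class AssertionExamplesTest {

	@Test
	void failsOnNull() {
		// assertThrows returns the exception, allowing further assertions on it
		IllegalArgumentException e = assertThrows(IllegalArgumentException.class,
				() -> validate(null));
		assertEquals("input must not be null", e.getMessage());
	}

	@Test
	void finishesInTime() {
		assertTimeout(Duration.ofMillis(100), () -> validate("ok"));
	}

	private static void validate(String input) {
		if (input == null) {
			throw new IllegalArgumentException("input must not be null");
		}
	}
}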

Mockito

Instead of the mockito runner, we use the new extension, which comes in a separate module.

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-junit-jupiter</artifactId>
    <version>${mockito.version}</version>
</dependency>

@RunWith(MockitoJUnitRunner.class) ➡️ @ExtendWith(MockitoExtension.class)

For a real world example see this commit.
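A minimal sketch of how the extension is used (the collaborator interface is made up for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class) // was: @RunWith(MockitoJUnitRunner.class)
class GreeterTest {

	interface NameProvider { // made-up collaborator, for illustration only
		String name();
	}

	@Mock
	private NameProvider nameProvider;

	@Test
	void greets() {
		when(nameProvider.name()).thenReturn("World");
		assertEquals("Hello World", "Hello " + nameProvider.name());
	}
}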

Temporary Folders

Until there is an Extension, we can use @EnableRuleMigrationSupport from this module:

<dependency>
	<groupId>org.junit.jupiter</groupId>
	<artifactId>junit-jupiter-migrationsupport</artifactId>
	<version>${junit5.version}</version>
</dependency>

With this, we can use the new API (org.junit.jupiter.api.Test). However, rules and classes must stay public. ClassRules seem not to work.

For a real world example see this commit.
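A minimal sketch of what this looks like in practice:

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.File;

import org.junit.Rule;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.migrationsupport.rules.EnableRuleMigrationSupport;
import org.junit.rules.TemporaryFolder;

// Jupiter test that keeps using the JUnit 4 TemporaryFolder rule
@EnableRuleMigrationSupport
public class TempFolderTest { // class and rule must stay public

	@Rule
	public TemporaryFolder folder = new TemporaryFolder();

	@Test
	void createsFile() throws Exception {
		File file = folder.newFile("test.txt");
		assertTrue(file.exists());
	}
}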

Other Rules

Here are some more rules and their equivalent in JUnit Jupiter.

  • @RunWith(SpringJUnit4ClassRunner.class) ➡️ @ExtendWith(SpringExtension.class)
  • stefanbirkner/system-rules, such as ExpectedSystemExit
    Work in progress! That is, these tests will have to remain on the JUnit 4 API for now.
  • TestLoggerFactoryResetRule from slf4j-test
    No progress to be seen.
    Could be replaced by logback-spike. For a real world example see this commit.
  • Of course this list is non-exhaustive; there are a lot more runners I have not stumbled upon yet.

Step 4 – Make use of new features in JUnit Jupiter

Just using the same features with a different API is boring, right?
JUnit Jupiter offers some long-awaited features that we should make use of!
Here are some examples:
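A sketch of a few of the features I find most useful: @DisplayName, @Nested, grouped assertions with assertAll, and parameterized tests (which need the additional junit-jupiter-params dependency):

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

@DisplayName("A calculator") // readable test names in reports
class NewFeaturesTest {

	@Test
	void groupedAssertions() {
		// assertAll reports all failed assertions, not just the first one
		assertAll(
				() -> assertEquals(2, 1 + 1),
				() -> assertEquals(4, 2 + 2));
	}

	@Nested // nested classes allow for grouping tests
	@DisplayName("when parameterized")
	class Parameterized {

		@ParameterizedTest // requires the junit-jupiter-params module
		@ValueSource(ints = {1, 2, 3})
		void addingZeroIsNeutral(int i) {
			assertEquals(i, i + 0);
		}
	}
}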

Optional: Further Reading

More sustainable Android Software with Project Treble and 6-y LTS Kernels on Android O?

Recently, we could read and hear about Google putting some effort into facilitating easier updates for Android devices:

So, even though three years are not the promised four years, we can still see a trend of increasing years of guaranteed software support on Android phones.

That is a positive trend! I hope that Google takes the gloves off soon and guarantees four upgrades to new Android versions, forcing other vendors to at least start guaranteeing something for their phones. As I said before, this would really be a unique selling point. It would also mean a big leap forward regarding the sustainability of Android phones! Let’s see if any of the Oreo phones being released now will support Android S 😉

Android Logging for Java Professionals – SLF4J and Logback in Android

One of my articles was published in Java Magazin 9.17. I wrote it while working on the nusic Android app; it’s about how to use SLF4J in Android using logback-android. It also features an example and a small library for Android.

Triology GmbH provides an English version of this article and also acquired the original article PDF (in German), which can be found here: Android Logging für Java-Profis – SLF4J und Logback in Android. I’d like to thank my colleagues there for their support.

Android 7 (Nougat) on a 5-year-old phone

A 5-year-old phone is not going to kindergarten; it is more likely to be found in the phone graveyard. One important reason for this is that its manufacturer stopped caring for it at least three years ago. Fortunately, there are other caretakers – people that maintain the latest Android version even for five-year-old phone grannies – in the form of custom ROMs.
So, is it possible to run the latest Android version on a device that’s 5 years old? Hell, yes! This article shows how.

My old HTC One S (aka ville, the older variant with the S4 processor, not the “C2” with the S3 processor), launched in April 2012, runs terribly laggy on its latest stock Android 4 firmware, last updated in 2013 – four years ago! That’s why it was retired a long time ago. At the beginning of 2017, when the first unofficial builds of LineageOS turned up, I was surprised to see that the HTC One S was among them. I was wondering: how can such an old device run the latest Android version? I had to see for myself.

TL;DR: Android Nougat works surprisingly well, way better than the latest stock version, but HTC makes it a hell of a way to get there. Let’s see what has to be done.

Note that all files on your device will be deleted during the process!

Unlock Bootloader

To be able to flash software to our HTC device, HTC forces us to create an account on htcdev.com and register our device there. Then we receive an unlock code as a zip file, which we can flash onto our device.


adb reboot bootloader

fastboot flash unlocktoken Unlock_code.bin

For all Windows 8 (and apparently also Windows 10) users who have problems getting the fastboot driver to work while ADB works: this little registry edit did the trick for me.

In the bootloader, make sure to note your HBoot version; it will be required later in the process.

HBoot 2.16 – we’re good!

Flash TWRP

Download TWRP from here to your PC and flash it as follows:


fastboot flash recovery twrp.img

fastboot reboot

Check HBoot version

I had to find out the hard way (by flashing Lineage 14.1) that there are requirements regarding the HBoot (HTC bootloader) version: HBoot 2.16 is required.

So better check in advance. If you’re on HBoot 2.16, flash Lineage and enjoy. If not, let the games begin.

S-OFF & root

Upgrading HBoot needs S-OFF (security off, which enables writing to the /system directory). On top of unlocking our bootloader at htcdev.com, we need to risk bricking our phone to enable writing to the /system directory of our device.
Those two things are the reasons why I most likely won’t ever buy an HTC device again.
Anyway, with the One S, we’re lucky! S-OFF takes a bit of time but not much effort and comes at almost no risk, thanks to rumrunner (a good overview of other S-OFF mechanisms is described here). Rumrunner makes it easy to get S-OFF, but it needs root access.

What? Why are we doing this again? We want to upgrade HBoot, that’s why we need S-OFF; in order to get S-OFF, we need root. That’s the last unpleasant surprise, promise!
Lucky again: root access can be gained easily on the HTC One S. I used Superboot: just download it and execute the script on your PC while the phone is in bootloader mode. Root done. A more detailed description can be found here.

Then just download rumrunner, follow the instructions, wait for about 10 minutes (don’t worry, your phone will restart about one million times) and you’re S-OFF!

Upgrade HBoot

We’re getting closer. Now, with S-OFF, we can flash HBoot 2.16. I followed these instructions and found the firmware here.

Caution: after flashing HBoot 2.16, the internal memory is really small – only 50 MB left. Obviously, Android 4 and HBoot 2.16 are not good friends. Don’t worry, this is solved once Android 7 is flashed.

Flash Android 7.1.2

It’s finally going to happen: we’ll flash Lineage 14.1, i.e. Android 7.1.2 (maintained by moozon, thanks so much!). But how? The internal memory is only 50 MB, and the image is almost 300 MB! I copied the image to a USB thumb drive, connected it to the phone via a USB OTG adapter and flashed it via TWRP recovery. TWRP also offers ADB sideloading, which might be an alternative if you don’t have a USB OTG adapter at hand.

Once flashed, you can finally enjoy the latest Android version on your “antique” phone. 🥂

Optional finishing and more info

Root is built-in but disabled by default. If you want root access, you can enable it in developer options.

If you need Google services, get ARM | 7.1 | pico (my recommendation) from opengapps.org and flash it via TWRP.

If you need more info, you might get started here: [LINEAGE OS] HTC One S Lineage OS 14.1, Android Nougat 7.1 ROM

Final thoughts

The One S with Android 7.1.2 almost works better than my HTC One M8 with Android 6! Why must HTC make it so hard to get there? And why are manufacturers not able to support their devices for longer than 2 years, while the community or even single individuals are able to do so for more than 5 years?

[EDIT 2017/06/13: Google just published the guaranteed updates for its devices: two years of upgrades and another year of security updates. While this is at least some formal guarantee (other manufacturers just don’t guarantee anything), it still is far from the five years provided by the community for my HTC One S…]

I’m still waiting for a manufacturer that guarantees support for its devices for many years. That would really be a unique selling point! I can’t believe no manufacturer uses this USP in the highly competitive market for mobile devices. In addition, it would be so much more sustainable and contribute to a green(er) IT. While the whole world complains about planned obsolescence, why is there no manufacturer that uses this fact for positive marketing?

[EDIT 2017/07/01: HMD Global (owner of the Nokia brand) just announced that they intend to provide plain vanilla Android phones with support “even after two years“. I’m looking forward to their next announcements. My calls might just have been heard 👂]

[EDIT 2017/10/05: Google talks about four Android version upgrades being possible from Android O on and releases the Pixel 2 with three years of support. See More sustainable Android Software with Project Treble and 6-y LTS Kernels on Android O?]

What is Google doing, by the way? They introduced SafetyNet, providing the opportunity for developers to hide their apps on Google Play from specific devices, such as the ones running custom ROMs. So now, no more Netflix etc. for custom ROMs. Effectively, this is another punch for the custom ROM scene after the death of CyanogenMod.

Let’s just hope custom ROM developers will not be discouraged by these facts and continue their important work, making success stories such as the one of my HTC One granny possible.

Thank you, Android community, XDA forums, custom ROM developers, etc. for sharing your work and never giving up! 👍