Short comparison: Building Graal Native Images with Quarkus, Micronaut and Helidon

The technological innovations of the last few years, such as the adoption of containers, cloud-native technologies, the microservice architectural style, the inception of GraalVM and the end of Java EE (as we know it), have energized the Java framework market.

As of May 2019, there are at least three frameworks supporting GraalVM native images out of the box, targeting cloud-native microservices:

  • Quarkus,
  • Micronaut and
  • Helidon.

As building GraalVM native images is a bit challenging, I was curious to find out how these three frameworks live up to their promises. I worked through the respective getting-started guides and wrote down some similarities and differences, resulting in this short (and surely incomplete) comparison of the three frameworks. See the following table for an overview.

General comparison

| | Quarkus | Micronaut | Helidon |
| --- | --- | --- | --- |
| Core project source | quarkusio/quarkus | micronaut-projects/micronaut-core | oracle/helidon |
| Website | quarkus.io | micronaut.io | helidon.io |
| Started/backed by | Red Hat | Object Computing | Oracle |
| First commit | 2018-06-22 | 2017-03-06 | 2018-08-28 |
| GitHub stars (05/2019) | 1693 | 2283 | 1371 |
| GitHub contributors (05/2019) | 88 | 120 | 26 |
| # Commits (05/2019) | 3970 | 5907 | 617 |
| Supported languages | Java, Kotlin | Java, Groovy, Kotlin | Java |
| Supported build tools | Maven, Gradle | Maven, Gradle | Maven |
| Supported APIs for GraalVM native images | MicroProfile, Vert.x | Micronaut, ReactiveX/RxJava | Helidon SE (MicroProfile only without native images) |
| Programming paradigms for GraalVM native images | Imperative, reactive | Reactive, imperative? | Reactive (imperative only without native images) |
| Code generation via | Maven plugin | CLI (mn) | Maven archetype |
| Getting started (GraalVM native) | Guide | Docs | Blog |
| Resulting source of getting started | schnatterer/quarkus-getting-started | schnatterer/micronaut-getting-started | schnatterer/helidon-getting-started |
| Size of getting-started Docker image (compressed) | 44 MB | 32 MB | 8 MB |
| Getting-started base Docker image | fedora-minimal | alpine-glibc | scratch |
| Getting started uses a statically linked native image? | N | N | Y |
| Size of getting started in a scratch Docker image (compressed) | 7 MB | 27 MB | 8 MB |

All three projects are rather young (grandpa Micronaut is about 2 years old as of 05/2019) but have what looks like extensive documentation at first glance. The only thing that made me stumble a bit was that searching Helidon’s docs for “graal” doesn’t return a result. I later found a brand new getting-started guide for GraalVM on Oracle’s developer blog. Hopefully, this will be added to the docs soon.

There are a couple of notable differences between the three frameworks:

    • Programming style (reactive vs. imperative)
      • Quarkus explicitly supports both (reactive as an extension),
      • Helidon claims to support both, but right now only reactive works in conjunction with native images
      • Micronaut focuses on reactive, but blocking approaches are supported as well (see Graeme Rocher’s comment)
    • Language
      • Micronaut and Quarkus both support Java and Kotlin.
        Micronaut also supports Groovy 🎉 (having Graeme Rocher, the creator of Grails, on board it’s probably a must)
      • Helidon only supports Java
    • Build tool / code generation
      • Micronaut and Quarkus support Maven and Gradle.
        • Quarkus uses a Maven plugin for code generation (bad luck for Gradle users) whereas
        • Micronaut brings its own CLI tool that thankfully can easily be installed using sdkman.
      • Helidon only supports Maven and offers only initial code-generation support via a Maven archetype.
    • Kubernetes
      • Quarkus supports generating Kubernetes resources out of the box (see the edit notes below)
    • Community
      Hard to tell. The amount of discussion on my tweet about Quarkus makes me think they’re the ones most interested in feedback and in getting people involved.

GraalVM native Image / Docker Image

  • The Dockerfiles provided by the getting-started guides of Quarkus and Micronaut each require an external Maven build.
    The images are based on fedora-minimal (resulting in a 44 MB compressed image) and alpine-glibc (resulting in a 32 MB compressed image), respectively.
    A base image containing a libc is required because the native image is linked dynamically.
  • Helidon provides a proper self-contained Dockerfile that can be built by simply calling docker build, not requiring anything locally (except Docker, of course).
    Here, the native image is linked statically. Therefore, the binary can run in an empty scratch image (resulting in an 8 MB compressed image).

Bearing in mind that a Java 8 JRE image requires about 100 MB (Debian) or 50 MB (Alpine), 44 MB or even 32 MB for a small web app is not so bad. On the other hand, the 8 MB for the statically linked image is a real revelation, leaving me stunned.

The fact that Helidon plays well with GraalVM shouldn’t be too surprising, as they both are official Oracle products.

Beyond getting started

As Quarkus was the first framework I tried, I wondered why they rely on fedora and don’t just compile a static binary (later, I learned about some of their reasons on Twitter). So I tried a couple of other base images, eventually setting the switch for creating a static binary and using a scratch image. Voilà: it results in a 7 MB image, even a wee bit smaller than the Helidon one. See the table below for an overview of images and their features and sizes (taken from the README of my getting-started repo).

Base images compared (each with size, shell, package manager, libc, basic Linux folders, whether a static binary is required, and a link to the corresponding Dockerfile): fedora, debian, alpine-glibc, distroless-base, busybox, distroless-static and scratch.
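Roughly, the pattern for building a statically linked native image and putting it into a scratch image looks like this. This is only a sketch, not the exact Dockerfile from my repo; the GraalVM image tag, artifact name and build commands are assumptions:

# Build stage: GraalVM provides the native-image tool (image tag is an assumption)
FROM oracle/graalvm-ce:19.0.0 AS build
RUN gu install native-image
WORKDIR /build
COPY . .
# Build the jar, then link the native image statically (--static bundles libc),
# so the resulting binary has no runtime dependencies at all
RUN ./mvnw package && \
    native-image --static --no-server -jar target/getting-started-runner.jar app

# Final stage: nothing but the statically linked binary
FROM scratch
COPY --from=build /build/app /app
ENTRYPOINT ["/app"]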

I applied more or less the same approach to Micronaut. Here, the scratch image is only 5 MB smaller than the alpine one – 27 MB. This is not too surprising, because the plain alpine-glibc image is only about 6 MB. It also felt like the native image generation took longer and needed more memory (observed with docker stats).

As for Helidon’s self-contained, scratch-based image containing only a static binary, there was not much left to do. I only extended the Dockerfile with a Maven cache stage for faster Docker builds.

There’s one last thing I changed in all Dockerfiles: don’t run as root. I used the USER statement in the Dockerfile; docker run -u ... would also work. This way, it’s much less likely that potential vulnerabilities (such as CVE-2019-5736 in runc) can be exploited.
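In the Dockerfile this boils down to a single statement in the final stage (1000 is just an arbitrary unprivileged UID I picked for illustration; scratch images have no user database, so a numeric ID is used):

# Run the binary as an arbitrary unprivileged user instead of root
USER 1000
ENTRYPOINT ["/app"]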

So, summing up: Quarkus and Helidon can be used to create really small Docker images, while Micronaut’s is “only” small 😉. It’s worth mentioning that I didn’t look into which features are included in those images, so maybe it’s a bit naive to just compare the minimal sizes resulting from the individual getting-started guides.

Going even further

If I were to continue my comparison at this point (which I won’t because it’s only a short comparison) I would look into the following features of each framework:

  • integration and unit testing,
  • extensions (e.g. Cloud Native features, Tracing, Monitoring, etc.)

Summary

So, for a new green-field project, which one of these frameworks would I use?

As far as I can tell after completing the getting started, all three look promising. As for all architectural decisions, I’d definitely try to build a walking skeleton (technical roundtrip) before finally deciding, in order to gain more field experience and find out what’s beyond getting started.

I’d base this decision on the experience and preferences of the team regarding:

  • reactive vs. imperative
  • Maven vs. Gradle
  • Java vs. Kotlin (or even groovy)
  • APIs – Microprofile, vert.x, RxJava

Personally, I like the fact that Quarkus builds on existing APIs such as Microprofile, so existing experience can be reused for faster results. It also seems to me the most flexible of the three, supporting Java, Kotlin, Maven, Gradle, reactive and imperative.

As for native images, I’d definitely either try it from the beginning or stick to a regular JRE. I suppose switching from plain JRE-based to native could be complicated for an existing app, due to the native image limitations. If the app under development does not have the requirement to be scaled horizontally, this could be an argument for skipping the native image part. But this is beyond the scope of this article.

As for the docker image – it’s obviously not only the size that matters. An image without shell and package manager is always more secure but harder to debug.

 

Edits:

  • 2019/05/17: John Clingan pointed out that Quarkus supports Kubernetes resource generation and multiple reactive extensions
  • 2019/05/19: As commented by Graeme Rocher, Micronaut also supports blocking workloads

The pragmatic migration to JUnit 5

This article shows how to get from JUnit 3.x/4.x to JUnit 5.x as fast as possible.

Just a short clarification of the term “JUnit 5” (from the user guide) before we take off:

JUnit 5 = JUnit Platform + JUnit Jupiter + JUnit Vintage

where

  • Platform provides the Maven and Gradle Plugins and is the extension point for IDE integration,
  • Vintage contains legacy JUnit 4 API and engine,
  • Jupiter contains the new JUnit 5 API and engine.

Step 1 – Run existing tests with JUnit 5 vintage

The first thing we do is replace the existing junit:junit dependency with the following:

<dependency>
	<groupId>org.junit.vintage</groupId>
	<artifactId>junit-vintage-engine</artifactId>
	<version>5.1.0</version>
</dependency>

For Gradle see this article.

For a real world example see this commit.

Note: After upgrading from junit:junit:4.12 to org.junit.vintage:junit-vintage-engine:5.1.0, the execution order of @Rules seems to have changed: they now seem to be executed sequentially (from top to bottom, as defined in the test class).
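To illustrate what that means, here is a small sketch with two arbitrary rules; with the vintage engine the before() methods appeared to run top to bottom:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExternalResource;

public class RuleOrderTest {

    // With the vintage engine, this rule appeared to be applied first ...
    @Rule
    public ExternalResource first = new ExternalResource() {
        @Override
        protected void before() {
            System.out.println("first");
        }
    };

    // ... and this one second, i.e. top to bottom as defined in the class
    @Rule
    public ExternalResource second = new ExternalResource() {
        @Override
        protected void before() {
            System.out.println("second");
        }
    };

    @Test
    public void test() {
        System.out.println("test");
    }
}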

Step 2 – Getting started with JUnit Jupiter and the Platform

Now let’s go from Vintage to the fancy new stuff. Just add the Jupiter dependency and empower Surefire to use the JUnit Platform:

<dependency>
	<groupId>org.junit.jupiter</groupId>
	<artifactId>junit-jupiter-engine</artifactId>
	<version>5.1.0</version>
</dependency>

<plugin>
	<groupId>org.apache.maven.plugins</groupId>
	<artifactId>maven-surefire-plugin</artifactId>
	<!-- ... -->
	<dependencies>
		<dependency>
			<groupId>org.junit.platform</groupId>
			<artifactId>junit-platform-surefire-provider</artifactId>
			<version>1.1.0</version>
		</dependency>
	</dependencies>
</plugin>

Make sure to use either Surefire 2.19.1 or 2.21.0+, as there seems to be a bug in the versions in between.

As above, for Gradle see this article.

For a real world example see this commit.

As of now, we’re ready to write new tests with JUnit Jupiter.

Here’s a pragmatic approach to introducing JUnit 5 from here:

  • Use the new API and all the new features for new test classes.
  • Don’t try to migrate all existing tests. It causes a lot of effort with no direct business value.
  • Instead, apply the boy scout rule by gradually migrating existing tests whenever they need to be changed anyway.

When getting started with JUnit Jupiter, you will notice that some familiar features of JUnit now have a new API or are achieved using different concepts. After that, there are some new features to explore.

One way to get accustomed to the new API and concepts is to migrate some (not all) existing tests, preferably the most complex ones. This way, you will find out how to use the new concepts and which limitations JUnit Jupiter still has (e.g. JUnit 4 rules that have not been ported to extensions yet).

Step 3 – Get accustomed to the API changes in JUnit Jupiter

There are some simple API changes but also two major concept changes: Rules and Runners are gone.

Simple API changes

  • The public modifier can be removed (classes and methods)
  • org.junit.Test ➡️ org.junit.jupiter.api.Test
  • org.junit.Assert.assertX ➡️ org.junit.jupiter.api.Assertions.assertX
    (except assertThat)
  • The order of parameters has changed in the assert methods: the message parameter now comes after the expected and actual parameters! This can be a pitfall when migrating, because a message string might silently turn into the expected value if you only change the import (see the sketch after this list).
  • assertThat is no longer part of the JUnit API. Instead, just use your favorite assertion library, such as AssertJ, Google Truth or even Hamcrest.
  • @Before ➡️ @BeforeEach
  • @After ➡️ @AfterEach
  • @BeforeClass ➡️ @BeforeAll
  • @AfterClass ➡️ @AfterAll
  • @Ignore ➡️ @Disabled
  • @Category ➡️ @Tag
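To illustrate the parameter-order pitfall, here is a minimal sketch (the class and values are made up):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ParameterOrderTest {

    @Test
    void messageMovesToTheEnd() {
        // JUnit 4: assertEquals("answer should be 42", 42, compute());
        // JUnit Jupiter: the optional message is now the last parameter
        assertEquals(42, compute(), "answer should be 42");
    }

    private int compute() {
        return 42;
    }
}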

For a real world example see this commit.

Make sure not to mix the APIs: each test is run by either the Jupiter or the Vintage engine, and annotations from the other API will simply be ignored.

Note that IntelliJ IDEA has a quick fix for migrating JUnit 4 to JUnit Jupiter. However, as of version 2018.1 this seems to only affect @Test, not asserts, exceptions, rules or runners.

Advanced API changes

Basically, Runners and Rules are replaced by Extensions, where one test class can have more than one extension.
However, some Rules and Runners have not been ported to Extensions yet. For some rules you can try @EnableRuleMigrationSupport (see Temporary Folders below). If this does not work, you will have to stick with the JUnit 4 API and the Vintage engine for now.
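To give an idea of the new extension model, here is a minimal sketch of a custom extension (the logging is just for illustration):

import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.ExtensionContext;

// Hooks into the lifecycle of every test method, similar to what a Rule or Runner did before
public class LogTestNameExtension implements BeforeEachCallback {

    @Override
    public void beforeEach(ExtensionContext context) {
        System.out.println("Starting " + context.getDisplayName());
    }
}

It is registered on a test class via @ExtendWith(LogTestNameExtension.class), and a class can declare several extensions at once.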

Exceptions & Timeouts

Exceptions no longer need a Rule or the expected param in @Test. Instead, the API provides an assert mechanism now.

ExpectedException and @Test(expected = Exception.class) ➡️ assertThrows(Exception.class, () -> method());

For a real world example see this commit.

The same applies to timeouts:

@Test(timeout = 1) ➡️ assertTimeout(Duration.ofMillis(1), () -> method());
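Put together in a small sketch (the method under test is made up):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;
import org.junit.jupiter.api.Test;

class ExceptionAndTimeoutTest {

    @Test
    void throwsOnNegativeInput() {
        // Replaces @Test(expected = ...) and the ExpectedException rule
        IllegalArgumentException thrown =
                assertThrows(IllegalArgumentException.class, () -> squareRoot(-1));
        // Unlike in JUnit 4, the caught exception can be inspected directly
        assertEquals("negative input", thrown.getMessage());
    }

    @Test
    void finishesInTime() {
        // Replaces @Test(timeout = ...)
        assertTimeout(Duration.ofMillis(100), () -> {
            squareRoot(2);
        });
    }

    private double squareRoot(int input) {
        if (input < 0) {
            throw new IllegalArgumentException("negative input");
        }
        return Math.sqrt(input);
    }
}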

Mockito

Instead of the mockito runner, we use the new extension, which comes in a separate module.

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-junit-jupiter</artifactId>
    <version>${mockito.version}</version>
</dependency>

@RunWith(MockitoJUnitRunner.class) ➡️ @ExtendWith(MockitoExtension.class)
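A minimal test using the extension could look like this (the service and repository are made-up stand-ins, included only so the sketch compiles):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

// Replaces @RunWith(MockitoJUnitRunner.class)
@ExtendWith(MockitoExtension.class)
class UserServiceTest {

    @Mock
    UserRepository repository;

    @Test
    void findsUserByName() {
        when(repository.findName(1L)).thenReturn("Jane");

        UserService service = new UserService(repository);

        assertEquals("Jane", service.nameOf(1L));
    }

    // Minimal stand-ins; in a real project these are production classes
    interface UserRepository {
        String findName(long id);
    }

    static class UserService {
        private final UserRepository repository;

        UserService(UserRepository repository) {
            this.repository = repository;
        }

        String nameOf(long id) {
            return repository.findName(id);
        }
    }
}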

For a real world example see this commit.

Temporary Folders

Until there is an Extension, we can use @EnableRuleMigrationSupport from this module:

<dependency>
	<groupId>org.junit.jupiter</groupId>
	<artifactId>junit-jupiter-migrationsupport</artifactId>
	<version>${junit5.version}</version>
</dependency>

With this, we can use the new API (org.junit.jupiter.api.Test). However, rules and test classes must stay public. ClassRules do not seem to work.
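A sketch of how this looks in practice:

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.File;
import java.io.IOException;
import org.junit.Rule;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.migrationsupport.rules.EnableRuleMigrationSupport;
import org.junit.rules.TemporaryFolder;

// Jupiter API, but the JUnit 4 TemporaryFolder rule keeps working
@EnableRuleMigrationSupport
public class TempFolderTest {

    // As noted above, the rule (and the class) must remain public
    @Rule
    public TemporaryFolder folder = new TemporaryFolder();

    @Test
    void createsFileInTemporaryFolder() throws IOException {
        File file = folder.newFile("test.txt");
        assertTrue(file.exists());
    }
}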

For a real world example see this commit.

Other Rules

Here are some more rules and their equivalent in JUnit Jupiter.

  • @RunWith(SpringJUnit4ClassRunner.class) ➡️ @ExtendWith(SpringExtension.class) (see the sketch after this list)
  • stefanbirkner/system-rules, such as ExpectedSystemExit
    Work in progress! That is, these tests will have to remain on the JUnit 4 API for now.
  • TestLoggerFactoryResetRule from slf4j-test
    No progress to be seen.
    Could be replaced by logback-spike. For a real world example see this commit.
  • Of course, this list is non-exhaustive; there are a lot more runners and rules I have not stumbled upon yet.
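For the Spring case, the migration is mostly swapping the annotation. A minimal sketch (requires spring-test 5.x on the classpath; the configuration and bean are made up):

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit.jupiter.SpringExtension;

// Replaces @RunWith(SpringJUnit4ClassRunner.class)
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = GreetingServiceTest.TestConfig.class)
class GreetingServiceTest {

    @Autowired
    String greeting;

    @Test
    void contextProvidesGreeting() {
        assertNotNull(greeting);
    }

    @Configuration
    static class TestConfig {
        @Bean
        String greeting() {
            return "hello";
        }
    }
}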

Step 4 – Make use of new features in JUnit Jupiter

Just using the same features with a different API is boring, right?
JUnit Jupiter offers some long-awaited features that we should make use of!
Here are some examples:
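For instance (to name a few of the commonly cited ones): @DisplayName, @Nested test classes, grouped assertions with assertAll() and @RepeatedTest; parameterized tests are available via the additional junit-jupiter-params module. A condensed sketch:

import static org.junit.jupiter.api.Assertions.assertAll;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.RepeatedTest;
import org.junit.jupiter.api.Test;

@DisplayName("A shopping cart")
class ShoppingCartTest {

    @Test
    @DisplayName("groups related assertions and reports all failures at once")
    void groupedAssertions() {
        assertAll(
                () -> assertEquals(2, 1 + 1),
                () -> assertTrue("cart".startsWith("c"))
        );
    }

    @RepeatedTest(3)
    @DisplayName("can repeat tests")
    void repeatedTest() {
        assertTrue(Math.random() >= 0.0);
    }

    @Nested
    @DisplayName("when empty")
    class WhenEmpty {

        @Test
        @DisplayName("has size zero")
        void hasSizeZero() {
            assertEquals(0, 0);
        }
    }
}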

Optional: Further Reading

Automatic checks for vulnerabilities in Java project dependencies

Java aktuell published an article I wrote about a topic from my work at TRIOLOGY GmbH.

You can find an English version on the TRIOLOGY Blog: Automatic checks for vulnerabilities in Java project dependencies. The article shows an approach to keeping your Java project dependencies free of known vulnerabilities (e.g. CVEs) using the OWASP Dependency-Check with Jenkins and Maven. There is also an example project on GitHub.

The original article PDF (in German) is available for download here: Automatisierte Überprüfung von Sicherheitslücken in Abhängigkeiten  von Java-Projekten.

TRIOLOGY also published a short Q&A on the article, which can be found here.

Maven: Create a more sophisticated build number

Earlier this year, while working on a project for TRIOLOGY GmbH, I once again used Maven to write a version name into an application, using the mechanism described in my post Maven: Create a simple build number. As a more sophisticated version name was required for this project, we extended it with a timestamp, SCM information (branch and commit) and a build number, and also created a special name for releases. You can find a how-to here – Version names with Maven: Creating the version name – which is the first part of a small series of blog posts on this topic.

The second part shows how the version name can be read from within the application. While writing the examples for the post, I wondered how many times I must have implemented reading a version name from a file in Java. Way too often! So I decided that this would be the very last time, and extracted the logic into a small library: versionName, available on GitHub. What it does and how to use it is described in the second part of the post: Version names with Maven: Reading the version name.

Hopefully, this will be useful for someone else. Funny enough, in the new project I’m on, I’m about to reuse it once again. I’m glad I don’t have to write it again. Here’s to reusability 🍺

Building GitHub projects with Jenkins, Maven and SonarQube 5.2 on OpenShift

Time for an update of the post Building GitHub projects with Jenkins, Maven and SonarQube 4.1.1 on OpenShift, because SonarQube 5.2 is out: It’s the first version since 4.1.1 that can be run on OpenShift. That is, it’s the first version of SonarQube 5 and the first one that contains Elasticsearch and many other features that are now available on OpenShift!
Interested? Then let’s see how to set up SonarQube on OpenShift.

  • If you’re starting from scratch, just skip to this section.
  • If you got a running instance of SonarQube
    • make sure to back up your instance before you continue:
      rhc snapshot save -a <application name> --filepath <backup destination>
      or
      ssh <UID>@<application name>-<yourAccount>.rhcloud.com 'snapshot' > sonar.tar.gz
    • Then pull the git repository just like in step 2,
    • wait until the app has started and visit
      https://sonar-<yourAccount>.rhcloud.com/setup

      SonarQube will update its database during the process.

    • If you followed this post to set up your SonarQube instance and therefore use an SSH tunnel to access the SonarQube database, note that you can now get rid of this workaround. From SonarQube 5.2 on, analyses can be run without direct access to the database.
      That is, you can also remove the database connection from the configuration of the SonarQube plugin in Jenkins.

Install new SonarQube instance

To install SonarQube 5.2, execute the following steps on your machine:

  1. rhc app create sonar diy-0.1 postgresql-9.2

    Make sure to remember the login and passwords!

  2. git rm -r diy .openshift misc README.md
    git remote add upstream -m master https://github.com/schnatterer/openshift-sonarqube.git
    git pull -s recursive -X theirs upstream master
    git push
    
  3. Login to your SonarQube instance at
    http://sonar-<yourAccount>.rhcloud.com/

    Note that the initial setup may take some minutes. So be patient.
    The default login and password are admin / admin.
    You might want to change the password right away!

Basic Jenkins installation

Basically, the following is an updated (and a lot simpler) version of my post about SonarQube 4.1.1.

  1. Create Jenkins app
    rhc app create jenkins jenkins-1
  2. Install Plugins
    Browse to Update Center

    https://jenkins-<yourAccount>.rhcloud.com/pluginManager/advanced

    and hit Check Now (as described here).
    Then go to the Available tab and install

    1. Sonar Plugin,
    2. GitHub plugin,
    3. embeddable-build-status (if you’d like to include those nifty build badges in your README.md).

    Then hit Install without restart or Download and install after restart. If necessary, you can restart your app anytime like so

    rhc app restart -a jenkins
  3. Set up the Maven settings.xml in a writable location.
    • SSH to Jenkins
      mkdir $OPENSHIFT_DATA_DIR/.m2
      echo -e "&amp;amp;lt;settings&amp;amp;gt;&amp;amp;lt;localRepository&amp;amp;gt;$OPENSHIFT_DATA_DIR/.m2&amp;amp;lt;/localRepository&amp;amp;gt;&amp;amp;lt;/settings&amp;amp;gt;" &amp;amp;gt; $OPENSHIFT_DATA_DIR/.m2/settings.xml
      
    • Browse to Configure System
      https://jenkins-<yourAccount>.rhcloud.com/configure

      Default settings provider: Settings file in file system
      File path=$OPENSHIFT_DATA_DIR/.m2/settings.xml

  4. Either see my post on how to introduce a dedicated slave node to this setup or
    set up the Jenkins master to run its own builds as follows (not recommended on small gears, as you might run out of memory pretty fast during builds):
    Go to Configure System

    https://jenkins-<yourAccount>.rhcloud.com/configure

    and set
    # of executors: 1

  5. Set up the Sonar plugin (the following is based on SonarQube Plugin 2.3 for Jenkins)
    On the Jenkins frontend, go to Configure System

    https://jenkins-<yourAccount>.rhcloud.com/configure
    • Global properties,
      tick Environment variables
      Click Add
      name=SONAR_USER_HOME
      value=$OPENSHIFT_DATA_DIR
      See here for more information.
    • Setup the Runner:
      Navigate to SonarQube Runner
      Click Add SonarQube Runner
      Name=<be creative>
    • Then set up the plugin itself
      Navigate to SonarQube
      tick Enable injection of SonarQube server configuration as build environment variables
      and set the following
      Name=<be creative>
      Server URL:

      http://sonar-<yourAccount>.rhcloud.com/

      Sonar account login: admin
      Sonar account password: <your pw> (default: admin)

    • Hit Save

Configure build for a repository

Now let’s set up our first build.

  1. Go to
    https://jenkins-<yourAccount>.rhcloud.com/view/All/newJob

    Item name: <your Project name>
    (Unfortunately, Maven projects do not work due to OpenShift’s restrictions.)
    Hit OK

  2. On the next Screen
    GitHub project:

    https://github.com/<your user>/<your repo>/

    Source Code Management:

    https://github.com/<your user>/<your repo>.git

    Branch Specifier (blank for 'any'): origin/master
    Build Triggers: Tick  Build when a change is pushed to GitHub
    Build Environment: Tick Prepare SonarQube Scanner environment
    Build | Execute Shell

    cd $WORKSPACE
    # Start the actual build
    mvn clean package  $SONAR_MAVEN_GOAL --settings $OPENSHIFT_DATA_DIR/.m2/settings.xml -Dsonar.host.url=$SONAR_HOST_URL
    
  3. I’d also recommend the following actions
    Post-build Actions| Add post-build action| Publish JUnit test result report
    Test report XMLs=target/surefire-reports/TEST-*.xml
    Post-build Actions| Add post-build action| E-mail Notification
    Recipients=<your email address>
  4. Hit Apply.
  5. Finally, press Save and start your first build. Check the Jenkins console output for errors. If everything succeeds, you should see the result of the project’s analysis on SonarQube’s dashboard.

Using Custom Maven / JDK version when building with Jenkins on OpenShift

[EDIT (2016-06-02): OpenShift now provides different JDK “alternatives”, e.g.

/etc/alternatives/java_sdk_1.8.0

So you might want to skip the steps below regarding a custom JDK. The steps described for using a custom Maven still apply, however.

]

In previous posts I pointed out how to build GitHub projects with Jenkins, Maven and SonarQube and how to run these builds on dedicated Jenkins slaves. The following shows how to replace the “stock” versions of Maven and the JDK that are provided by OpenShift.

At the time of writing, OpenShift features Maven 3.0.4 and OpenJDK Server 1.7.0_85. Why would you want to change those? The best example is a Java 8 project to be built on Jenkins. Can we just advise Jenkins to download the newest Oracle JDK and be good to go? Nope, it’s not that simple on OpenShift! Jenkins does download the new JDK and sets the JAVA_HOME variable and the correct PATH, but Maven is always going to use the stock JDK. Why? Running this command provides the answer:

$ cat `which mvn`
#!/bin/sh
prog=$(basename $0)
export JAVA_HOME=/usr/lib/jvm/java
export JAVACMD=$JAVA_HOME/bin/java
export M2_HOME=/usr/share/java/apache-maven-3.0.4
exec $M2_HOME/bin/$prog "$@"

The stock Maven sets its own environment variables, which cannot be overridden by Jenkins!

So, in order to exchange the JDK, we need to exchange Maven first.

  • SSH to the machines where your builds are executed (e.g. your slave node). The following example shows what to do for Maven 3.3.3:
    cd $OPENSHIFT_DATA_DIR
    mkdir maven
    cd maven
    wget http://apache.lauf-forum.at/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
    tar -xvf apache-maven-3.3.3-bin.tar.gz
    rm apache-maven-3.3.3-bin.tar.gz
    
  • Edit maven config
    vi $OPENSHIFT_DATA_DIR/maven/apache-maven-3.3.3/conf/settings.xml
    

    Add the following to the <settings> tag (replace <UID> by your OpenShift UID first)

    <localRepository>/var/lib/openshift/<UID>/app-root/data/.m2</localRepository>

    (press i for insert mode, make the change, then press Esc, type :wq and finally press Return)

  • Browse to
    https://jenkins-<yourAccount>.rhcloud.com/configure

    Set Environment variables
    PATH=$OPENSHIFT_DATA_DIR/maven/apache-maven-3.3.3/bin:$PATH
    M2_HOME=$OPENSHIFT_DATA_DIR/maven/apache-maven-3.3.3

  • And that’s it, your builds are now running on the custom Maven!
    This allows for using a specific JDK in Jenkins: you could just choose a specific JDK via the Jenkins console. This is convenient, but has one disadvantage: it takes a lot of memory (approx. 600 MB per JDK), because the JDK is stored twice – compressed in the cache to be sent to the slave and again uncompressed to be used on the master. If you have enough memory, you’re done here.

    However, in case you’re running a small gear with only 1 GB of memory, you might want to save a bit of your precious memory. The following example shows how to do so for JDK 8 update 51 build 16.
    On SSH:

    cd $OPENSHIFT_DATA_DIR
    mkdir jdk
    cd jdk
    wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u51-b16/jdk-8u51-linux-x64.tar.gz
    tar -xvf jdk-8u51-linux-x64.tar.gz
    rm jdk-8u51-linux-x64.tar.gz
    
  • Then go to Jenkins
    https://jenkins-<yourAccount>.rhcloud.com/configure

    JDK installations | JDK
    Name=SlaveOnly-Custom-JDK8u51
    JAVA_HOME=$OPENSHIFT_DATA_DIR/jdk/jdk1.8.0_51

Building GitHub projects on Jenkins slaves on OpenShift

This post showed how to build GitHub projects with Jenkins, Maven and SonarQube 4 on OpenShift. For starters, it used the Jenkins master node for running build jobs. However, when running on a small gear, the master node might run out of memory pretty fast, resulting in a reboot of the node during builds.

In order to resolve this issue, there are two options:

  • limiting the memory of the build or
  • running the build on a slave node.

As spawning additional nodes is easy in a PaaS context such as OpenShift and provides better performance than running builds with little memory, the slave solution seems to be the better approach.

This post shows how.

  1. Create a new DIY app as a slave node (a how-to can be found here) and name it e.g. slave
  2. Create node in Jenkins
    1. Go to Jenkins web UI and create new node:
      https://jenkins-<yourAccount>.rhcloud.com/computer/new
    2. Set the following values:
      Remote FS root: the app-root/data folder on the slave. Typically this is /var/lib/openshift/<slave's UID>/app-root/data/jenkins; you can find out by SSHing to the slave node and calling

      echo $OPENSHIFT_DATA_DIR/jenkins

      Labels: Some label to use within builds to refer the node, e.g. OS Slave #1
      Host: the slave’s hostname, e.g. slave-<youraccount>.rhcloud.com

    3. Add Credentials
      username: <slave's UID>
      Private Key File: Path to a private key file that is authorized for your OpenShift account. In the first post this path was used: /var/lib/openshift/<UID of your jenkins>/app-root/data/git-ssh/id_rsa. Note: $OPENSHIFT_DATA_DIR seems not to work here.
      BTW: You can change the credentials any time later via this URL

      https://jenkins-<yourAccount>.rhcloud.com/credentials/
  3. Prepare slave node: Create same environment as on master in the first post
    1. Create folder structure
      mkdir $OPENSHIFT_DATA_DIR/jenkins
      mkdir $OPENSHIFT_DATA_DIR/.m2
      echo -e "&lt;settings&gt;&lt;localRepository&gt;$OPENSHIFT_DATA_DIR/.m2&lt;/localRepository&gt;&lt;/settings&gt;" &gt; $OPENSHIFT_DATA_DIR/.m2/settings.xml
      
    2. Copy SSH directory from master to same directory on slave, e.g.
      scp -rp -i $OPENSHIFT_DATA_DIR/.ssh $OPENSHIFT_DATA_DIR/.ssh <slave's UID>@slave-<your account>.rhcloud.com:app-root/data/.ssh
    3. As the different cartridges (jenkins and DIY) have different environment variables for their local IP addresses ($OPENSHIFT_JENKINS_IP vs $OPENSHIFT_DIY_IP) we’ll have to improvise at this point. There are two options: Either
      1. Replace all occurrences of $OPENSHIFT_JENKINS_IP
        In all builds and in

        https://jenkins-<yourAccount>.rhcloud.com/configure

        Sonar | Sonar installations
        Database URL: jdbc:postgresql://$OPENSHIFT_DIY_IP:15555/sonarqube
        or

      2. Create an $OPENSHIFT_JENKINS_IP environment variable on your slave machine
        rhc env set OPENSHIFT_JENKINS_IP=<value of $OPENSHIFT_DIY_IP> -a slave

        You can find out the value of $OPENSHIFT_DIY_IP by SSHing to the slave and execute

        echo $OPENSHIFT_DIY_IP 
      3. I’d love to hear suggestions that do better 😉
  4. Adapt Build
    Easiest way is to not use the master at all.
    To do so, go to

    https://jenkins-<yourAccount>.rhcloud.com/configure

    and set # of executors to 0.
    Hit Apply

  5. Limit memory usage.
    Make sure the slave does not run out of memory (which leads to a restart of the node):
    Global properties | Environment variables
    name: MAVEN_OPTS
    value: -Xmx512m
    Hit Save.
  6. Now run a build. It should run on the slave and hopefully succeed 🙂

See also Libor Krzyzanek’s Blog: Jenkins on Openshift wi… | JBoss Developer