Maven: Create a more sophisticated build number

Earlier this year, while working on a project for TRIOLOGY GmbH, I once again used Maven to write a version name into an application, using the mechanism described in my post Maven: Create a simple build number. As a more sophisticated version name was required for this project, we expanded it with a timestamp, SCM information (branch and commit) and a build number, and also created a special name for releases. You can find a how-to here – Version names with Maven: Creating the version name – which is the first part of a small series of blog posts on this topic.

The second part shows how the version name can be read from within the application. While writing the examples for the post, I wondered how many times I must have implemented reading a version name from a file in Java. Way too often! So I decided that this would be the very last time I had to do it, and extracted the logic into a small library: versionName, available on GitHub. What it does and how to use it is described in the second part of the post: Version names with Maven: Reading the version name.
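For illustration, the logic I kept rewriting boils down to loading a properties file and reading a single key from it. A minimal sketch of that hand-rolled approach (the file layout and property name here are made up for illustration, not the versionName library's actual API):

```java
import java.util.Properties;

/** Hand-rolled version reader - the kind of code versionName is meant to replace. */
public class VersionReader {

    /** Reads the "versionName" property from the given properties stream. */
    public static String readVersion(InputStream in) throws IOException {
        Properties properties = new Properties();
        properties.load(in);
        // Fall back to a marker value if the property is missing
        return properties.getProperty("versionName", "unknown");
    }
}
```

In an application this would typically be fed with something like `getClass().getResourceAsStream("/")`, assuming Maven filtered the version into that file at build time.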

Hopefully, this will be useful for someone else. Funny enough, in the new project I’m on, I’m about to reuse it once again. I’m glad I don’t have to write it again. Here’s to reusability 🍺

Android apps – nusic: Find New Music Albums

Have you ever stumbled upon a new album by one of your favorite artists and realized they had released several albums since you last checked?

nusic – your new music

Problems like this can now be solved conveniently with nusic for Android.

All you need is an Android device that contains music of all the artists you like.

After installing, nusic regularly checks if there are upcoming releases for the artists on your device and informs you about any news.

Just install the app, start it once, and it will keep you up to date about new releases via Android notifications. Never again miss any releases of your favorite artists!
You can install nusic from Google Play or get the APK from GitHub.

If you encounter any errors, please report them here instead of just leaving a poor rating.

By the way, nusic is open source. Please contribute by forking nusic on GitHub.


How does it work?

nusic regularly checks MusicBrainz – the open music encyclopedia – for new releases of the artists on your device.

That’s all there is. You don’t need an account and the app is not pulling any other data from your phone!

Automatically downloading/backing up/dumping/exporting databases from remote hosts via the web

The problem

You operate a database-backed website (e.g. Drupal) where you can’t access cron jobs, CGI, Perl or outgoing connections. So one idea to back up your database on a regular basis (which is always a good idea) is to download SQL dumps via a web-based administration tool (such as the backup and migrate plugin for Drupal). Unfortunately, these kinds of downloads cannot simply be automated on the shell using curl or wget, because they require a bit of JavaScript, for example to outsmart the PHP timeout.

The solution

Use a headless browser (that is, a browser without graphical user interface) to automate the task. It fetches the desired page, logs in, (virtually) clicks the download button and downloads the dump file.

It should be a command line tool, in order to run it as a cron job from some server (e.g. a NAS).

Personally, I liked the idea of PhantomJS, but it was not available for my Synology DS213+ PowerPC and I didn’t like the idea of building it from source.

So my plan B was to write a small Java program (remoteDbDumper) that uses the HtmlUnit framework (our headless browser).

How to use

  1. Install the Drupal plugin backup and migrate.
  2. Download and extract remoteDbDumper.
  3. Start it from the shell.
    remoteDbDumper -u <username> -p <password> -o <output dir> <url to backup and migrate>

    Note that the output dir must be an existing directory.

    1. Linux example:
      ./ -u user -p topsecret -o ~/backup
    2. Windows example
      remoteDbDumper.bat -u user -p topsecret -o "%HOMEPATH%\backup"
  4. Use the scheduling mechanism of your choice to call remoteDbDumper regularly, creating backups.

Example (Synology)

Just a short exemplary scenario on how to use remoteDbDumper on a Synology Diskstation (running DSM 4.2) to regularly back up a drupal database.

  1. (if Java is not installed) install Java:
    If available for your Diskstation, use the Java Manager package. Otherwise, you could use a third party Java package (that’s what I had to do).
  2. Download, extract and copy remoteDbDumper to the NAS (e.g. to \\diskstation\public\, which corresponds to /volume1/public/)
  3. SSH to the NAS and check if it works
    /volume1/public/remoteDbDumper-1.0/ -u user -p topsecret -o /volume1/someUser/
  4. (optional) Wrap the command line call in a shell script, e.g.
    BASEDIR=$(dirname $0)
    $BASEDIR/remoteDbDumper-1.0/ -u user -p topsecret -o $1
  5. Either use the web frontend or the crontab to schedule the backup.
    1. Web frontend:
      Go to http://diskstation:5000, (or whatever combination of host name and port you’re using)
      login as admin,
      click on Control Panel | Task Scheduler.
      Then click on Create | User-defined Script.
      Enter a task name, choose a user (preferably not root), set up a schedule (e.g. every Sunday at 8 p.m.).
      Finally, enter the path to remoteDbDumper or the script from step 4, respectively. For the example above (assuming the wrapper script was saved as /volume1/public/, the path would look like this:

      /volume1/public/ /volume1/public/
    2. If you insist on doing it on foot, here’s what to enter in the crontab (again assuming the wrapper script is at /volume1/public/
      vi /etc/crontab
      #minute hour    mday    month   wday    who              command
      0       20      *       *       0       enterUserHere    /volume1/public/ /volume1/public/
    3. Set a marker in your calendar for the next scheduled run, to check if it worked.

Future tasks

In its current state, remoteDbDumper can only back up Drupal databases. Fair enough.

However, with just a little more effort it would be possible to extend remoteDbDumper to support additional web-based database administration tools, such as MySQLDumper, phpMyBackupPro, phpMyAdmin or phpPgAdmin.

In order to do so, just fork the repo on GitHub and implement the interface DbDump.
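For orientation, an implementation boils down to filling in one method. This is only my sketch of what such an extension point might look like – check the actual DbDump interface in the repo for the real names and signature:

```java
import java.io.File;
import java.io.IOException;

/** Hypothetical shape of remoteDbDumper's DbDump extension point (illustrative only). */
public interface DbDump {

    /**
     * Logs into the web-based administration tool at the given URL and
     * writes the SQL dump to the output directory.
     *
     * @return the dump file that was written
     */
    File dumpDatabase(String url, String username, String password,
            File outputDir) throws IOException;
}
```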

JSF: Displaying FacesMessages during render response phase

The problem

A controller runs into an error situation during the last phase of the JSF lifecycle (the RENDER_RESPONSE phase). In order to let the user know that something went wrong, a FacesMessage is added. However, the faces message is not shown on the client. There may or may not be a warning in the log which says “FacesMessage(s) have been enqueued, but may not have been displayed“.

The cause

In the render response phase, the component tree is traversed and each component is rendered (that is, HTML is generated). When an error occurs after the message component has finished rendering, it is not possible to add another message to it.

Some solutions/workarounds

  1. Place the message component at the end of the component tree (that is, the page or template). That way, it’s most likely that the message widget has not been rendered at the time the content is rendered (which is when the error occurs and the FacesMessage is added). The FacesMessages are processed later, when the message widget is rendered.
    However, in most cases you don’t want to display your message at the end of the page. This can be solved with at least these two approaches:

    1. Use an overlay (for example PrimeFaces’ growl widget – <p:growl>), whose position on the rendered page is “independent” of its position in the component tree.
    2. Move the widget via CSS positioning (which is a bit of an ugly workaround).
  2. Try to validate the data in an earlier phase. For example, you could check the data before rendering via an event:
    <f:event type="preRenderView" listener="#{bean.initData}"/>

    Or (if you’re using Seam), you could use Seam page actions, as shown in this thread.

  3. Save the messages in the session and trigger a server render (which can be done with ICEfaces). This can be realized with a PhaseListener as described here.

Which one to use?

This depends on your use case and the component framework you’re using.

Solution #3 should always be your last measure, as it is really heavyweight and it requires additional HTTP requests.

#2 cannot be used in conjunction with some components, requires changing the UI logic and increases the complexity of your code.

Which leaves #1 as the most pragmatic solution. However, it can’t be used when your FacesMessage should be displayed at a specific position within the page (e.g. at the top of the page).

An example

When rendering data using PrimeFaces’ DataTable with lazy loading (p:dataTable), the data is fetched from the database via the LazyDataModel at rendering time (render response phase). Once the data is fetched, it might turn out that some data is missing and the user should be informed about this fact. In order to do so, a FacesMessage would be the way to go. However, as described above, the message never gets rendered.

As far as I know, fetching the data cannot be brought forward to an earlier phase.

As the growl widget was already used within the application as the way of displaying information to the user, the solution was as easy as moving the growl widget to the end of the layout template. And voilà, the messages suddenly show up!

This solution was really easy to realize, but finding out about it took a painful amount of time. So hopefully, this post may speed up the process for anyone stuck with the same problem 🙂

Hibernate: Write SQL to a specific logfile (without additional framework)

This post shows a simple setup for logging Hibernate’s SQL statements to a separate file, including all the parameters, using only log4j and (of course) Hibernate.

SQL in Hibernate

For the purpose of seeing the magic behind an object-relational mapper (e.g. for debugging), it’s always useful to have a look at the actual SQL queries that are generated.

Especially, if you are working (or have to work) with something as useful as Hibernate’s Criteria API.

In Hibernate, the simplest way is to set the parameter hibernate.show_sql to true and Hibernate will log all SQL statements to system out.

Straight after setting this option, two things come to mind:

  • Even small applications can generate tons of SQL statements on your log and
  • the SQL statements contain only question marks instead of the actual parameters, which leaves them only partly useful.

I can’t count the number of SystemOut.log files I saw that were so flooded with Hibernate’s SQL drivel that the important log statements were nowhere to be found.

Using additional Frameworks

In order to solve these issues, the best approach might be to intercept the SQL statements at the JDBC level and then log them. This can be achieved using a framework like p6spy or log4jdbc. These frameworks log the complete SQL statements, including the actual parameter values instead of question marks, to a file.

However, those frameworks require quite some configuration effort. In some cases they can’t be used at all, e.g. in enterprise software – log4jdbc doesn’t even support Maven (as of log4jdbc 4.1.2).

Using only Hibernate (and log4j)

A far simpler alternative is to quickly set up your logging framework (which should already be part of your application) to log the SQL statements, the values of the parameters and the values returned by the database to a separate file.

A configuration for log4j might look as follows:

log4j.rootLogger=INFO, Console, File

log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} %l - %m%n

log4j.appender.File=org.apache.log4j.RollingFileAppender
log4j.appender.File.File=myApplication.log
log4j.appender.File.layout=org.apache.log4j.PatternLayout
log4j.appender.File.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} %l - %m%n

log4j.appender.FileSql=org.apache.log4j.RollingFileAppender
log4j.appender.FileSql.File=myApplication_sql.log
log4j.appender.FileSql.layout=org.apache.log4j.PatternLayout
log4j.appender.FileSql.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n

# SQL statements go to FileSql only; additivity=false keeps them out of the root logger
log4j.logger.org.hibernate.SQL=DEBUG, FileSql
log4j.additivity.org.hibernate.SQL=false
# Parameter values and extracted results
log4j.logger.org.hibernate.type=TRACE, FileSql
log4j.additivity.org.hibernate.type=false

This results in all SQL statements being logged to myApplication_sql.log whereas all other application-related messages (including warnings issued by Hibernate) are written to myApplication.log.

The important part is the additivity parameter, which keeps the log messages from being propagated to the parent logger. That is, the SQL statements will not flood your system out.

This solution has only one drawback compared to the additional frameworks solution: It still contains the question marks. However, the org.hibernate.type logger writes additional log statements that contain the values.

Note: Setting the logger org.hibernate.SQL to DEBUG seems to be equivalent to setting the parameter hibernate.show_sql to true.

In addition, I’d always recommend setting the hibernate.format_sql parameter to true in order to make the SQL statements easier to read.
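Both settings are plain Hibernate configuration parameters, e.g. in or hibernate.cfg.xml (property names as documented by Hibernate):

```properties
hibernate.show_sql=true
hibernate.format_sql=true
```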

The result might look something like this:

2013-04-16 21:18:47,684 DEBUG SQL -
    select as id0_1_
        app table0_
2013-04-16 21:18:47,684 TRACE BasicBinder - binding parameter [1] as [BIGINT] - 95
2013-04-16 21:18:47,684 TRACE BasicExtractor - Found [1] as column [id0_1_]


To conclude, the log4j-only approach can be set up in no time and contains all the necessary information for debugging (including the values returned by the database), whereas a dedicated JDBC logging framework requires more configuration effort but puts out the SQL statements more neatly.


Once more, I found most of the information aggregated in these articles on Stack Overflow: here and there. Thanks, guys!

Songbird/Nightingale: Exporting playlists

The playlist problem

As mentioned in my previous post, I have been using Songbird/Nightingale for quite some time, in spite of the drawback mentioned there.

No matter if using Songbird or Nightingale, one of my main problems remained the same: the playlists are trapped somewhere inside the library with no way to export them as playlist files. Absolutely no way? That’s not the whole truth, as there are (or were) addons like Playlist Export Tool, Export My Playlists or FolderSync. Thanks to the developers, by the way – those addons were really useful to me!

Unfortunately, with every new Songbird release, all addons stopped working. In other words: whenever I made the mistake of updating, I wasn’t able to export playlists anymore. I actually don’t even know if there are any addons left that are compatible with the most recent version of Songbird.

The playlist solution

One more good thing about Songbird (and Nightingale as well) is that it uses an SQLite database. This allows for accessing the Songbird database from a variety of programming languages without getting your hands dirty, and makes way for a “third-party” tool that is capable of exporting playlists from the Songbird database and doesn’t depend on the Songbird version. I developed such an exporter in Java and have been using it for some time to make my Songbird playlists available on my NAS.

As I thought this exporter might be useful to others, I refactored the quick and dirty source code and published it on GitHub. So now, I’m proud to present songbirdDbTools, a Java-based playlist exporter for Songbird/Nightingale that was just released in its very first version. Hopefully, it will be of use to somebody else who was missing this functionality as much as I did 🙂


The name is a bit of an exaggeration at this point, as the tool only provides the export functionality. However, I put some effort into designing songbirdDbTools to be as extensible as possible, and I have a couple of things in mind that would be useful.
For example, synchronizing playlists; that is, exporting not only the playlist but copying the member files as well. This might come in handy for effectively synchronizing files to mobile devices.
Or finding zombies and ghosts (like the Exorcist used to do, three years ago). Another neat feature might be to find out all the playlists a file belongs to.

If only I had more time!

So, just in case you’re interested in contributing: Fork songbirdDbTools on GitHub!

Shutting down JUnit tests “gracefully” in eclipse


I started working on an integration testing project recently. We’re using an integrated container (provided by Cargo) to host the application under test and have Selenium WebDriver connecting to the server via browser.

In this environment, I bumped into a problem that I didn’t consider being one at first: every time I click the terminate button during a test run in eclipse, it just kills the process – no @After or onTearDown() methods get invoked, nor is a shutdown hook called.
This leaves me with the problem that when the JVM that runs the test is killed, no cleanup is performed: the server isn’t stopped, the browser isn’t closed and the database isn’t cleaned up.
As a consequence, I have to do this manually after every test that I terminate, which, at least during active development, is about every single one, as these tests unfortunately aren’t very fast. Doing so like 100 times a day really started to make me angry.

So I spent some time looking for a solution that terminates JUnit tests within eclipse more gently, but no luck. A couple of people even filed bugs at eclipse over the last years, of which not a single one ever got fixed.

I just wonder why eclipse can’t call @After/onTearDown() when terminating JUnit tests.

Anyway, after getting inspired by a couple of Stack Overflow posts, I put together my own rather simple “solution”, which is more of a workaround, really.
Still, I think it might be worth posting here.

Solution overview

My approach involves having a separate thread listening to standard in, waiting for a specific “signal” – a string whose occurrence initiates the “soft” termination, just like SIGINT on unix-like systems. Once the signal is received, the console listener calls System.exit() and leaves the cleanup to a shutdown hook.

The shutdown hook is realized as a user-defined Thread, which is registered as the JVM’s shutdown hook. This thread is executed by the runtime after System.exit() is invoked.
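In isolation, the shutdown-hook registration is plain JDK API – nothing JUnit- or eclipse-specific about it. A minimal sketch:

```java
/** Demonstrates registering a JVM shutdown hook. */
public class ShutdownHookDemo {

    /** Registers the given cleanup as a shutdown hook and returns the hook thread. */
    public static Thread registerCleanup(final Runnable cleanup) {
        Thread hook = new Thread(cleanup, "cleanupShutdownHook");
        // Runs when the JVM shuts down, e.g. after System.exit() or Ctrl+C
        Runtime.getRuntime().addShutdownHook(hook);
        return hook;
    }
}
```

The JUnitShutdown class below registers its hook in exactly this way once the exit signal is received.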

Speaking code

Sounds complex? Maybe a couple of lines of code illustrate the mechanism.

You can also find the following classes as part of a demo project on my GitHub. So pull it and try it out for yourself!
The following class realizes the shutdown mechanism mentioned above.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.lang.management.ManagementFactory;

import org.apache.log4j.Logger;

public class JUnitShutdown {
	/** Log4j logger. */
	private Logger log = Logger.getLogger(this.getClass());

	/** User-defined shutdown hook thread. */
	private Thread shutdownHookThread;
	/**
	 * The "signal" string, whose input on the console initiates the shutdown of
	 * the test.
	 */
	private String exitSignalString;
	/** Listen for the signal only if the test is run in debug mode. */
	private boolean isDebugOnly;

	/**
	 * Creates an instance of the shutdown hook that listens to the console for
	 * an <code>exitSignal</code> and executes <code>shutdownHook</code> when
	 * the signal is received.
	 *
	 * @param exitSignal
	 *            the signal that leads to exiting the test.
	 * @param isUsedInDebugOnly
	 *            if <code>true</code>, <code>exitSignal</code> is only
	 *            evaluated if the test is run in debug mode
	 * @param shutdownHook
	 *            the thread that is executed when <code>exitSignal</code> is
	 *            received.
	 */
	public JUnitShutdown(final String exitSignal,
			final boolean isUsedInDebugOnly, final Thread shutdownHook) {
		shutdownHookThread = shutdownHook;
		exitSignalString = exitSignal;
		this.isDebugOnly = isUsedInDebugOnly;
		initShutdownHook();
	}

	/**
	 * Allows for cleanup before test cancellation by listening to the console
	 * for a specific exitSignal. On this signal, registers a shutdown hook that
	 * performs the cleanup in a separate thread.
	 */
	private void initShutdownHook() {
		if (isDebugOnly
				&& !ManagementFactory.getRuntimeMXBean().getInputArguments()
						.toString().contains("-agentlib:jdwp")) {
			// Debug-only mode, but not running in debug mode: don't listen
			return;
		}

		/* Start thread which listens to */
		Thread consoleListener = new Thread() {
			public void run() {
				BufferedReader bufferReader = null;
				try {
					bufferReader = new BufferedReader(new InputStreamReader(
					/* Read from until the exit signal is received */
					while (!bufferReader.readLine().equals(exitSignalString)) {
						doNothing();
					}

					// Add shutdown hook that performs cleanup
					Runtime.getRuntime().addShutdownHook(shutdownHookThread);
					log.debug("Received exit signal \"" + exitSignalString
							+ "\". Shutting down test.");
					System.exit(0);
				} catch (IOException e) {
					log.debug("Error reading from console", e);
				}
			}

			/**
			 * Is not doing a thing.
			 */
			private void doNothing() {
			}
		};
		consoleListener.setDaemon(true);
		consoleListener.start();
	}
}

Now what to do with JUnitShutdown?
The following rather senseless JUnit test keeps your machine busy for a while (like, forever). At the beginning it opens an exemplary external resource (a file), which is closed and then deleted on cleanup. That is, entering “q” in your console during test execution results in closing and then deleting the resource; hitting the terminate button in eclipse, however, will cause the file to remain on the system.

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import org.apache.log4j.Logger;
import org.junit.Test;

public class SomeLongRunningTest {
	/** Log4j logger. */
	private Logger log = Logger.getLogger(this.getClass());
	/**
	 * An exemplary resource which is not deleted when the test gets terminated,
	 * but gets deleted when using {@link JUnitShutdown}.
	 */
	private static final String SOME_FILE = "some.file";
	/**
	 * An exemplary resource which is not closed when the test gets terminated,
	 * but gets closed when using {@link JUnitShutdown}.
	 */
	private BufferedWriter someResource = null;

	/** A value that controls how many values are output during the test. */
	private static final int SOME_DIVISOR = 100000000;

	/**
	 * The shutdown hook listening to the exit signal.
	 */
	private JUnitShutdown shutdownHook = new JUnitShutdown("q", false,
			new Thread("testShutdownHook") {
				public void run() {
					cleanupResources("shutdown hook");
				}
			});

	/**
	 * Some exemplary test.
	 *
	 * @throws IOException
	 *             some error
	 */
	@Test
	public void testSomething() throws IOException {
		try {
			someResource = new BufferedWriter(new FileWriter(
					new File(SOME_FILE), false));
			doSomethingExpensive();
		} finally {
			onTearDown();
		}
	}

	/**
	 * Keeps the machine busy forever.
	 *
	 * @throws IOException
	 *             something went wrong during writing to file
	 */
	private void doSomethingExpensive() throws IOException {
		int i = 0;
		while (true) {
			if (++i % SOME_DIVISOR == 0) {
				log.debug(i);
				someResource.write(String.valueOf(i));
			}
		}
	}

	/**
	 * This method is only called when the test ends without being killed.
	 */
	public void onTearDown() {
		cleanupResources("onTearDown()");
	}

	/**
	 * Closes {@link #someResource} and deletes {@link #SOME_FILE}.
	 *
	 * @param caller
	 *            the caller of this method for logging purpose only.
	 */
	private void cleanupResources(final String caller) {
		log.debug("cleanupResources() called by " + caller);
		if (someResource != null) {
			try {
				someResource.close();
			} catch (IOException e) {
				log.error("Unable to close resource", e);
			}
		}
		File f = new File(SOME_FILE);
		if (f.exists() && !f.isDirectory()) {
			f.delete();
		}
	}
}
Running this test and pressing “q” and then enter will produce log output such as this:

2012-11-20 22:54:14,248 [main] DEBUG SomeLongRunningTest  info.schnatterer.test.sometest.SomeLongRunningTest.doSomethingExpensive( 100000000
2012-11-20 22:54:15,260 [main] DEBUG SomeLongRunningTest  info.schnatterer.test.sometest.SomeLongRunningTest.doSomethingExpensive( 200000000
q2012-11-20 22:54:16,175 [main] DEBUG SomeLongRunningTest  info.schnatterer.test.sometest.SomeLongRunningTest.doSomethingExpensive( 300000000

2012-11-20 22:54:16,505 [Thread-0] DEBUG JUnitShutdown  info.schnatterer.test.shutdown.JUnitShutdown$ Received exit signal "q". Shutting down test.
2012-11-20 22:54:16,544 [testShutdownHook] DEBUG SomeLongRunningTest  info.schnatterer.test.sometest.SomeLongRunningTest.cleanupResources( cleanupResources() called by shutdown hook

More advanced shutdowns

Where to go next?
I kept the examples above rather simple to make my point. Still, I’m sure that there is a lot you could extend or improve.
For instance, in the project mentioned above, I extended the mechanism, so now I have two signals:

  • “q” stops the server, closes the browser and deletes all the test data from the database.
  • “q!” (does that sound familiar?) is a bit faster, it omits the database-related stuff (as the data is set up at the beginning of each test run anyway).

I have to admit “q!” improves productivity tremendously 🙂
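Reduced to its dispatch logic, such a multi-signal variant could look like the sketch below (class and method names are mine, not from the actual project; the real version wires this into the console-listener thread shown above):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Scanner;

/** Maps several exit signals to different cleanup actions. */
public class MultiSignalDispatcher {

    private final Map<String, Runnable> hooks = new LinkedHashMap<String, Runnable>();

    /** Registers a cleanup action for a signal string. */
    public void register(String signal, Runnable hook) {
        hooks.put(signal, hook);
    }

    /**
     * Reads lines from the stream until a registered signal occurs, runs the
     * corresponding hook and returns the signal (or null on end of input).
     */
    public String awaitAndRun( in) {
        Scanner scanner = new Scanner(in);
        while (scanner.hasNextLine()) {
            String line = scanner.nextLine();
            Runnable hook = hooks.get(line);
            if (hook != null) {
                hook.run(); // here the real version would call System.exit()
                return line;
            }
        }
        return null;
    }
}
```

Registering “q” with the full cleanup and “q!” with the faster one then gives exactly the behavior described above.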

You may also have noted that this mechanism is not limited to JUnit. I first considered implementing it using JUnit rules, but then found out that it would be easier and more generic not to do so.

Let me know if you have any ideas on how to further improve this mechanism.