Modernizing Android UIs part 1: Migrating from ActionBarSherlock to Material Design

This post shows by example the steps necessary for migrating an Android application from ActionBarSherlock to Material Design (introduced in Android Lollipop/version 5.x/API level 21/22), while keeping compatibility with at least Android Gingerbread/version 2.3/API level 9. It uses the appcompat-v7 library on API level 22. The app that was migrated in this example (nusic) is developed using Eclipse, Maven and RoboGuice.

Before-after comparison

Let’s begin with some before-after screenshots:

ActionBarSherlock

AppCompat-v7, API-22

Preferences: ActionBarSherlock

Preferences: AppCompat-v7, API-22

Basic migration (appcompat-v7)

The following lists the steps that were necessary to change the app as shown in the screenshots above.

  • Update Android SDK Tools
  • Eclipse
    • Set up an Eclipse project for appcompat as described here.
    • Link your project with it: right click on your project | Properties | Android | Library | Add
    • Remove ActionBarSherlock in the same menu as above.
  • Set Up Maven Build
    • Add the Android support repository (part of the local Android SDK):

      <repositories>
          <repository>
              <id>android</id>
              <url>file://${env.ANDROID_HOME}/extras/android/m2repository</url>
          </repository>
      </repositories>

    • Add the appcompat-v7 dependency:

      <dependency>
          <groupId>com.android.support</groupId>
          <artifactId>appcompat-v7</artifactId>
          <version>${android.compatibility-v7.version}</version>
          <type>aar</type>
      </dependency>
      
  • Migrate from ActionBarSherlock to appcompat
    • styles.xml

      <!-- <style name="AppBaseTheme" parent="@style/Theme.Sherlock"> -->
       <style name="AppBaseTheme" parent="@style/Theme.AppCompat">
      
    • Replacing classes
      • RoboSherlockFragment -> RoboFragment
      • RoboSherlockFragmentActivity -> RoboActionBarActivity
      • RoboSherlockPreferenceActivity -> Write your own RoboAppCompatPreferenceActivity that looks like this (or as described here), but is derived from RoboPreferenceActivity (see here for the class that was used in the example, and the sketch right after this list).
        Then derive your class from it, just as you did before with RoboSherlockPreferenceActivity.
    • Replacing methods
      • getSherlockActivity() -> getActivity()
      • getSupportMenuInflater() -> getMenuInflater()
    • Fixing imports
      • android.view.Menu
      • android.support.v7.app.ActionBar
    • Updating proguard.cfg
      • Remove actionbarsherlock
      • Add
        # support4, appcompat-v7, design support
        -dontwarn android.support.**
        -keep class android.support.** { *; }
        -keep interface android.support.** { *; }
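
For reference, a minimal sketch of what such a RoboAppCompatPreferenceActivity could look like is shown below. It assumes appcompat-v7 22.1 or higher (where AppCompatDelegate is available) and only shows the most important callbacks; the class that was actually used in the example (linked above) might differ in detail.

	import android.os.Bundle;
	import android.support.v7.app.ActionBar;
	import android.support.v7.app.AppCompatDelegate;
	import android.view.MenuInflater;

	import roboguice.activity.RoboPreferenceActivity;

	/**
	 * Sketch of a PreferenceActivity that provides an AppCompat action bar
	 * as well as RoboGuice injection. Only the most important delegate
	 * callbacks are shown here.
	 */
	public abstract class RoboAppCompatPreferenceActivity extends RoboPreferenceActivity {

		private AppCompatDelegate delegate;

		@Override
		protected void onCreate(Bundle savedInstanceState) {
			// Let the AppCompat delegate hook into view inflation before super.onCreate()
			getDelegate().installViewFactory();
			getDelegate().onCreate(savedInstanceState);
			super.onCreate(savedInstanceState);
		}

		@Override
		protected void onPostCreate(Bundle savedInstanceState) {
			super.onPostCreate(savedInstanceState);
			getDelegate().onPostCreate(savedInstanceState);
		}

		public ActionBar getSupportActionBar() {
			return getDelegate().getSupportActionBar();
		}

		@Override
		public MenuInflater getMenuInflater() {
			return getDelegate().getMenuInflater();
		}

		@Override
		protected void onStop() {
			super.onStop();
			getDelegate().onStop();
		}

		@Override
		protected void onDestroy() {
			super.onDestroy();
			getDelegate().onDestroy();
		}

		private AppCompatDelegate getDelegate() {
			if (delegate == null) {
				delegate = AppCompatDelegate.create(this, null);
			}
			return delegate;
		}
	}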
        

For further info please see

iTunes: Exporting playlists with relative paths

TL;DR; A way of exporting iTunes playlists with relative paths is described here.

iTunes Export

Once upon a time I wrote a little tool for exporting playlists from Songbird/Nightingale. After migrating to iTunes, I was looking for a tool that provides the same functionality for iTunes. Fortunately, there already is one: iTunes Export. It comes in two flavors – a UI and a console version. I’m more the console type of guy, so my choice is clear. The latest release, version 2.2.2, was published in 2010, almost 5 years ago… and it still works with iTunes 12.1.2 – it’s a miracle! And it’s fast – my approximately 100 playlists are exported in less than 2 minutes.

Relative Playlist workaround

Among all those parameters of iTunes Export, there is none for creating relative playlists, though. However, we can work around this by combining the musicPath and musicPathOld parameters. Here’s what the doc says:

  • musicPath
    iTunes Export will use the absolute location of your music files in the playlist. iTunes Export accepts a command line parameter that will override this default. Example:

    java -jar itunesexport.jar -musicPath="c:\My Music Directory"
  • musicPathOld
    iTunes Export will only apply the prefix to tracks stored in the directory configured in iTunes as the iTunes Music Folder location. Files stored in a different directory will not have the prefix applied. If you only wish to override a portion of the music path you can specify the musicPathOld parameter. iTunes Export will replace this path with the musicPath parameter instead of replacing the default music path.

    java -jar itunesexport.jar -musicPathOld="c:\My Old Path"

A bit complicated, eh?!

Exporting playlists with relative paths by example

I’ll show how to use those parameters with a small example. Imagine the following folder structure:

D:\Music\
  Playlists\
  Artist1\ 
    Song1.ext
  Artist2\
    Album\
      Song2.ext
  iTunes
    iTunes Library.xml

If we had a playlist called playlistX that contained Song1 and Song2 (anyone remember a band called Blur? 🙂 ) and exported it without further parameters to the Playlists folder, it would look like this:

D:\Music\Artist1\Song1.ext
D:\Music\Artist2\Album\Song2.ext

What we’re going to do is replace the absolute part with a relative one (with respect to the destination folder for the playlists). In our example: replace “D:\Music” with “..”, because it’s the parent folder of “D:\Music\Playlists”. That’s exactly what the parameters mentioned above are for! musicPathOld is the part that is going to be replaced by musicPath. Speaking of which, our call to iTunes Export looks like this:

java -jar itunesexport.jar -library="D:\Music\iTunes\iTunes Library.xml" -outputDir="D:\Music\Playlists" -musicPath=".." -musicPathOld="D:\Music"

and results in the file D:\Music\Playlists\playlistX.m3u

..\Artist1\Song1.ext
..\Artist2\Album\Song2.ext

Migrating from Songbird/Nightingale to iTunes

TL;DR; If you still use Songbird or Nightingale and want to migrate your music database to iTunes, start reading here.

Bye-bye Songbird, bye-bye Nightingale

Is anyone out there still using Nightingale or even Songbird? I started using Songbird in 2009 (or even earlier) and switched to Nightingale in 2013. All that time I loved the open source approach of both applications and even contributed a bit myself. However, there were also a lot of drawbacks, like performance and incompatibilities of addons after each new version. Speaking of addons – the idea of a modular media player that is extensible just like Firefox or Thunderbird is wonderful. However, it seems to me there’s not much of a community left that releases addons for Nightingale. One of my favorites used to be MLyrics, which was last released in 2013 and doesn’t work properly anymore in the current version of Nightingale (at least for me). Still, there seems to be some development going on. The same goes for the core media player software itself: we all know that Songbird was discontinued in 2013, and the last release of Nightingale was published in January 2014. Even though there also seems to be some development going on, I lost hope that there will be better usability at some point. So I finally decided, with a heavy heart, to move on. Nevertheless, I’d like to say thank you to all the Nightingale developers for their strong efforts to keep the dream of a real open source alternative to iTunes alive.

Is there a better alternative to Nightingale? That question I cannot answer properly. My reasons for migrating to iTunes are that it has been maintained by a huge company for years and it’s one of the most popular media players around. So hopefully, it might get along better with my rather huge media library in terms of performance. Plus it is the only tool that is capable of feeding my iPod Nano 6 :-/

Technical approaches of migrating to iTunes

Leaving sentimentality behind – how to migrate from Songbird/Nightingale to iTunes?

One of the nice things about Songbird/Nightingale is their SQLite database. I had already worked with it before, when creating a playlist exporter for Songbird in Java. It’s always a good idea not to reinvent the wheel. So after extracting the database wrapper into a separate project – songbirdDbApi4j (right now, I really wonder why I named it like this 😮 ) – we’re halfway done with the Songbird to iTunes migration. Almost.

The other half – importing to iTunes – was a bit more challenging. iTunes stores its database in an XML file. So one approach is to access this XML directly, just like tools such as iTunes Export do. However, this file is generated by iTunes merely for the purpose of exporting; the actual database is stored in the ITL file. It would be possible to recreate the ITL from the XML, but this approach is not very convenient. So a different approach might be more suitable here: on Windows, iTunes offers a COM interface. It’s poorly documented, but fortunately the developers of COM4j implemented it as one of their sample projects. In order to ease the use of this API, I created a very basic Java wrapper for iTunes’ COM API (itunes4j).

Almost done! What’s missing is a bit of glue logic that reads all files, some of their attributes and all playlists from Songbird using songbirdDbApi4j and adds them to iTunes via itunes4j. A rough sketch of this glue logic is shown below.
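
To illustrate the idea (not the actual implementation of the tool presented in the next section), here is a minimal sketch of that glue logic. All class and method names of songbirdDbApi4j and itunes4j used here are hypothetical and only meant to show the flow; the real APIs may look different.

	// Hypothetical sketch - the class and method names of songbirdDbApi4j and
	// itunes4j are made up for illustration; the real APIs may differ.
	public class Songbird2ItunesSketch {

		public void migrate(String songbirdDbFile) throws Exception {
			SongbirdDb songbirdDb = new SongbirdDb(songbirdDbFile); // songbirdDbApi4j (hypothetical)
			ITunes iTunes = new ITunes();                           // itunes4j COM wrapper (hypothetical)

			// Add all tracks, including some of their attributes
			for (SongbirdTrack track : songbirdDb.getAllTracks()) {
				ITunesTrack added = iTunes.addFile(track.getPath());
				added.setRating(track.getRating());
				added.setPlayedCount(track.getPlayCount());
			}

			// Re-create all playlists and add their members
			for (SongbirdPlaylist playlist : songbirdDb.getPlaylists()) {
				ITunesPlaylist target = iTunes.createPlaylist(playlist.getName());
				for (SongbirdTrack member : playlist.getTracks()) {
					target.addFile(member.getPath());
				}
			}
		}
	}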

Migrating to iTunes

After a lot of empirical studies and nightly test migrations, I’m proud to present a tool for migrating from Songbird to iTunes: songbird2itunes. In case there happen to be any other Nightingale survivors out there who run on Windows and would like to migrate their music database to iTunes: you might just give it a go!

I did my best to make it a resilient migration tool. Still, there might be errors. So:

  1. Double check if you really want to leave Nightingale behind
  2. Make sure to read the wiki first. If you’re sure you want to do this, start the migration as described there.
  3. When the migration is done, check if the statistics show any warnings.
  4. If so, look for WARN in songbird2itunes.log and check whether those warnings are critical for you.
  5. In case of error, please fix it and contribute 🙂
  6. Manually check your new iTunes library, making sure everything is as expected

Building GitHub projects with Jenkins, Maven and SonarQube 4.1.1 on OpenShift

Basic installation SonarQube

There are different community-driven Sonar cartridges around. There is

  • this one, which is based on a Tomcat cartridge and provides SonarQube 3.x, and
  • that one, which comes with SonarQube 4.0.
  • The most up-to-date and flexible one is this, though. It downloads a specific version of SonarQube with each build. At the moment it works with version 4.1.1. I’m still working on getting SonarQube 5 to run on OpenShift, but haven’t succeeded yet.

There also is a tutorial that shows how to install SonarQube 3.1.1. It also contains general thoughts on how to bypass OpenShift’s restrictions.

Anyway, to install SonarQube 4.1.1 execute the following steps on your machine:

  1. Create the app:

    rhc app create sonar diy-0.1 postgresql-9.2

    Make sure to remember the login and passwords!

  2. Replace the default DIY content with the openshift-sonarqube cartridge:

    git rm -r diy .openshift misc README.md
    git remote add upstream -m master https://github.com/worldline/openshift-sonarqube.git
    git pull -s recursive -X theirs upstream master
    git push

  3. Login to your SonarQube instance at

    http://sonar-<yourAccount>.rhcloud.com/

    The default login and password are admin / admin.
    You might want to change the password right away!

Basic installation Jenkins

A lot of information within this paragraph was taken from here.

  1. Create a Jenkins gear with Git-SSH
    rhc create-app jenkins jenkins-1 "https://cartreflect-claytondev.rhcloud.com/reflect?github=majecek/openshift-community-git-ssh"
  2. Authorize your Jenkins node to communicate with other gears (and with your Git repository):
    Generate an SSH key for your Jenkins node. SSH to the Jenkins node and run

    ssh-keygen -t rsa -b 4096 -f $OPENSHIFT_DATA_DIR/git-ssh/id_rsa
  3. Add the key to your OpenShift account, either
    • via web console:
      In the SSH console

      cat $OPENSHIFT_DATA_DIR/git-ssh/id_rsa.pub

      then copy and paste the output into the web console,
      or

    • via rhc:
      Download the public key (id_rsa.pub) to your host (e.g. via SFTP) and use the

      rhc sshkey add

      command to authorize the public key for your OpenShift account.
      If you plan on accessing a private repo or want to allow Jenkins to commit to your repo (e.g. for generating releases with the Maven release plugin), you should also add the key to your repo account. See GitHub Help.

  4. Install Plugins
    Browse to Update Center

    https://jenkins-<yourAccount>.rhcloud.com/pluginManager/advanced

    and hit Check Now (as described here).
    Then go to the Available tab and install

    1. Sonar Plugin,
    2. GitHub plugin,
    3. embeddable-build-status (if you’d like to include those nifty build badges in your README.md).

    While you’re at it, you might as well update the already installed plugins in the Updates tab.
    Then hit Install without restart or Download and install after restart. If necessary, you can restart your app like so

    rhc app restart -a jenkins
  5. Set up the Maven settings.xml in a writable location.
    • SSH to Jenkins
      mkdir $OPENSHIFT_DATA_DIR/.m2
      echo -e "<settings><localRepository>$OPENSHIFT_DATA_DIR/.m2</localRepository></settings>" > $OPENSHIFT_DATA_DIR/.m2/settings.xml
      
    • Browse to Configure System
      https://jenkins-<yourAccount>.rhcloud.com/configure

      Default settings provider: Settings file in file system
      File path=$OPENSHIFT_DATA_DIR/.m2/settings.xml

  6. Set up the main Jenkins node as a slave (easy to set up and doesn’t need extra gears).
    Go to Configure System

    https://jenkins-<yourAccount>.rhcloud.com/configure

    and set
    # of executors: 1
    As an alternative, you could also use another gear as dedicated Jenkins slave. To do so, follow the steps described here.

    [EDIT 2015-08-09: As it turned out, memory is too low to run the Jenkins master and builds on one node. See my second post on how to introduce a dedicated slave node to this setup.]

  7. Setup sonar plugin
    On the Jenkins frontend, go to Configure System

    https://jenkins-<yourAccount>.rhcloud.com/configure
    • Global properties,
      tick Environment variables
      Click Add
      name=SONAR_USER_HOME
      value=$OPENSHIFT_DATA_DIR
      See here for more information.
    • Then set up the plugin itself
      Navigate to Sonar, Sonar installations and set the following
      Name=<be creative>
      Server URL:

      http://sonar-<yourAccount>.rhcloud.com/

      Sonar account login: admin
      Sonar account password: <your pw>, default: admin
      Database URL: jdbc:postgresql://$OPENSHIFT_JENKINS_IP:15555/sonar
      Database login: The admin account that was returned when you first created the sonar application
      Database password: The password that was returned when you first created the sonar application

    • Hit Save

Configure build for a repository

Now let’s set up our first build.

  1. Go to
    https://jenkins-<yourAccount>.rhcloud.com/view/All/newJob

    Item name: <your Project name>
    Build a free-style software project (Unfortunately, Maven projects do not work due to OpenShift’s restrictions.)
    Hit OK

  2. On the next Screen
    GitHub project:

    https://github.com/<your user>/<your repo>/

    Source Code Management:

    https://github.com/<your user>/<your repo>.git

    Branch Specifier (blank for ‘any’): origin/master
    Build Triggers: Tick: Build when a change is pushed to GitHub
    Build | Execute Shell

    cd $WORKSPACE
    # Start the actual build
    mvn clean compile test package
    

    Post-build Actions | Add post-build action | Sonar

  3. I’d also recommend the following actions
    Post-build Actions | Add post-build action | Publish JUnit test result report
    Test report XMLs=target/surefire-reports/TEST-*.xml
    Post-build Actions | Add post-build action | E-mail Notification
    Recipients=<your email address>
  4. Hit Apply.
  5. That’s it for the basic build setup. Now for the fun part: we need to find a way for Jenkins to reach Sonar’s database.
    We’ll use an SSH tunnel for that.
    Build | Add build step | Execute Shell
    Now enter the following:

    # Make sure Tunnel for Sonar is open
    # Find out IP and port of DB
    OPENSHIFT_POSTGRESQL_DB_HOST_N_PORT=$(ssh -i $OPENSHIFT_DATA_DIR/git-ssh/id_rsa -o "UserKnownHostsFile=$OPENSHIFT_DATA_DIR/git-ssh/known_hosts" <UID>@sonar-<yourAccount>.rhcloud.com '(echo `printenv OPENSHIFT_POSTGRESQL_DB_HOST`:`printenv OPENSHIFT_POSTGRESQL_DB_PORT`)')
    # Open tunnel to DB
    BUILD_ID=dontKillMe nohup ssh -i $OPENSHIFT_DATA_DIR/git-ssh/id_rsa -o "UserKnownHostsFile=$OPENSHIFT_DATA_DIR/git-ssh/known_hosts" -L $OPENSHIFT_JENKINS_IP:15555:$OPENSHIFT_POSTGRESQL_DB_HOST_N_PORT -N <UID>@sonar-<yourAccount>.rhcloud.com &
    

    This will tunnel requests from your Jenkins’ local port 15555 via SSH to your Sonar gear, which will forward them to its local PostgreSQL database.
    What is missing is a script that explicitly closes the tunnel. But for now I’m just happy that everything is up and running. The tunnel will eventually be closed after a timeout. Let me know if you have any ideas on how to improve the tunnel handling.

  6. Finally, press Save and you’re almost good to go.
  7. Before running your first build you should SSH to your Jenkins once more and
    ssh -i $OPENSHIFT_DATA_DIR/git-ssh/id_rsa -o "UserKnownHostsFile=$OPENSHIFT_DATA_DIR/git-ssh/known_hosts" <UID>@sonar-<yourAccount>.rhcloud.com

    so that the Sonar gear is added to the list of known hosts.

Moving from Google Code to GitHub: Migrating the wiki

When moving your project repositories from Google Code to GitHub, the migration from SVN to Git is supported by a rich toolset.

But what about the project wikis?

There’s this tutorial, which also contains a Python tool to convert the syntax. I somehow didn’t manage to get it to work.

As I had only a few pages to migrate, I pragmatically decided to do it manually.

Here are the steps that were performed:

  1. On your GitHub repo, create the first wiki page (which effectively creates the wiki).
  2. Clone the wiki via git (it’s much more comfortable to work on a local copy):
    git clone https://github.com/<user>/<repo>.wiki.git
    
  3. Copy the Google Code wiki files into your local clone.
  4. Change the file endings to .md.
  5. If your Google Code wiki featured a sidebar, rename your sidebar file to _Sidebar.md. The links are converted later on.
  6. For each wiki page, do the following steps (a sketch automating some of these replacements follows after the complete list):
    1. Convert bold words: replace the regex
      \*(.*)\*
      by
      **\1**
    2.  Convert italic words, replace
      _word_
      by
      *word*
    3. Replace headings for all levels
      1. = Heading1 = to * Heading1
      2. == Heading2 == to ** Heading2
      3.  And so on
    4. Convert links by replacing the regex
      \[([^\s]*) (.*?)]
      by
      [[\2|\1]]
      For anchors (starting with #) make all characters lowercase and replace underscores (_) by minus (-).
    5. Replace
      <code> or {{{
      by
      ``` java
      (replace java by any other available highlighter).
    6. Replace
      </code> or }}}
      by
      ```
      (make sure to always have the ``` in a new line).
    7. Remove #summary. Leave the text after it as is.
    8. Remove the line that starts with #sidebar on each page.
    9. Convert ordered lists: replace the remaining
      #
      by
      1.
    10. Convert unordered lists: Replace
      -
      by
      *
  7. Push the changes upstream.
  8. Manually compare each page of the old version with the new version.
  9. You might have to adjust the layout of the sidebar a bit.
  10. Delete the Google Code wiki from your source code repository.
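
If you have more than a handful of pages, some of these replacements can be scripted. Below is a minimal sketch in Java (not the Python tool mentioned above) that applies only the bold, italic and link conversions from the list to all .md files in the local wiki clone; the remaining steps (headings, lists, #summary/#sidebar) would still have to be added or done manually, and the regexes are as rough as the manual ones above.

	import java.io.IOException;
	import java.nio.charset.StandardCharsets;
	import java.nio.file.DirectoryStream;
	import java.nio.file.Files;
	import java.nio.file.Path;
	import java.nio.file.Paths;

	/** Sketch: applies some of the Google Code wiki to GitHub Markdown replacements. */
	public class WikiConverterSketch {

		public static void main(String[] args) throws IOException {
			// First argument: path to the local clone of <repo>.wiki.git
			Path wikiClone = Paths.get(args[0]);

			try (DirectoryStream<Path> pages = Files.newDirectoryStream(wikiClone, "*.md")) {
				for (Path page : pages) {
					String content = new String(Files.readAllBytes(page), StandardCharsets.UTF_8);

					// Bold: *word* -> **word** (non-greedy, so it does not span several phrases)
					content = content.replaceAll("\\*(.*?)\\*", "**$1**");
					// Italic: _word_ -> *word*
					content = content.replaceAll("_(.*?)_", "*$1*");
					// Links: [http://example.com some text] -> [[some text|http://example.com]]
					content = content.replaceAll("\\[([^\\s\\]]+) (.*?)\\]", "[[$2|$1]]");

					Files.write(page, content.getBytes(StandardCharsets.UTF_8));
				}
			}
		}
	}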

Note that these steps were sufficient for the particular repo I moved over. Thus, this list is definitely not complete. You might find that more transformations are necessary in your case. That’s why step 8 is extra important!

Please comment on any rules I left out.

Maven: Create a simple build number

It’s always a good idea to have a means of integrating a build number into your application. It makes it easy to find out which exact version is deployed on your stages, or which version your users are running when it comes to third-level support.

The task of creating unique build numbers can easily be automated using Maven.

A build number can contain all kinds of information, like the application version, a sequential number, the SCM revision or a timestamp.

From my point of view, using a timestamp when creating a unique build number is essential: it is unique per se (it is unlikely that the same system is built twice in one second) and it is generated by Maven without depending on any plugins. In addition, it is easy to understand, like “Did operations really deploy the new version of the application yet? No, the build number still shows that it’s the one from last week!”.

So let’s get to business: how do you generate a build number within a JAR, and how do you read it? Does it also work with WARs?

JAR

As mentioned before, Maven provides a built-in mechanism for creating a timestamp. You can just customize the formatting and use it for your build number within your pom.xml like so:

	<properties>
		<maven.build.timestamp.format>yyyyMMddHHmmss</maven.build.timestamp.format>
		<buildNumber>v.${project.version} build ${maven.build.timestamp}</buildNumber>
	</properties>

Please note that using the property maven.build.timestamp directly within your filtered file is not possible due to this bug.

The build number must then be made accessible to your application. There are at least two solutions at hand: write the build number to your manifest or create a properties file.

Manifest

pom.xml

	<build>
		<!-- Add build number to manifest -->
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-jar-plugin</artifactId>
				<configuration>
					<archive>
						<manifestEntries>
							<build>${buildNumber}</build>
						</manifestEntries>
						<manifest>
							<mainClass>info.schnatterer.Main</mainClass>
						</manifest>
					</archive>
				</configuration>
			</plugin>
		</plugins>
	</build>

And that’s how you can access it from your application:

	private String getBuildNumberFromManifest() throws IOException {
		InputStream manifestStream = getClass().getClassLoader()
				.getResourceAsStream("META-INF/MANIFEST.MF");
		if (manifestStream != null) {
			Manifest manifest = new Manifest(manifestStream);
			Attributes attributes = manifest.getMainAttributes();
			return attributes.getValue("build");
		}
		return null;
	}

Properties

Alternatively, use a properties file:

build.properties

buildNumber=${buildNumber}

pom.xml

	<build>
		<resources>
			<resource>
				<!-- Filter for build number -->
				<directory>src/main/resources</directory>
				<filtering>true</filtering>
				<includes>
					<include>build.properties</include>
				</includes>
			</resource>
		</resources>
	</build>

And that’s how you can access it from your application:

	private String getBuildNumberFromProps() throws IOException {
		String propertiesName = "/build.properties";

		InputStream propertiesStream = Main.class
				.getResourceAsStream(propertiesName);
		if (propertiesStream != null) {
			Properties props = new Properties();
			props.load(propertiesStream);

			return props.getProperty("buildNumber");
		}
		return null;
	}

The output looks something like this:

v.0.0.1-SNAPSHOT build 20140225211328

Note: You might want to check the returned value for null or have the methods return empty Strings in order to avoid NullPointerExceptions or the word null showing up on your GUI. A small null-safe wrapper is sketched below.
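
For example, a tiny null-safe wrapper around the methods shown above (just a sketch) could look like this:

	/** Returns the build number, or an empty String if it cannot be determined. */
	private String getBuildNumberSafely() {
		try {
			String buildNumber = getBuildNumberFromProps();
			return buildNumber != null ? buildNumber : "";
		} catch (IOException e) {
			// Don't let a missing build number break the application
			return "";
		}
	}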

WAR

If you’re building a web application, you should also be able to use the manifest or the properties file as described above. Your Maven setup might be a bit different, though (see Maven War plugin – WAR Manifest Customization).

You have another alternative that makes creating the build number even easier: You can “stamp” the build number directly into interpreted documents like (X)HTML or js files.

For example when using JSF, you could write the build number directly into the footer template.

Add this to your page:

<h:outputText styleClass="version" value="${buildNumber}"/>

And this to your pom.xml

	<properties>
		<maven.build.timestamp.format>yyyy-MM-dd HH:mm:ss</maven.build.timestamp.format>
		<buildNumber>v.${project.version} build ${maven.build.timestamp}</buildNumber>
	</properties>

	<build>
		<plugins>
			<plugin>
				<groupId>org.apache.maven.plugins</groupId>
				<artifactId>maven-war-plugin</artifactId>
				<version>2.3</version>
				<configuration>
					<!-- Filter for build number -->
					<webResources>
						<resource>
							<directory>src/main/webapp/WEB-INF/templates</directory>
							<targetPath>WEB-INF/templates</targetPath>
							<filtering>true</filtering>
							<includes>
								<include>yourFooterTemplate.xhtml</include>
							</includes>
						</resource>
					</webResources>
				</configuration>
			</plugin>
		</plugins>
	</build>

And that’s it. No parsing necessary in code, as Maven will insert the build number into your web page directly.

Further Reading

If you’re planning on creating a more complex build number (SCM, sequential numbers, etc.), I suggest you have a look at the Build Number Maven Plugin.

Synology: Backup and restore encrypted folders

This post quickly introduces encrypted folders and backing them up on a Synology NAS. It focuses on how to restore those backups, as this is not straightforward.

Encrypting shared folders

Creating an encrypted folder on a Synology NAS can be done easily, as described in detail by Synology here. Note: Don’t store your password on the NAS (the Mount automatically on startup option), because this will render the encryption useless! You don’t stick your computer’s password on its display either, do you?

Versioned backups – only for unencrypted folders!

Synology offers the Time Machine package that can be used to keep different versions of stored files. However, this still does not work with encrypted folders (as of version 1.2-2300). Why, Synology, why?

A simple backup solution for encrypted folders

Fortunately, there is an alternative that provides at least rudimentary ways for backing up data: The Backup and Restore package.

Backup and Restore icon

Backup and Restore

Once the encrypted folder is mounted, Backup and Restore can be used to create a local backup of all or some of the folders contained within the encrypted folder. Backups can be created, for example, on a daily, weekly or monthly basis. Unfortunately, it’s not possible to keep several versions of a backup – that’s what Time Machine would be for 😦

Note: The Maximum number of kept versions only relates to the NAS configuration, not to the data!

Restoring encrypted backups – almost impossible?

By now, our data gets backed up regularly by the NAS. But how do we restore a file in case of emergency? There is the Restore tab within the Backup and Restore package, but for encrypted backups you can only use it to restore all or nothing. You can’t even choose where to restore the data to. That is, if you want to restore a single file, your only option is to overwrite all of your productive data. In other words: this is useless.

Restoring encrypted backups – a comfortable workaround

There is a workaround, however, that will make the backups accessible like any other shared folder:

  1. On the web interface of your NAS: create a new encrypted shared folder, using the same password as for the encrypted folder that is being backed up. Let’s call it myBackup.
    Use the Read only permission within the Privileges setup in order to protect your backups.
  2. Unmount the new folder
  3. SSH to your NAS
  4. Delete the container you just created. For example:
    rm -r /volume1/@myBackup@
    
  5. Create a link to the backup that is named just like the container. For example:
    ln -s /volume1/backup/folder/@folder@ /volume1/@myBackup@
    

    Where backup is the shared folder the backup was written to by the Backup and Restore package, and folder is the name of the folder within backup that was set up in Backup and Restore.

  6. Go back to the web interface and mount the folder using the password of the encrypted container.

That’s it. You can now access the backup like any other folder (SMB/CIFS, NFS, FTP, …)

Restoring encrypted backups – mount backup on separate system

As an alternative, you could also mount the encrypted folder on any other Linux system.

Synology uses eCryptfs to encrypt shared folders. Those can be mounted on a separate Linux system, as described here:

This is useful for remote backups, or in case your Synology DiskStation is damaged but the hard disks still work.