Synology: Backup and restore encrypted folders

This post quickly introduces encrypted folders and backing them up on a Synology NAS. It focuses on how to restore those backups, as this is not straightforward.

Encrypting shared folders

Creating an encrypted folder on a Synology NAS is easy, as described in detail by Synology here. Note: Don’t store your password on the NAS (the Mount automatically on startup option), because this renders the encryption useless! You wouldn’t stick your computer’s password to the display, would you?

Versioned backups – only for unencrypted folders!

Synology offers the Time Backup package, which can be used to keep different versions of stored files. However, this still does not work with encrypted folders (as of version 1.2-2300). Why, Synology, why?

A simple backup solution for encrypted folders

Fortunately, there is an alternative that provides at least rudimentary ways for backing up data: The Backup and Restore package.

Backup and Restore icon

Once the encrypted folder is mounted, Backup and Restore can be used to create a local backup of all or some of the folders contained within the encrypted folder. Backups can be created, for example, on a daily, weekly or monthly basis. Unfortunately, it’s not possible to keep several versions of a backup – that’s what Time Backup would be for 😦

Note: The Maximum number of kept versions only relates to the NAS configuration, not to the data!

Restoring encrypted backups – almost impossible?

By now, our data gets backed up regularly by the NAS. But how do we restore a file in case of emergency? There is a Restore tab within the Backup and Restore package. For encrypted backups, however, you can only use it to restore everything or nothing, and you can’t even choose where to restore the data to. That is, if you want to restore a single file, your only option is to overwrite all of your productive data. In other words: this is useless.

Restoring encrypted backups – a comfortable workaround

There is a workaround, however, that will make the backups accessible like any other shared folder:

  1. On the web interface of your NAS: Create a new encrypted shared folder, using the same password as for the encrypted folder that is being backed up. Let’s call it myBackup.
    Use the Read only permission within the Privileges setup in order to protect your backups.
  2. Unmount the new folder
  3. SSH to your NAS
  4. Delete the container you just created. For example:
    rm -r /volume1/@myBackup@
    
  5. Create a link to the backup that is named just like the container. For example:
    ln -s /volume1/backup/folder/@folder@ /volume1/@myBackup@
    

    Here, backup is the shared folder that the Backup and Restore package writes the backup to, and folder is the name of the folder within backup that was set up in Backup and Restore.

  6. Go back to the web interface and mount the folder using the password of the encrypted container.

That’s it. You can now access the backup like any other shared folder (SMB/CIFS, NFS, FTP, …).
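For example, to mount the read-only backup share from a Linux client via CIFS (a minimal sketch – host name, share name, user and mount point are placeholders; requires the cifs-utils package):

    sudo mkdir -p /mnt/myBackup
    # prompts for someUser's password; ro matches the read-only share permission
    sudo mount -t cifs //diskstation/myBackup /mnt/myBackup -o username=someUser,ro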

Restoring encrypted backups – mount backup on separate system

As an alternative, you could also mount the encrypted folder on any other Linux system.

Synology uses eCryptfs to encrypt shared folders. These can be mounted on a separate Linux system, as described here.

This is useful for remote backups, or in case your Synology DiskStation is damaged but the hard disks still work.
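For illustration, a minimal sketch of such a mount on a Debian/Ubuntu box (paths are placeholders; when prompted for the eCryptfs options, Synology encrypted folders reportedly use AES with 32-byte keys and filename encryption enabled – verify this for your DSM version):

    # install the eCryptfs userspace tools
    sudo apt-get install ecryptfs-utils
    sudo mkdir -p /mnt/decrypted
    # mount.ecryptfs interactively asks for the passphrase, cipher, key bytes, etc.
    sudo mount -t ecryptfs /mnt/disk/@myFolder@ /mnt/decrypted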


Automatically downloading/backing up/dumping/exporting databases from remote hosts via the web

The problem

You operate a database-backed website (e.g. Drupal) on a host where you can’t use cron jobs, CGI, Perl or outgoing connections. So one idea for backing up your database on a regular basis (which is always a good idea) is to download SQL dumps via a web-based administration tool (such as the Backup and Migrate module for Drupal). Unfortunately, these kinds of downloads cannot simply be automated on the shell using curl or wget, because they require a bit of JavaScript, for example to outsmart the PHP timeout.

The solution

Use a headless browser (that is, a browser without graphical user interface) to automate the task. It fetches the desired page, logs in, (virtually) clicks the download button and downloads the dump file.

It should be a command line tool, in order to run it as a cron job from some server (e.g. a NAS).

Personally, I liked the idea of PhantomJS, but it was not available for my Synology DS213+’s PowerPC CPU, and I didn’t like the idea of building it from source.

So my plan B was to write a small Java program (remoteDbDumper) that uses the HtmlUnit framework (our headless browser).

How to use

  1. Install the Drupal module Backup and Migrate.
  2. Download and extract remoteDbDumper.
  3. Start it from the shell.
    remoteDbDumper -u <username> -p <password> -o <output dir> <url to backup and migrate>

    Note that output dir must be an existing directory

    1. Linux example:
      ./remoteDbDumper.sh -u user -p topsecret -o ~/backup https://ho.st/drupal/?q=admin/config/system/backup_migrate
      
    2. Windows example
      remoteDbDumper.bat -u user -p topsecret -o "%HOMEPATH%\backup" https://ho.st/drupal/?q=admin/config/system/backup_migrate
      
  4. Use the scheduling mechanism of your choice to call remoteDbDumper regularly, creating backups.
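On a plain Linux server, for example, a user crontab entry (crontab -e) could look like this (a sketch – installation path, credentials, output directory and schedule are placeholders):

    # dump the Drupal database every Sunday at 3 a.m.
    0 3 * * 0 /opt/remoteDbDumper-1.0/remoteDbDumper.sh -u user -p topsecret -o /backup https://ho.st/drupal/?q=admin/config/system/backup_migrate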

Example (Synology)

A short example scenario of how to use remoteDbDumper on a Synology DiskStation (running DSM 4.2) to regularly back up a Drupal database.

  1. (if Java is not installed) install Java:
    If available for your Diskstation, use the Java Manager package. Otherwise, you could use a third party Java package (that’s what I had to do).
  2. Download, extract and copy remoteDbDumper to the NAS (e.g. to \\diskstation\public\, which corresponds to /volume1/public/)
  3. SSH to the NAS and check if it works
    /volume1/public/remoteDbDumper-1.0/remoteDbDumper.sh -u user -p topsecret -o /volume1/someUser/ https://ho.st/drupal/?q=admin/config/system/backup_migrate
    
  4. (optional) Wrap the command line call in a shell script, e.g.
    #!/bin/sh
    BASEDIR=$(dirname "$0")   # directory this script resides in
    # First argument ($1): output directory for the SQL dumps
    "$BASEDIR"/remoteDbDumper-1.0/remoteDbDumper.sh -u user -p topsecret -o "$1" https://ho.st/drupal/?q=admin/config/system/backup_migrate
    
  5. Either use the web frontend or crontab to schedule the backup.
    1. Web frontend:
      Go to http://diskstation:5000 (or whatever combination of host name and port you’re using),
      log in as admin,
      click on Control Panel | Task Scheduler.
      Then click on Create | User-defined Script.
      Enter a task name, choose a user (preferably not root) and set up a schedule (e.g. every Sunday at 8 p.m.).
      Finally, enter the path to remoteDbDumper or to the wrapper script from step 4. For the example above, the call would look like this:

      /volume1/public/dumpDb.sh /volume1/public/
      
    2. If you insist on doing it by hand, here’s what to enter in the crontab:
      vi /etc/crontab
      
      #minute hour    mday    month   wday    who              command
      0       20      *       *       0       enterUserHere    /volume1/public/dumpDb.sh /volume1/public/
      
    3. Set a marker in your calendar for the next scheduled run, to check whether it worked.

Future tasks

In its current state, remoteDbDumper can only back up Drupal databases. Fair enough.

However, with just a little more effort it would be possible to extend remoteDbDumper to support additional web-based database administration tools, such as MySQLDumper, phpMyBackupPro, phpMyAdmin or phpPgAdmin.

In order to do so, just fork the repo on GitHub and implement the DbDump interface.

NAS: Downgrading DSM (DS213+)

As mentioned before, Synology’s update from DiskStation Manager (DSM) 4.1-2657 to DSM 4.1-2668 made the issue reappear where files in encrypted shared folders were treated case-sensitively. As this rendered the NAS almost useless to me, my best option was to downgrade DSM.

Officially, downgrading is not possible at all.

The DSM version is not downgradeable

As (almost) always, the Internet provides a couple of workarounds. However, after skimming through a few of them, downgrading seemed pretty risky to me, considering the danger of damaging my device or even losing all my data.
But I didn’t give up and finally found a really comfortable solution here:

  1. Change the firmware version stated in the file /etc.defaults/VERSION to a number lower than the firmware version you want to install.
  2. Use the web management GUI to install the desired firmware.

Here’s a more detailed description. Please note that you downgrade at your own risk, as it is not officially supported!

Three Steps to downgrade your DSM

  1. Download the desired firmware from Synology.
  2. SSH to your DSM and run
     vi /etc.defaults/VERSION
    

    then change the buildnumber to a lower version than the one you’re trying to install (a sed one-liner alternative is sketched after this list).
    For example: I changed from

    buildnumber="2668"
    

    to

    buildnumber="2650"
    

    because I wanted to downgrade from 2668 to 2657.
    If this step is skipped, DSM will refuse to install the “new” (that is, old) firmware.

    Unable to perform DSM update because this DSM is an older version.

  3. Log on to your DSM’s web interface and click Control Panel | DSM Update | Manual DSM Update. Then choose the firmware and downgrade your DSM.

(4. File any bug that made you downgrade at Synology.) 🙂
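By the way, the edit from step 2 can also be done as a one-liner instead of using vi (a sketch – adjust the build numbers to your case; it assumes your DSM’s BusyBox sed supports -i, so check the file afterwards):

    sed -i 's/buildnumber="2668"/buildnumber="2650"/' /etc.defaults/VERSION
    grep buildnumber /etc.defaults/VERSION   # verify the change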

Microsoft Robocopy vs Linux NAS: Robocopy Pitfalls

Intro

I have been using Microsoft Robocopy for years, as it is an easy (by now even a built-in) way to do incremental backups under Windows.

Until recently, I used to back up my data from an internal NTFS hard drive to an external NTFS hard drive. As of late, I’m the proud owner of a DS213+ NAS, which runs a Linux-based OS and Ext4 hard drives.

As I’m still looking for the perfect backup/versioning client (rsync on Windows?!), I decided to stick with Robocopy in the meantime. Unfortunately, my backup scripts, which have done a splendid job of incrementally backing up my data to an external hard drive for years, now do a full backup to the NAS every time.

As it turns out, there is not only one reason for this behavior, but two:

  1. Timestamp
  2. File size

Here are the solutions to these issues (at least the ones that worked for me), as well as some additional hints on using Robocopy.

1. Timestamp

At first, Robocopy kept reporting NEWER or OLDER for each file (even though the file hadn’t changed), resulting in the file being copied instead of skipped.

Solution:

First, make sure that both the NAS and the client PC have the same system time (use an NTP server, for example).

If the problem still persists, a good solution is to make Robocopy use FAT file times (/FFT).

This results in a two-second granularity, that is, a file is only declared NEWER or OLDER when there is a difference of more than two seconds between the source and the destination file. If this option is not set, a much finer granularity is used (NTFS time stamps have a resolution of 100 nanoseconds). Obviously, Samba’s time granularity is not as precise, and therefore the time stamps hardly ever match.

2. File size

If your incremental backup works by now, skip the rest of the article.

As for me, after solving the above problem, the incremental backups still didn’t work.

Robocopy kept telling me CHANGED for most files. Out of the frying pan into the fire!

What does CHANGED mean? The answer can be found here:

The source and destination files have identical time stamps but
different file sizes. The file is copied; to skip this file, use /XC.

Skip all files with different sizes? No, that’s a dangerous idea when backing up data. So what now?

But why do they have different sizes at all? Here is some file on the client PC:

SomeFile on the Client PC

And that’s the same file after transferring to the NAS:

SomeFile on the NAS

The attentive observer might have noticed that the size on disk is different.

The reason for this can be found in the different block sizes used by the NAS and the client. This puzzled me at first, because I had set up both NTFS and Ext4 with a block size of 4K.

However, the Samba server has a default block size of 1K! So setting up Samba with an explicit block size that matches the one of your client PC’s file system solves this issue.

How?

SSH to your (Synology) NAS.

 vi /usr/syno/etc/smb.conf

Press i.

Enter this line below the [global] tag (use the block size of your host’s file system, e.g. 4K = 4×1024 = 4096):

         block size = 4096

Press ESC, then enter :wq and press ENTER.

Restart the Samba server:

/usr/syno/etc/rc.d/S80samba.sh restart
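If testparm happens to be available on your box, you can check that the new setting is picked up (a sketch – the tool may not be present on every DSM version):

testparm -s /usr/syno/etc/smb.conf | grep -i "block size"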

That solved my problems and I can now do incremental backups again.
Until I finally set up the perfect rsync-for-Windows solution 🙂

Alternative solution for 1. and 2.

There is, however, an alternative to the solutions for 1. and 2.:

Use the archive bit. Each file has an archive bit, which is set every time the file is changed. Robocopy can take advantage of this: the /M switch makes Robocopy copy only files whose archive bit is set (that is, only files that changed since the last backup) and reset the bit on the source afterwards; all other files are skipped. No need to care about nasty time stamps or stupid file sizes.

There is one drawback, however: if you want to make a full backup, or if you back up your data to several devices, you must not use the /M switch, or your backups will be incomplete.
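For illustration, an archive-bit-based incremental call could look like this (a sketch with placeholder paths, not the exact script I use):

robocopy "D:\Data" "\\diskstation\backup\Data" /M /E /Z /R:10 /W:10 /LOG:"backup.log"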

Additional Hints

While I’m at it, here are the Robocopy options I use. See Robocopy | SS64.com for the complete reference.

robocopy "<source>" "<dest>" /MIR /V /NP /TEE /LOG:"%~f0.log" /Z /R:10 /W:10 /FFT /DCOPY:T
  • /MIR – MIRror a directory tree, that is, copy all subfolders and purge extra files on the destination
  • /V – Verbose output log, showing skipped files
  • /NP – No Progress. Don’t flood log file with % copied for each file.
  • /TEE – Output to console window, as well as the log file.
  • /LOG:"%~f0.log" – Output status to the file <name-of-batch-file>.log, overwriting an existing one.
  • /Z – Copy files in restartable mode (survive network glitch)
  • /R:10 – Number of Retries on failed copies – default is 1 million. Reducing the number is helpful when trying to copy locked files.
  • /W:10 – Wait time between retries – default is 30 seconds. Reducing the number is helpful when trying to copy locked files.
  • /FFT – Assume FAT File Times, 2-second date/time granularity. See above.
  • /DCOPY:T – Copy Directory Timestamps. Why is this not default? You definitely want to keep your directory timestamps!

NAS: DS213+ & WD20NPVT – 3. Performance and Encryption

As announced in the first and second posts about the Synology DS213+ and the Western Digital WD20NPVT, this post is about the effective data rates achieved by the NAS and the hard drives. It contains the data rates measured when reading files from the DS213+ (download) as well as those measured when writing to it (upload), for both unencrypted and encrypted folders on the NAS. For the measurement, both one large file (1 x 50GB) and many small files (100,000 x 10KB) were transferred to/from the NAS.

Measured Values

The following tables compare the measured data rates to the ones published by Synology.

Large file

Note that for the measurement in this post a 50GB file was used, whereas Synology transferred a 5GB file, which should not make much of a difference.

Operation            | Data rate (measured) | Data rate (Synology)
Upload               | 51.87 MB/s           | 84.31 MB/s
Upload (encrypted)   | 21.32 MB/s           | 24.65 MB/s
Download             | 40.89 MB/s           | 110.36 MB/s
Download (encrypted) | 37.21 MB/s           | 49.58 MB/s
Client (internal)    | 111.55 MB/s          | –

Small files

Note that for the measurement in this post 100,000 10KB files were used, whereas Synology transferred 1,000 5MB files. So the rates cannot really be compared, as transferring more, smaller files results in more overhead and therefore a lower transfer rate.

Still, it is remarkable that Synology only measured the performance of transferring small files to unencrypted folders. Maybe the data rates measured for encrypted folders didn’t look too good?

Operation            | Data rate (measured) | Data rate (Synology)
Upload               | 0.44 MB/s            | 43.82 MB/s
Upload (encrypted)   | 0.05 MB/s            | –
Download             | 0.75 MB/s            | 58.15 MB/s
Download (encrypted) | 0.49 MB/s            | –
Client (internal)    | 4.52 MB/s            | –

Measurement

All data rates have been measured from the same client PC using Microsoft Robocopy, connecting to the NAS via SMB protocol.

The Client and the NAS are connected via a Linksys SE2800 switch, using Gigabit Ethernet.

The following table lists the NAS details as well as the client PC used for the measurements in this post. In addition, the details of the client PC used by Synology are listed.

        | Synology DS213+                           | Client PC                    | Client PC (Synology)
OS      | DSM 4.1-2657                              | Windows 8 x64                | Windows 7
CPU     | Freescale MPC8544E 2x 1.067GHz            | Intel T7250 2x 2.0GHz        | Intel Core i5 750 2.67GHz
RAM     | 512MB DDR3                                | 4GB DDRII (2x 2GB at 667MHz) | 4GB DDRIII
SSD/HDD | Western Digital Green WD20NPVT x2, RAID 1 | Samsung 840 Pro (256GB)      | SVP200S3 (60GB) SSD x2, RAID 0

Conclusion / Differences

There obviously are differences between the values measured here and the ones published by Synology. What are the reasons for this?

For the small files, the main reason for the difference surely is the smaller size of the files copied, as mentioned above. Why did I choose a smaller size and a bigger number of files? It was not my main objective to compare the values to the ones measured by Synology; rather, I was interested in the rate at which small files are actually copied. For me, small files are less than 1MB. Have you ever tried to copy a directory with a large quantity of small files (several KB each), such as an Eclipse workspace or an SVN repo? It takes ages. I never thought, though, that they are copied at a data rate of less than a megabyte per second.

For the large file, I presume the difference between the measured values and Synology’s can be explained by the differences in the measurement setup. Synology used a faster CPU, faster RAM, a RAID 0 and a direct connection between client PC and NAS.

Moreover, I don’t know what software and protocol Synology used for the transfer. Maybe they used FTP, which might perform better than SMB. In addition, FTP might be even faster for small files, because they can be transferred over several concurrent connections instead of sequentially, as Robocopy does.

Anyway, Synology’s download rate of 110 MB/s somehow still is a miracle to me, as it is almost as fast as writing to my local SSD with Robocopy…

Finally, I must say it is astonishing that uploading (that is, writing) a large file is faster than downloading it (for unencrypted folders). I repeated the measurement of all four large-file operations several times, but got nearly the same results every time (± 1 MB/s). This seems to have something to do with Robocopy or SMB, because downloading the exact same file via FTP (FileZilla) yields a data rate of about 65 MB/s.

Maybe I should write another post comparing FTP and SMB, when I have time 🙂

NAS: DS213+ & WD20NPVT – 2. Power Consumption

As announced in the first post about the Synology DS213+ and the Western Digital WD20NPVT, here is the measured power consumption of the NAS and the two hard disks in different operational modes.

Measured values

The following table compares the measured values to those given in the DS213+’s specification.

Operational mode     | Mean power consumption (measured) | Power consumption (specification)
Off                  | ~1 W                              | –
System hibernation   | 2.9 W                             | 2.64 W
On – HDD hibernation | 9.6 W                             | 10.08 W
On – idle            | 12.56 W                           | –
On – download        | 13.94 W                           | 22.20 W (“access”)
On – upload          | 15.47 W                           | 22.20 W (“access”)

The spec lists higher values for “access” and HDD hibernation, probably because Synology used 3.5″ HDDs. However, for system hibernation the specified value is lower than the measured one. Maybe that’s due to the measurement accuracy.

Note: The power consumed by the power supply when the device is off is below the effective measurement range of the measurement device. Therefore, the measured value is only approximate.

Measurement

For the measurements, an Energy Logger 4000 was used. It is not the most accurate device (5–3500 W: ±1% + 1 count; 2–5 W: ±5% + 1 count; < 2 W: ±15% + 1 count), especially in the lowest measurement range. Still, the values measured should provide an impression of the power consumption in the different operational modes of the NAS.

For off, system hibernation, HDD hibernation and idle the power consumption is an arithmetic mean over several hours.

The download and upload values were measured while reading/writing a 50GB file; they are mean values over the process of reading or writing, respectively. The data rates measured during this process will be published in the next post.

Surprisingly, there is no (measurable) difference in hibernation whether Wake On LAN (WOL) is enabled or not. That’s why there is only one system hibernation mode listed.

Conclusion

15.5 W at maximum is not bad for a device running two hard drives. Its idle consumption of about 12.6 W is still about twice the power consumption of other devices running 24/7 (like routers), though.

That’s where the system hibernation mode comes in handy. 3 W in hibernation – that’s about as much as the power supply of an old desk lamp consumes when the light is off. If you use your NAS as private storage, or even as a web server, that’s a very good compromise. Of course, it would be even more economical to switch the NAS off completely when not in use – but that’s probably not what these devices are intended for.

Thanks to the WOL functionality, you can use a hibernating NAS almost as comfortably as if it ran all day: For usage at home, the NAS can be switched on by sending a WOL packet to the NAS’s MAC address from any PC (e.g. WOL for Windows) or mobile device (e.g. Wake On Lan for Android). Actually, I don’t have to do this very often, as my Windows Explorer seems to switch on the NAS as soon as it is started, because I have mounted some NAS folders as network drives.
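From a Linux box, sending such a packet is a one-liner (a sketch – it assumes the wakeonlan utility is installed; the MAC address is a placeholder):

wakeonlan 00:11:32:aa:bb:cc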

If you want to use the NAS as a web server, you can configure your router to send a WOL packet to the NAS when a request is received on a certain port, for example via HTTP or HTTPS. This switches on the device, which takes about 30 seconds; that is, the website is delivered some seconds later, once the NAS is awake. In private usage scenarios this should not be too much of a drawback, and it saves about 75% of the energy.

Doing so allows for having a NAS, or even a self-hosted web server/“personal cloud”, that consumes almost no energy when it is not in use. A good enough solution for my “green conscience”, at last.

By the way, a device consuming 3 W uses about 26 kWh per year, which is about 7€ (as of 2012, the average price for electricity in Germany was 0.26€ per kWh). In comparison: a device consuming 12.6 W uses about 110 kWh a year, which is about 29€.

NAS: DS213+ & WD20NPVT – 1. Conclusion

Motivation

I have been looking for a Network Attached Storage (NAS) device with sufficient performance but rather low power consumption.

As a NAS needs to run 24/7, power consumption is of particular importance. On the other hand, whenever the NAS is in active use, it can hardly be fast enough.

The crucial component for both the power consumption and the data rate is the processor.

The best compromise I could find in October 2012 was the DS213+. It features a Freescale dual-core CPU with 2x 1.067GHz, which should provide more performance than the single-core CPUs used in most other NAS devices in the medium price range, while consuming less power than the Intel Atom dual cores used in NAS devices in higher price ranges.

As storage devices, I decided to purchase two Western Digital Scorpio Green (WD20NPVT) drives – 2.5″ drives that seem to be designed exactly for this use case: low power consumption, but still enough space (2 TB). From an economic point of view, it would probably have made more sense to purchase a 3.5″ drive (such as the Western Digital Red (WD20EFRX)), which has a higher power consumption (4.4 W compared to 1.4 W) but is cheaper (about 65 Euros in Germany, as of January 2013).

Still, I thought of it as a kind of statement that we (the consumers) are interested in energy-efficient devices, and not only in as many GB per quid as possible.

Or maybe I’m just an idealist 🙂

Structure

So, after having used the NAS for over two months now, it’s time for a little review. Just for a change, I’m going to start with the conclusion: this first post (the one you’re reading at the moment) contains the benefits and drawbacks of the device – what I like about my DS213+ and what problems I encountered.

In addition, I measured the power consumption of the DS213+ and the two WD20NPVTs, which I will publish in a second post.

I also measured the data rates and encryption performance of the DS213+, which will be published in further upcoming posts.

Benefits

Synology’s operating system, DiskStation Manager (DSM), which ships with the DS213+, provides a lot of features. In this post, I’m only going to mention the ones most important to me. For more details, see Synology.

The device can be set up via an Ajax-driven web interface – in fact, one of the best web interfaces I have seen recently. Synology provides a demo here. As an alternative, you can configure it via SSH. Synology also includes a package system that allows you to extend DSM with different packages to be used in your LAN (such as web interfaces to the stored files, photographs, music, movies, etc.), but also with tools intended to be used on the Internet (like Drupal, WordPress, etc.). In addition, Synology provides several free mobile apps for Android, iOS and Windows Phone that offer those features through interfaces optimized for mobile devices.

Another feature that is important to me is encryption. You can set up different folders on the hard drive that are encrypted with different keys and can be accessed by different users. As per the DS213+’s spec, the encryption is done in a dedicated hardware module, so the NAS performs well even when encrypting. At least better than ordinary TrueCrypt on my PC 😉

In addition, you can encrypt all communication via HTTPS.

Another neat feature is that images stored on the device are not only indexed so they are quickly accessible via DLNA, but the device also creates thumbnails. This makes viewing images, for example on mobile devices via WiFi, very smooth – it almost feels like viewing local pictures on my phone. However, it takes what feels like ages to create the thumbnails. More precisely, it took about three days for the 30k images on my NAS. That’s just a one-time expense, though.

As low power consumption was one of my main objectives, I very much appreciate the hibernation mode offered by the DS213+. The disks are spun down after a configurable time period. In addition, you can set up the NAS to hibernate the whole system 60 seconds after the disks are down. For this hibernation mode, you can configure whether the system can be switched on again via the network – Wake On LAN (WOL). This results in slightly higher power consumption but is a lot more comfortable. Of course, it would be even more comfortable to have the NAS running 24/7, but at the cost of higher power consumption. As mentioned, I’m going to publish the actual power consumption I measured in the next post.

Drawbacks

Enough words of praise. I have some issues with DS213+.

Most of them seem to be connected with the encryption functionality. The DS offers versioning functionality, which is one of the features I am particularly interested in as part of my backup strategy, and it can be used via a comfortable web interface. The catch: the feature cannot be used with encrypted folders, while all my important folders are encrypted. That is, I can’t use this feature at all. The same applies to the picture web interface: even though you can view pictures stored in encrypted folders via DLNA, they cannot be found via the web interface or the picture app. However, encrypted music can be played via the web interface. Not really consistent behaviour, is it?

In addition, I wasted almost a whole precious day off trying to figure out why I could not access encrypted sub-folders via SMB. After trying about every possible configuration of the DS’s Samba server, I found out that it was a bug in DSM relating to case-sensitive file names. Fortunately, it had been fixed just a couple of days before (version 4.1-2647). So I upgraded to the next DSM version and the problem was gone. The good news is that Synology keeps improving the DSM software and provides new versions to customers for free. Still, it seems as if DSM is a bit of “banana software” (it “ripens” at the customer) – at least where encryption is concerned.
 
<UPDATE >
22 January, 2013: After updating to DSM 4.1-2668, the bug reappeared! I filed a bug with Synology and received the following answer one day later:


The developer confirmed that this is a known issue that our developer is currently working on. The issue will be improved in our future official update in the future.

Let’s hope so! If you haven’t updated yet, better wait for the next version.
</UPDATE>
 
Another issue that seems to occur every now and then with Synology devices is hibernation – see for example here and there. I’m experiencing unexpected behavior myself: every now and then I’m surprised that the device is not in hibernation, even though there should be no reason for it not to be. For example, I was wondering why it was switched on every now and then when I left for work early in the morning. On the other hand, it seems to ignore some of the WOL packets sent from my PC or my phone. I started debugging, but haven’t quite figured out why it behaves like this.

There’s one more thing (though less important): the DS sparkles like a Christmas tree. There are five LEDs in different sizes and colours, twinkling at different frequencies. Unfortunately, they cannot be deactivated via the web interface. Some of them can be switched off via the command line, but the device keeps switching them back on. I’m still looking for a solution to permanently switch them off.

Conclusion

Despite the issues pointed out above, I don’t regret buying the DS213+. It meets most of my expectations, but still, some things could be more elaborate, especially when it comes to encryption. There are also a lot of features that I’m not using yet but that might be of good use in the future (like rsync).

So, wrapping things up, the DS213+ is a really good NAS device with lots of features and rather low energy consumption. If you’re interested in using the encryption features, you might want to have a look at different devices – I can’t say, however, whether there are better ones.