Songbird/Nightingale: Improving search performance

Only just recently, I complained about an everlasting performance problem.

Shortly after writing this, I stumbled upon this nice tweak (thanks michaelvandeborne!):

  1. Click on File | New Tab
  2. Enter about:config, then promise that you’ll be careful.
  3. Enter songbird.dbengine.cacheSize
  4. Increase the value. Start with 5000.
    You might also try increasing or lowering it a little and see whether performance improves any further.
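
Alternatively, since Songbird/Nightingale is XULRunner-based, the same preference can presumably also be set by adding a line to the profile’s prefs.js while the player is closed. Treat this as an untested sketch – the about:config way above is the safe route:

     // in the profile's prefs.js, with Songbird/Nightingale closed
     user_pref("songbird.dbengine.cacheSize", 5000);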

 

NAS: Downgrading DSM (DS213+)

As mentioned before, Synology’s update from DiskStation Manager 4.1-2657 to DSM 4.1-2668 made the issue of case-sensitive file names in encrypted shared folders reappear. As this rendered the NAS almost useless to me, my best option was to downgrade DSM.

Officially, downgrading is not possible at all.

The DSM version is not downgradeable

As (almost) always, the Internet provides a couple of workarounds. However, after skimming through a few of them, downgrading seemed pretty risky to me, considering the possibility of damaging the device or even losing all my data.
But I didn’t give up and finally found a really comfortable solution here:

  1. change the Firmware version stated in the file “/etc.defaults/VERSION” to a number lower than the FW version you want to install
  2. Use the Web page management GUI to install the desired Firmware.

Here’s a slightly more detailed description. Please note that you downgrade at your own risk, as it is not officially supported!

Three Steps to downgrade your DSM

  1. Download the desired firmware from Synology.
  2. SSH to your DSM and run
     vi /etc.defaults/VERSION
    

    then change the buildnumber/version to a version lower than the one you’re trying to install (a sed one-liner for this step is sketched at the end of this post).
    For example: I changed from

    buildnumber="2668"
    

    to

    buildnumber="2650"
    

    because I wanted to downgrade from 2668 to 2657.
    If this step is skipped, DSM will refuse to install the “new” (that is, old) firmware:

    Unable to perform DSM update because this DSM is an older version.

  3. Log on to your DSM’s web interface and click Control Panel | DSM Update | Manual DSM Update. Then choose the firmware and downgrade your DSM.

(4. File any bug that made you downgrade at Synology.) 🙂
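
If you prefer a one-liner over vi for step 2, something like the following should work. Consider it an untested sketch (DSM ships a BusyBox userland, so options may differ slightly) and keep a backup of the file in any case:

     cp /etc.defaults/VERSION /etc.defaults/VERSION.bak
     sed -i 's/buildnumber="2668"/buildnumber="2650"/' /etc.defaults/VERSION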

Songbird/Nightingale: Exporting playlists

The playlist problem

As mentioned in my previous post I have been using Songbird/Nightingale for quite some time, in spite of the drawback mentioned in the post.

No matter whether using Songbird or Nightingale, one of my main problems remained the same: the playlists are trapped somewhere inside the library, with no way to export them as playlist files. Absolutely no way? That’s not the whole truth, as there are (or were) addons like Playlist Export Tool, Export My Playlists or FolderSync. Thanks to the developers, by the way – those addons were really useful to me!

Unfortunately, with every new Songbird release, all addons stopped working. In other words: whenever I made the mistake of updating, I wasn’t able to export playlists anymore. I actually don’t even know if there are any addons left that are compatible with the most recent version of Songbird.

The playlist solution

One more good thing about Songbird (and Nightingale as well) is that it uses an SQLite database. This allows accessing the Songbird database from a variety of programming languages without getting your hands dirty, and it makes way for a “third-party” tool that can export playlists from the Songbird database without depending on the Songbird version. I developed such an exporter in Java and have been using it for some time to make my Songbird playlists available on my NAS.
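
If you want to take a look at the database yourself, the sqlite3 command line client is enough to get started. The path below is only a placeholder – the actual location and file name depend on your profile, so look for the .db files inside your Songbird2 profile folder:

     sqlite3 "<path to your Songbird library>.db" ".tables"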

As I thought this exporter might be useful to others, I refactored the quick-and-dirty source code and published it on GitHub. So now, I’m proud to present songbirdDbTools, a Java-based playlist exporter for Songbird/Nightingale that was just released in its very first version. Hopefully, it will be of use to somebody else who was missing this functionality as much as I did 🙂

 

The name is a bit of an exaggeration at this point, as the tool only provides the export functionality. However, I put some effort into designing songbirdDbTools to be as extensible as possible, and I have a couple of things in mind that would be useful.
For example, synchronizing playlists: exporting not only the playlist but copying the member files as well. This might come in handy for effectively synchronizing files to mobile devices.
Or finding zombies and ghosts (like the Exorcist used to do, three years ago). Another neat feature might be finding out all the playlists a file belongs to.

If only I had more time!

So, just in case you’re interested in contributing: Fork songbirdDbTools on GitHub!

Songbird/Nightingale: Using Songbird database in Nightingale

Songbird vs Nightingale

I’ve been using Songbird ever since it was a promising, upcoming, cross-platform open source media player. Back then, I even had it running on a parallel installation of Windows and Fedora, both using (physically) the same library 🙂

Since then, they seem to have cut support for Linux 😦 and POTI Inc. (the company behind Songbird) seems to be focusing on mobile/web, losing more and more interest in the good old desktop version. At least, that’s what springs to mind when searching the Songbird web page for the desktop version.

getsongbird.com – where is the download link for the desktop version?

In addition, there’s this everlasting performance problem, which seems to be inevitable as soon as your library reaches the magic 10k song limit.

Still, I like Songbird’s functionality, its open-source nature and the addon system. That’s why I never got comfortable with iTunes, Amarok or the like.
Only just recently, I came across a Songbird fork that looks pretty promising: Nightingale. It supports Linux and there still seems to be some development going on.

getnightingale.com – no need to search for the download link

Trying Nightingale with your existing Songbird database, or even migrating to Nightingale, is fairly easy, as both the database and the addons seem to be compatible between the two players.

That’s what worked for me (on Windows):

  • Back up the Songbird folders, just in case (a scripted example follows below this list):
    • %HOMEDRIVE%\%HOMEPATH%\AppData\Local\Songbird2 and
    • %HOMEDRIVE%\%HOMEPATH%\AppData\Roaming\Songbird2.
  • Create the Nightingale folders as symlinks pointing to the Songbird folders (run from an elevated command prompt):
    • mklink /D %HOMEDRIVE%\%HOMEPATH%\AppData\Local\Nightingale %HOMEDRIVE%\%HOMEPATH%\AppData\Local\Songbird2
    • mklink /D %HOMEDRIVE%\%HOMEPATH%\AppData\Roaming\Nightingale %HOMEDRIVE%\%HOMEPATH%\AppData\Roaming\Songbird2

This should make your Songbird database available to both Nightingale and Songbird. I’d recommend not running both in parallel, though.
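
In case you’d rather script the backup step than copy the folders by hand, something along these lines should do (the destination path is just an example):

     robocopy "%HOMEDRIVE%%HOMEPATH%\AppData\Local\Songbird2" "D:\Backup\Songbird2\Local" /MIR
     robocopy "%HOMEDRIVE%%HOMEPATH%\AppData\Roaming\Songbird2" "D:\Backup\Songbird2\Roaming" /MIR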

Microsoft Robocopy vs Linux NAS: Robocopy Pitfalls

Intro

I have been using Microsoft Robocopy for years, as it is an easy (by now even a built-in) way to do incremental backups under Windows.

Until recently, I used to back up my data from an internal NTFS hard drive to an external NTFS hard drive. As of late, I’m the proud owner of a DS213+ NAS, which runs a Linux-based OS and Ext4-formatted hard drives.

As I’m still looking for the perfect backup/versioning client (rsync on Windows?!), I decided to stick with Robocopy in the meantime. Unfortunately, my backup scripts, which have done a splendid job of incrementally backing up my data to an external hard drive for years, now do a full backup to the NAS every time.

As it turns out, there is not only one reason for this behavior, but two:

  1. Timestamp
  2. File size

Here are the solutions that fixed these issues (at least for me), as well as some additional hints on using Robocopy.

1. Timestamp

At first, Robocopy kept telling me NEWER or OLDER for each file (even though the file had not changed), resulting in the file being copied instead of skipped.

Solution:

First, make sure that both the NAS and the client PC have the same system time (use an NTP server, for example).
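
On the Windows side, a quick way to force a sync with the configured time source is w32tm (run from an elevated prompt; the Windows Time service must be running). On the DSM side, NTP can be enabled in the time settings of the control panel.

     w32tm /resync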

If the problem still persists, a good solution is to make Robocopy use FAT file times (/FFT).

This results in a 2-second granularity, that is, a file is only declared NEWER or OLDER when there is a difference of more than two seconds between the source and the destination file. If this option is not set, NTFS’s much finer (100-nanosecond) granularity is used. Samba’s time stamps are not that precise, and therefore the time stamps hardly ever match.
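
A minimal example with placeholder paths (the full set of options I use is listed further down):

     robocopy "D:\Data" "\\diskstation\backup\Data" /MIR /FFT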

2. File size

If your incremental backup works by now, you can skip the rest of this article.

As for me, after solving the above problem, the incremental backups still didn’t work.

Robocopy kept telling me CHANGED for most files. Out of the frying pan into the fire!

What does CHANGED mean? The answer can be found here:

The source and destination files have identical time stamps but
different file sizes. The file is copied; to skip this file, use /XC.

Skipping all files with different sizes? No, that’s a dangerous idea when backing up data. So what now?

But why do they have different sizes at all? Here’s some file on the client PC:

SomeFile on the Client PC

And here’s the same file after transferring it to the NAS:

SomeFile on the NAS

The attentive observer might have recognized that the size on disk is different.

The reason for this can be found in the different block sizes used by the NAS and the client. At first I was puzzled, because I had set up both NTFS and Ext4 with a block size of 4K.

However, the Samba server has a default block size of 1K! So configuring Samba with an explicit block size that matches the one of your client PC’s file system solves this issue.

How?

SSH to your (Synology) NAS.

 vi /usr/syno/etc/smb.conf

Press i.

Enter this line below the [global] tag (use the block size of your host file system, e.g. 4K = 4×1024 = 4096; see the fsutil hint below if you are unsure):

         block size = 4096

Press ESC, then enter :wq and press ENTER.

Restart the Samba server by running

/usr/syno/etc/rc.d/S80samba.sh restart
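
By the way, if you are unsure about the block (cluster) size of your client’s NTFS volume, fsutil will tell you. Run it in an elevated command prompt and look for “Bytes Per Cluster” (the drive letter is just an example):

     fsutil fsinfo ntfsinfo C: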

That solved my problems and I can now do incremental backups again.
Until I finally set up the perfect rsync-for-Windows solution 🙂

Alternative solution for 1. and 2.

There is, however, an alternative to the solutions for 1. and 2.:

Use the archive bit. Each file has an archive bit, which is set every time the file is changed. Robocopy can make use of this: the /M switch makes Robocopy copy only files whose archive bit is set, reset the bit on each copied source file, and skip all files whose archive bit is not set. That is, it copies only files that have changed since the last backup. No need to care about nasty time stamps or file sizes.

There is one drawback, however: when you want to make a full backup, or when you back up your data to several devices, you must not use the /M switch, or your backups will be incomplete.
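
A minimal sketch of such an archive-bit-based run, again with placeholder paths (combine with the other options below as needed):

     robocopy "<source>" "<dest>" /E /M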

Additional Hints

While I’m at it, here are the Robocopy options I use. See Robocopy | SS64.com for the complete reference.

robocopy "<source>" "<dest>" /MIR /V /NP /TEE /LOG:"%~f0.log" /Z /R:10 /W:10 /FFT /DCOPY:T
  • /MIR – MIRror a directory tree, that is, copy all subfolders and purge extra files on the destination
  • /V – Verbose output log, showing skipped files
  • /NP – No Progress. Don’t flood log file with % copied for each file.
  • /TEE – Output to console window, as well as the log file.
  • /LOG:”%~f0.log” – Output status to file <name-of-batch-file>.log, overwrite existing one.
  • /Z – Copy files in restartable mode (survive network glitch)
  • /R:10 – Number of Retries on failed copies – default is 1 million. Reducing the number is helpful when trying to copy locked files.
  • /W:10 – Wait time between retries – default is 30 seconds. Reducing the number is helpful when trying to copy locked files.
  • /FFT – Assume FAT File Times, 2-second date/time granularity. See above.
  • /DCOPY:T – Copy Directory Timestamps. Why is this not default? You definitely want to keep your directory timestamps!

NAS: DS213+ & WD20NPVT – 3. Performance and Encryption

As announced in the first and second posts about the Synology DS213+ and the Western Digital WD20NPVT, this post is about the effective data rates achieved by the NAS and the hard drives. It contains the data rates measured when reading files from the DS213+ (download) as well as when writing to it (upload), for both unencrypted and encrypted folders on the NAS. For the measurement, both one large file (1 x 50GB) and many small files (100,000 x 10KB) were transferred to/from the NAS.

Measured Values

The following tables compare the measured data rates to the ones published by Synology.

Large file

Note that for the measurement in this post a 50GB file was used, whereas Synology transferred a 5GB file, which should not make much of a difference.

Operation              Data rate (measured)   Data rate (Synology)
Upload                 51.87 MB/s             84.31 MB/s
Upload (encrypted)     21.32 MB/s             24.65 MB/s
Download               40.89 MB/s             110.36 MB/s
Download (encrypted)   37.21 MB/s             49.58 MB/s
Client (internal)      111.55 MB/s            –

Small files

Note that for the measurement in this post 100,000 10KB files were used, whereas Synology transferred 1,000 5MB files. So the rates here cannot really be compared, as transferring a larger number of smaller files causes more overhead and therefore a lower transfer rate.

Still, it is remarkable that Synology only measured the performance of transferring small files to unencrypted folders. Maybe the data rates measured for encrypted folders didn’t look too good?

Operation              Data rate (measured)   Data rate (Synology)
Upload                 0.44 MB/s              43.82 MB/s
Upload (encrypted)     0.05 MB/s              –
Download               0.75 MB/s              58.15 MB/s
Download (encrypted)   0.49 MB/s              –
Client (internal)      4.52 MB/s              –

Measurement

All data rates have been measured from the same client PC using Microsoft Robocopy, connecting to the NAS via SMB protocol.

The Client and the NAS are connected via a Linksys SE2800 switch, using Gigabit Ethernet.

The following table lists the details of the NAS as well as the client PC used for the measurements in this post. In addition, the details of the client PC used by Synology are listed.

          Synology DS213+                             Client PC                        Client PC (Synology)
OS        DSM 4.1-2657                                Windows 8 x64                    Windows 7
CPU       Freescale MPC8544E 2x 1.067GHz              Intel T7250 2x 2.0GHz            Intel Core i5 750 2.67GHz
RAM       512MB DDR3                                  4GB DDR2 (2x 2GB at 667MHz)      4GB DDR3
SSD/HDD   Western Digital Green WD20NPVT x2, RAID 1   Samsung 840 Pro (256GB)          SVP200S3 (60GB) SSD x2, RAID 0

Conclusion / Differences

There obviously are differences between the values measured here and the ones published by Synology. What are the reasons for this?

For the small files, the main reason for the difference surely is the smaller size of the files copied, as mentioned above. Why did I choose a smaller size and a bigger number? It was not my main objective to compare the values to the ones measured by Synology; rather, I was interested in the rate at which small files are actually copied. For me, small files are less than 1MB. Have you ever tried to copy a directory with a large quantity of small files (several KB each), such as an Eclipse workspace or an SVN repository? It takes ages. I never thought, though, that they are copied at a data rate of less than 1 MB per second.

For the large file, I presume the difference between the values measured here and the ones published by Synology comes from the differences in the measurement setup: Synology used a faster CPU, faster RAM, a RAID 0 and a direct connection between the client PC and the NAS.

Moreover, I don’t know what software and protocol Synology used for the transfer. Maybe they used FTP, which might perform better than SMB. In addition, FTP might be even faster for small files, because they can be transferred over several concurrent connections rather than sequentially, as Robocopy does.

Anyway, Synology’s download rate of 110 MB/s somehow still is a miracle to me, as it is almost as fast as writing to my local SSD with Robocopy…

Finally, I must say that it is astonishing that uploading (that is, writing) a large file is faster than downloading it (for unencrypted folders). I repeated the measurement of all four large-file operations several times, but got nearly the same results every time (± 1 MB/s). This seems to have something to do with Robocopy or SMB, because downloading the exact same file via FTP (FileZilla) yields a data rate of about 65 MB/s.

Maybe I should write another post comparing FTP and SMB, when I have time 🙂

NAS: DS213+ & WD20NPVT – 2. Power Consumption

As announced in the first post about the Synology DS213+ and the Western Digital WD20NPVT, here is the measured power consumption of the NAS and the two hard disks in different operational modes.

Measured values

The following table compares the measured values to those given in the DS213+’s specification.

Operational Mode       Mean power consumption (measured)   Power consumption (specification)
Off                    ~1 W                                –
System hibernation     2.9 W                               2.64 W
on – HDD hibernation   9.6 W                               10.08 W
on – idle              12.56 W                             –
on – download          13.94 W                             22.20 W (“access”)
on – upload            15.47 W                             22.20 W (“access”)

The spec lists higher values for “access” and HDD hibernation, probably because 3.5″ HDDs were used. For system hibernation, however, the specified value is lower than the measured one. Maybe that’s due to the measurement accuracy.

Note: The power consumed by the power supply when the device is off is below the effective range of the measurement device. Therefore, the measured value is only approximate.

Measurement

For the measurement, an Energy Logger 4000 was used. It is not the most accurate device (5–3500 W: ±1% + 1 count; 2–5 W: ±5% + 1 count; < 2 W: ±15% + 1 count), especially in the lowest measurement range. Still, the values measured should provide an impression of the power consumption in the different operational modes of the NAS.

For off, system hibernation, HDD hibernation and idle the power consumption is an arithmetic mean over several hours.

The download and upload values were measured while reading/writing a 50GB file; they are mean values over the whole reading or writing process, respectively. The data rates measured during this process will be published in the next post.

Surprisingly, there is no (measurable) difference in hibernation whether Wake On LAN (WOL) is on or off. That’s why only one system hibernation value is listed.

Conclusion

15.5W at max is not so bad for a device running two hard drives. Its idle consumption of about 12.6W is still about twice the power consumption of other devices running 24/7 (like routers).

That’s where the system hibernation mode comes in handy. 3W in hibernation – that’s about as much as the power supply of an old desk lamp consumes when the light is off. If you use your NAS as private storage, or even as a web server, that’s a very good compromise. Of course, it would be even more economical to switch the NAS off completely when not in use, but that’s probably not what these devices are intended for.

Thanks to the WOL functionality, you can use a hibernating NAS almost as comfortably as if it ran all day: for usage at home, the NAS can be woken by sending a WOL packet to its MAC/IP address from any PC (e.g. WOL for Windows) or mobile device (e.g. Wake On Lan for Android). Actually, I don’t have to do this very often, as my Windows Explorer seems to wake the NAS as soon as it is started, because I have mounted some NAS folders as network drives.
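
If a Linux box happens to be around, a WOL packet can also be sent from the command line, for example with the wakeonlan tool available in most distributions’ repositories (the MAC address is a placeholder):

     wakeonlan 00:11:22:aa:bb:cc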

If you want to use the NAS as a web server, you can configure your router to send a WOL packet to the NAS when a request is received on a certain port, for example via HTTP or HTTPS. This wakes the device, which takes about 30 seconds; that is, the website is delivered a little later, once the NAS is awake. In private usage scenarios this should not be too much of a drawback, but it saves about 75% of the energy.

Doing so allows for a NAS, or even a self-hosted web server/“personal cloud”, that consumes almost no energy when it is not in use. A good enough solution for my “green conscience”, at last.

By the way, a device consuming 3W uses about 26kWh in a year (3W × 24h × 365 days ≈ 26.3kWh), which is about 7€ (as of 2012, the average price for electricity in Germany was 0.26€ per kWh). In comparison: a device consuming 12.6W uses about 110kWh a year, which is about 29€.