
performance differences between itunes and nslu2

Viewing 15 posts - 1 through 15 (of 19 total)
  • Author
    Posts
  • #1100
    hendl
    Guest

    I use the next-to-latest nightly (svn-1489) on an unslung NSLU2 with two Soundbridges and it works perfectly. But when I try to listen to music from Firefly via iTunes 7 on Windows, it takes several minutes (!) to get the whole song list into the iTunes client. During this time a lot of data is transmitted, as the LEDs on the switch show. After getting the song list, the network transfer rate is much lower than when playing the same song via the Soundbridge (???). Is it possible to speed up iTunes?

    I’ve not de-underclocked my NSLU2 yet – that’s a plan for the future. Serving two Soundbridges is no problem, nor are two Soundbridges and one iTunes.

    Thanks, Stephan

    #9170
    rpedde
    Participant

    @hendl wrote:

    I use the next-to-latest nightly (svn-1489) on an unslung NSLU2 with two Soundbridges and it works perfectly. But when I try to listen to music from Firefly via iTunes 7 on Windows, it takes several minutes (!) to get the whole song list into the iTunes client. During this time a lot of data is transmitted, as the LEDs on the switch show. After getting the song list, the network transfer rate is much lower than when playing the same song via the Soundbridge (???). Is it possible to speed up iTunes?

    I’ve not de-underclocked my NSLU2 yet – it’s a plan for the future. Serving two Soundbridges is no problem, nor are two Soundbridges and one iTunes.

    Thanks, Stephan

    For big libraries, listening via iTunes is slow. It’s a protocol not suited for embedded machines. It assumes the machine has plenty of RAM and plenty of CPU. On the slug, that just isn’t so.

    One way to make it faster would be to replace the backend database with GDBM rather than sqlite. That’s going to happen soon for the embedded targets.

    Another way to make it faster is to set the “correct order” flag to “No” in the web config. That’s an expensive query on the slug.

    Other than that, I’m going to continue to try and speed up the slug, but I think it’s always going to be pretty ugly serving daap on an embedded device.

    — Ron

    #9171
    mas
    Participant

    Hmmm, no problems here with RSP. Well, I have only 1/4 of his number of songs, but even 4x the time wouldn’t be a big problem.

    Is RSP much faster than DAAP, or does it simply not scale linearly?

    #9172
    rpedde
    Participant

    @mas wrote:

    Hmmm, no problems here with RSP. Well, I have only 1/4 of his number of songs, but even 4x the time wouldn’t be a big problem.

    Is RSP much faster than DAAP, or does it simply not scale linearly?

    It’s much faster.

    Two things wrong with daap:

    1. The data blocks it sends back aren’t optimized for… well… anything. It’s a binary XML format, with start tags and a block length but no end tags.

    So to send a string, like song artist, the atom looks like:

    <4-byte tag><4-byte length><data>

    Or:

    asar 10 “The Smiths”

    That’s fine, but you can’t start streaming the data until you know the size. For a song title, that’s fine, but the song artist is just one data block contained in another block: an mlit (dmap.listingitem) block. So you have to have the size of *all* the metadata for the whole song so you can calculate the size of the listingitem so you can send that atom. Except the listingitem is one of several listingitems in an mlcl (dmap.listing) block, which is one of several blocks in an adbs (daap.databasesongs) block.

    So you have to calculate the length of all the metadata of all the items in the database before you can send a single byte of the response.

    So that’s one pass through the database to calculate the size of the resulting block of data, then another pass to actually build the block and send it.
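    The atom layout described above can be sketched like this (a minimal illustration, not Firefly’s actual code; `daap_atom` is a made-up helper, but `asar`, `minm`, and `mlit` are the real DAAP tags mentioned in this thread, and the 4-byte tag plus 4-byte big-endian length is the documented wire shape):

```python
import struct

def daap_atom(tag: bytes, payload: bytes) -> bytes:
    # A DAAP atom: 4-byte tag, 4-byte big-endian payload length, then the payload.
    # There is no end tag, so the length must be known before the first byte goes out.
    return tag + struct.pack(">I", len(payload)) + payload

artist = daap_atom(b"asar", b"The Smiths")        # daap.songartist
title  = daap_atom(b"minm", b"How Soon Is Now?")  # dmap.itemname

# The container's length depends on every child being fully built first:
mlit = daap_atom(b"mlit", artist + title)         # dmap.listingitem
# len(mlit) == 50: 8-byte header + 18-byte asar atom + 24-byte minm atom
```

    This is why the sizes have to be known bottom-up: the outermost adbs block can’t be emitted until every nested atom inside it has been measured.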

    Then you can’t really compress it either, because you have to know the exact data you’re going to compress up front – the only type of connection iTunes supports is a persistent one, so the compressed size has to go out before the data. So to send the data compressed, you need three full passes through the database.

    Blech.

    RSP, on the other hand, just uses straight xml and non-persistent connections, so it can return the whole library and compress on the fly on a single database pass. Much faster.
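    That single-pass behaviour can be illustrated with a streaming compressor (a sketch of the idea, not Firefly’s code; the inline XML fragment stands in for the real RSP serializer and `rows` for the database walk):

```python
import zlib

def stream_rsp(rows):
    # One pass: serialize each row to XML and compress it as we go.
    # With a non-persistent connection the server can simply close the
    # socket at the end, so no Content-Length (and no extra pass) is needed.
    comp = zlib.compressobj()
    yield comp.compress(b"<response><items>")
    for artist in rows:
        fragment = "<item><artist>%s</artist></item>" % artist  # stand-in serializer
        yield comp.compress(fragment.encode())
    yield comp.compress(b"</items></response>")
    yield comp.flush()

body = b"".join(stream_rsp(["The Smiths", "New Order"]))
assert zlib.decompress(body).startswith(b"<response>")
```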

    Also, xml compresses better than daap, so the wire sizes are smaller as well.

    So the upshot is that rsp is faster, more ajaxy, and generally more suited to embedded devices.

    #9173
    CCRDude
    Participant

    Hmmm… binary xml format? It’s a binary tree structure, but imho it has nothing to do with XML at all, or you could call every binary format out there xml 😉

    I also disagree with those two passes – why would you need to separate size calculation and data building? Two passes even become dangerous once you get to the point of doing MySQL, since the data could change between the first and second pass. In my protocol tests, I built the data from the inside out – inner packets with data first, then building the rest around them (or, to speed things up, noting the positions of the few count/size indices that needed to be updated at the end).
    If you actually do two passes, well, that gives me hope that iTunes access to Firefly may actually get faster with a bit of optimization 😀
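    The “note the position, patch the size at the end” idea reads roughly like this (a sketch under the same DAAP header layout as above – 4-byte tag, 4-byte big-endian length; `open_atom`/`close_atom` are made-up names):

```python
import struct

buf = bytearray()

def open_atom(tag: bytes) -> int:
    # Write the tag and a placeholder length; return where the length lives.
    buf.extend(tag)
    pos = len(buf)
    buf.extend(b"\x00\x00\x00\x00")
    return pos

def close_atom(pos: int) -> None:
    # Patch the placeholder with the number of bytes written since then.
    struct.pack_into(">I", buf, pos, len(buf) - pos - 4)

mlit = open_atom(b"mlit")
asar = open_atom(b"asar")
buf.extend(b"The Smiths")
close_atom(asar)
close_atom(mlit)
# Single pass over the data, but the whole buffer must stay in memory
# (or in a seekable file) until the outermost size is patched.
```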

    And why the third one? 😯 If you have a stream of data prepared to send to the client, even in ugly daap format, you can pass that stream through a compression method without passing through the database again for sure? What has persistence of the connection to do with the need to pass the db again?

    Are you doing *everything* on the fly with daap to preserve NAS memory? I just checked, the full /databases/1/items/ needs 6 MB uncompressed for 20,000 songs. To be honest, I would love to “waste” that memory (and it’s only used for a very short moment anyway) even on my low-resource Kurobox for a triple speedup of daap 😉
    And in the /worst/ case, if you don’t have that much memory, I would think it would still be a lot faster to prepare the data to send by storing it in a file (instead of a buffer in memory) than to pass through the database thrice 😯

    That’s no criticism of rsp though, I agree that your rsp is much better suited for streaming those control structures 🙂

    Oh, but one more thing on those terms… while ajax may include xml, xml isn’t automatically ajax 😛

    #9174
    rpedde
    Participant

    @CCRDude wrote:

    Hmmm… binary xml format? It’s a binary tree structure, but imho it has nothing to do with XML at all, or you could call every binary format out there xml 😉

    Yeah, the wire protocol used to be essentially the same as the iTunes Music Library.xml, just in binary. So that’s why I think of it as binary xml — it was the iTunes xml file converted to a length-prefixed form.

    I also disagree with those two passes – why would you need to separate size calculation and data building? Two passes even become dangerous once you get to the point of doing MySQL, since the data could change between the first and second pass.

    db updates are semaphored during enums. Which causes some contention, but the db updates can stall without too much problem.

    In my protocol tests, I built the data from the inside out – inner packets with data first, then building the rest around them (or, to speed things up, noting the positions of the few count/size indices that needed to be updated at the end).
    If you actually do two passes, well, that gives me hope that iTunes access to Firefly may actually get faster with a bit of optimization 😀

    I used to do that, building the whole tree in memory, then serializing it. It only took one pass, but I ran into memory limits on embedded devices like the slug. So the tradeoff was speed versus db size on the slug. I went with db size, which causes performance problems, but I can do databases bigger than 12K songs.

    I think maybe with a gdbm database, it will help performance quite a bit.

    And why the third one? 😯 If you have a stream of data prepared to send to the client, even in ugly daap format, you can pass that stream through a compression method without passing through the database again for sure? What has persistence of the connection to do with the need to pass the db again?

    Because the first chunk of data I have to put on the wire is the size of the response. So I need to know the size of the whole thing compressed before I start emitting anything. iTunes doesn’t understand chunked encoding either, or I would use that. I could calculate it on the second pass if I reset the dictionary after every mlit entry, but that would kill the compression effectiveness.
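    The constraint is HTTP’s `Content-Length` header on a persistent (keep-alive) connection: every compressed byte has to exist before the header can be written. A minimal sketch (the payload string is a stand-in for a real DAAP response):

```python
import gzip

payload = b"<adbs>...full DAAP response...</adbs>"  # stand-in for the real atoms

# Persistent connection, no chunked encoding: the exact compressed size
# must go out first, so compression cannot be interleaved with sending.
compressed = gzip.compress(payload)
headers = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Encoding: gzip\r\n"
    "Content-Length: %d\r\n\r\n" % len(compressed)
)
```

    With chunked transfer encoding the size-up-front requirement would disappear, which is exactly why its absence in iTunes hurts here.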

    Are you doing *everything* on the fly with daap to preserve NAS memory? I just checked, the full /databases/1/items/ needs 6 MB uncompressed for 20,000 songs. To be honest, I would love to “waste” that memory (and it’s only used for a very short moment anyway) even on my low-resource Kurobox for a triple speedup of daap 😉

    Double speed-up, actually. And it also caused heap fragmentation problems that led to alloc failures on small machines like the slug.

    Try it with 0.2.4 — that uses the behavior you are looking for. What’s the tradeoff in speed? I’d be interested to hear.

    And in the /worst/ case, if you don’t have that much memory, I would think it would still be a lot faster to prepare the data to send by storing it in a file (instead of a buffer in memory) than to pass through the database thrice 😯

    Except that the queries change. Sometimes it only asks for a subset of the metadata. The caching layer would be both complex and (imho) error-prone. I’d rather work toward a more generalized speedup than implement a caching strategy at this point.

    Oh… wait… gotcha. Not caching the responses, but using a memory-mapped file or something for disk-backed memory. That’s a really good thought. Wonder if that’s portable to Windows?

    Oh, but one more thing on those terms… while ajax may include xml, xml isn’t automatically ajax 😛

    Right. But I guess I mean it’s more suitable to ajax than daap is. I played a bit with a json serializer for rsp also, and it looks reasonably clean. I might play with that a bit more in the future.

    #9175
    CCRDude
    Participant

    Ok, I compared 🙂 I measured each one ten times to get an average:

    With svn-1498 & sqlite3, accessing 20k songs took 1:42.
    With 0.2.4 & gdbm, accessing 20k songs took 0:28.

    Would have tried svn-1498 with gdbm, but sadly the new configure requires me to use either --enable-sqlite or --enable-sqlite3 and doesn’t accept just --enable-gdbm, and the config file doesn’t allow it either.

    But this difference is even more than 1:3!

    And yes, you’re right about that “disk-backed memory”.

    In Pascal, which I mostly use on Win & Linux (well, on the Mac as well *g*), I would just implement the response building with a TStream, and initialize that based on free memory as either a TMemoryStream or a TFileStream. Firefly is not OO though – C instead of C++, if I remember browsing some code files correctly? I’m not that good at C, but I think even using just a file while building any request (including seeking and all – maybe -3 seconds) would still be faster than traversing the DB multiple times (-70 seconds measured, maybe -60 if you include the difference between gdbm and sqlite3).
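    In Python terms the TMemoryStream/TFileStream fallback is roughly `tempfile.SpooledTemporaryFile`, which buffers in memory up to a threshold and transparently spills to disk beyond it (a sketch of the idea, not Firefly’s code; the loop stands in for the real response serializer):

```python
import tempfile

# Build the response in memory up to 1 MB, transparently on disk after that.
with tempfile.SpooledTemporaryFile(max_size=1024 * 1024) as buf:
    for i in range(3):
        buf.write(b"<item>%d</item>" % i)  # stand-in for the real serializer
    size = buf.tell()                      # now Content-Length is known
    buf.seek(0)
    body = buf.read()

assert size == len(body)
```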

    #9176
    mas
    Participant

    Is that on an NSLU system????

    Because on my NSLU I noticed a speedup when I stepped up from 0.2.4 to the nightlies.

    Of course I have only 4k songs but still….

    #9177
    CCRDude
    Participant

    No, a Kurobox (if I remember stats from other topics correctly, it’s just 20% faster or so, so not really that much of a difference). The number of smart playlists may also matter; I’ve got ten of them. But then, I had them in 0.2.4 as well, and I think my old mt-daapd.playlist file still existed when testing.

    Are you using nightlies with sqlite 2 or 3 (or an older nightly that maybe still had gdbm, not sure when that was given up)?

    Also, this is about iTunes (through the daap protocol) only… not about the Roku Soundbridge, which, thanks to RSP replacing DAAP, has actually become faster with the nightlies 🙂

    #9178
    mas
    Participant

    Ahhh ok. I don’t use DAAP. And yes, I use the latest nightly with sqlite3.

    #9179
    chaintong
    Participant

    Ron – glad you’re looking into the speed thing…

    I’m not sure how this helps but here are my observations:

    I have just come to the conclusion (by trial and error) that the limit of my setup is somewhere around 20k songs with a wireless Roku. Using iTunes on a wireless laptop, it does finally get the entire song list, but it takes over a minute. I think the Roku just times out, because I always get a “failed to load browse data” at around 20k. The iTunes setup will go all the way to 35k (the total number of songs in my library) – if you wait long enough.

    Interestingly, if I reboot everything (slug, router, Roku) and let the whole thing settle down (wait 10 minutes), the Roku will load the song list when I select browse songs. If I then shut down and start the Roku up again, it won’t browse all the songs again. Also, the reboot of everything never enables me to browse albums or artists – that’s a bit weird?

    As far as the Roku is concerned, I think the problem got worse when I started a concerted effort to update all my tags using MediaMonkey (including putting cover art into the tags to stop all those messy files cluttering up my directories) – would adding tag data make things worse?

    I have added sqlite indexes on titles, albums and artists, and this made no change to the ability to browse 25k songs.

    I can’t find the “correct order” flag to speed things up.

    Will sqlite3 help here?

    Is there anything else worth trying? Otherwise I’ll just cut my songs down to around 15k and wait until the next fix.

    Later

    Tom

    #9180
    CCRDude
    Participant

    Covers shouldn’t matter, because browsing will take only the metadata necessary, from the central db, and not from each file.

    20k songs are no problem at all for my Roku. Are you using a nightly (with rsp) or 0.2.4?

    The “correct order” flag is in the web interface if you switch the config page to advanced view imho 🙂
    Config category “Databases”, last (sixth) line, “Ordered Playlists”.

    #9181
    rpedde
    Participant

    @CCRDude wrote:

    With svn-1498 & sqlite3, accessing 20k songs took 1:42.
    With 0.2.4 & gdbm, accessing 20k songs took 0:28.

    Try it without playlists on sqlite3.

    I don’t think that’s all due to traversal. I get a full gdbm traversal in sub-second times. I think the big difference there is gdbm versus sqlite.

    Would have tried svn-1498 with gdbm, but sadly the new configure requires me to use either --enable-sqlite or --enable-sqlite3 and doesn’t accept just --enable-gdbm, and the config file doesn’t allow it either.

    The gdbm isn’t quite there yet.

    But this difference is even more than 1:3!

    Again, this isn’t all due to multi-passes; I think it might also be the db. I want to get a gdbm backend in there — that will be apples to apples.

    — Ron

    #9182
    chaintong
    Participant

    CCRDude: are you using the Roku wirelessly?

    I am running Version svn-1498 – I understand that to be the latest nightly. I’m not sure about rsp – I assume that just comes with the latest nightly.

    I found the Ordered Playlists option, but because I don’t actually use playlists, it made no difference to my ability to browse albums, artists, or songs.

    Next idea (from Ron in another thread) is to move the tmp directory from flash to the drive.

    #9183
    CCRDude
    Participant

    @chaintong: yes I am, and not with the best signal strength either. The only times I ever have browsing problems are during those eternal waits when a Windows machine tries to access the server – during those 2 minutes, the Roku is unable to browse and fails.

    @ron: well, I’ll try disabling the smart playlists when the machine isn’t in use for a while, once I find the sqlite statements to temporarily rename the playlists table to something else, so I don’t have to enter them all again afterwards 😉
    If that is the case, it seems iTunes is coded even worse than I could imagine 😀 The protocol at least allows reading playlists separately, but you may be right – they’re there immediately in the client once opened, so iTunes might be reading their contents directly on connection as well.

  • The forum ‘General Discussion’ is closed to new topics and replies.