Reply To: performance differences between itunes and nslu2

#9174

rpedde
Participant

@ccrdude wrote:

Hmmm… binary XML format? It’s a binary tree structure, but imho it has nothing to do with XML at all, or you could call every binary format out there XML 😉

Yeah, the wire protocol used to be essentially the same as the iTunes Music Library.xml, just in binary form. That’s why I think of it as binary XML: it was the iTunes XML file converted to a length-prefixed encoding.
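For anyone following along: a DMAP atom on the wire is a four-byte ASCII content code, a four-byte big-endian length, then the payload, which may itself be nested atoms. A minimal sketch in Python (the specific tags used here are just for illustration):

```python
import struct

def dmap_tag(code: bytes, payload: bytes) -> bytes:
    """Encode one DMAP atom: 4-byte ASCII content code, 4-byte
    big-endian payload length, then the payload itself (which may
    contain further nested atoms)."""
    assert len(code) == 4
    return code + struct.pack(">I", len(payload)) + payload

# An 'minm' (item name) string atom nested inside an 'mlit' record
name = dmap_tag(b"minm", "Example Song".encode("utf-8"))
record = dmap_tag(b"mlit", name)
```

Every container's length covers all of its children, which is exactly why the server needs to know sizes before it can emit the outermost header.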

I also disagree with those two passes – why would you need to separate size calculation from data building? Two passes even become dangerous once you get to the point of doing MySQL, since data could change between the first and second pass.

DB updates are semaphored during enums. That causes some contention, but the DB updates can stall without too much problem.
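The locking described above amounts to something like this (the real server is C; Python and all names here are just a sketch of the idea):

```python
import threading

db_lock = threading.Lock()  # hypothetical: guards the song database

def enumerate_songs(db):
    """Hold the lock for the whole enumeration so a concurrent update
    can't change rows between the size pass and the data pass."""
    with db_lock:
        return list(db)

def update_song(db, song):
    """Updaters simply block until any in-flight enumeration finishes
    (the 'stall without too much problem' behavior)."""
    with db_lock:
        db.append(song)
```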

In my protocol tests, I built data from the inside out – inner packets with data first, then building the rest around them (or, to speed things up, noting the positions of the few count/size indices that needed to be updated at the end).
If you actually do two passes, well, that’s giving me hope that iTunes access to Firefly may actually get faster with a bit of optimization 😀
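The back-patching trick described in the quote above might look like this sketch (the `mlcl`/`mlit` layout is illustrative, not the server's actual code):

```python
import struct

def build_with_backpatch(items):
    """Emit a container header with a placeholder length, remember
    where that length field lives, serialize the children, then patch
    the real size in at the end -- one pass, no separate size pass."""
    buf = bytearray()
    buf += b"mlcl"
    size_off = len(buf)           # offset of the length field to patch
    buf += b"\x00\x00\x00\x00"    # placeholder
    start = len(buf)
    for item in items:
        buf += b"mlit" + struct.pack(">I", len(item)) + item
    struct.pack_into(">I", buf, size_off, len(buf) - start)
    return bytes(buf)
```

The catch, as discussed below, is that this only works if you can hold (or seek back over) the whole buffer.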

I used to do that, building the whole tree in memory, then serializing it. It only took one pass, but I ran into memory limits on embedded devices like the slug. So the tradeoff was speed versus db size on the slug. I went with db size, which causes performance problems, but I can do databases bigger than 12K songs.
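The two-pass alternative trades that memory for an extra database walk: one pass just totals up sizes, the second streams bytes out. A rough sketch, assuming the same simple tag layout as above (illustrative names, not Firefly's code):

```python
import io
import struct

def record_size(fields):
    """Pass 1: compute a record's serialized size without building it.
    Each field costs an 8-byte tag header plus its payload."""
    return sum(8 + len(v) for v in fields.values())

def emit_record(fields, out):
    """Pass 2: knowing the size up front, stream the record out header
    first, never holding the serialized tree in memory."""
    out.write(b"mlit" + struct.pack(">I", record_size(fields)))
    for code, value in fields.items():
        out.write(code + struct.pack(">I", len(value)) + value)
```

Peak memory stays at one record instead of the whole response, which is the slug-friendly side of the tradeoff.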

I think a gdbm database will probably help performance quite a bit.

And why the third one? 😯 If you have a stream of data prepared to send to the client, even in ugly DAAP format, surely you can pass that stream through a compression method without walking the database again? What does persistence of the connection have to do with needing another pass over the db?

Because the first chunk of data I have to put on the wire is the size of the response. So I need to know the size of the whole thing compressed before I start emitting anything. iTunes doesn’t understand chunked encoding either, or I would use that. I could calculate it on the second pass if I reset the dictionary after every mlit entry, but that would kill the compression effectiveness.
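The constraint is roughly this: the size goes on the wire first, so nothing can be emitted until the whole body has been deflated. A sketch with zlib (names illustrative):

```python
import zlib

def compressed_response(chunks):
    """Deflate the entire body before sending anything: the size header
    must carry the *compressed* length, so the first byte on the wire
    depends on the last byte of input. One shared dictionary across all
    records keeps the compression ratio good."""
    co = zlib.compressobj()
    body = b"".join(co.compress(c) for c in chunks) + co.flush()
    header = ("Content-Length: %d\r\n" % len(body)).encode("ascii")
    return header, body
```

Resetting the compressor after every record would make each record's compressed size predictable up front, but throws away the cross-record dictionary, which is the ratio-killer mentioned above.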

Are you doing *everything* on the fly with DAAP to preserve NAS memory? I just checked: the full /databases/1/items/ needs 6 MB uncompressed for 20,000 songs. To be honest, I would love to “waste” that memory (and it’s only used for a very short moment anyway) even on my low-resource Kurobox for a triple speedup of DAAP 😉

Double speedup, actually. And it also caused heap-fragmentation problems that led to alloc failures on small machines like the slug.

Try it with 0.2.4 — that uses the behavior you are looking for. What’s the tradeoff in speed? I’d be interested to hear.

And in the /worst/ case, if you don’t have that much memory, I would think it would still be a lot faster to prepare the data to send by storing it in a file (instead of a buffer in memory), rather than passing over the database three times 😯

Except that the queries change. Sometimes it only asks for a subset of the metadata. The caching layer would be both complex and (imho) error-prone. I’d rather work toward a more generalized speedup than implement a caching strategy at this point.

Oh… wait… gotcha. Not caching the responses, but using a memory-mapped file or something for disk-backed memory. That’s a really good thought. Wonder if that’s portable to Windows?
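On portability: Python’s mmap module happens to wrap both POSIX mmap() on Unix and CreateFileMapping on Windows, so the idea does port. A sketch of the approach (a C server would call those OS APIs directly; names here are illustrative):

```python
import mmap
import tempfile

def disk_backed_buffer(size):
    """Assemble the response in a memory-mapped temp file instead of
    the heap: the OS pages it in and out as needed, sidestepping both
    the memory limit and the heap-fragmentation problem."""
    f = tempfile.TemporaryFile()
    f.truncate(size)                     # reserve the file's extent
    return f, mmap.mmap(f.fileno(), size)
```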

Oh, but one more thing on those terms… while Ajax may include XML, XML isn’t automatically Ajax 😛

Right. But I guess I mean more suitable for Ajax than DAAP is. I played a bit with a JSON serializer for RSP as well, and it looks reasonably clean. I might play with that a bit more in the future.