NSLU performance

  • #8743
    gusdesigns
    Participant

    @zuovo2 wrote:

    I’m guessing that this performance issue could also be fixed on the server if Firefly maintained a separate “artists” table in songs.db.

    Ron,

    Is this statement correct? Is there no such table?

    If so, would it be possible to add this somewhere? It would probably move the cost of building the artist list to the scanning stage and make the artist list on the SoundBridge much faster (see the sketch below).

    Correct?
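
    For example (just a sketch, assuming songs has a plain 'artist' column), the scanner could rebuild something like:

        -- rebuilt once per scan, so browsing never repeats the DISTINCT
        DROP TABLE IF EXISTS artists;
        CREATE TABLE artists AS SELECT DISTINCT artist FROM songs;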

    Regards,

    Bram.

    #8744
    rpedde
    Participant

    @gusdesigns wrote:

    @zuovo2 wrote:

    I’m guessing that this performance issue could also be fixed on the server if Firefly maintained a separate “artists” table in songs.db.

    Ron,

    Is this statement correct? Is there no such table?

    There is indeed no such table. But I don't think it would help performance; in fact, I think it would hinder it. Here's why: you'd either have to join those tables in memory when spooling out the song info (which the slug couldn't do without being reaped by the OOM killer), or you'd have to merge them into a temp table (one that looks just like the songs table does right now) and then stream that out.

    Bottom line is that it would kill the slug. Nope, I don't think it gets any faster with sqlite on the slug.
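
    To make that concrete, the two options would look roughly like this (the artists table and artist_id column are hypothetical; nothing like them exists today):

        -- option 1: join at query time; sqlite assembles the joined
        -- result in memory while spooling out the song info
        SELECT s.*, a.name AS artist
          FROM songs s JOIN artists a ON a.id = s.artist_id;

        -- option 2: merge to a temp table that looks just like the
        -- songs table does right now, then stream that out
        CREATE TEMP TABLE merged AS
          SELECT s.*, a.name AS artist
            FROM songs s JOIN artists a ON a.id = s.artist_id;
        SELECT * FROM merged;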

    Performance can certainly be improved, though. But I think the way to do that is to move to a quicker backend. I think going back to a gdbm backend will help speed it up.

    Interestingly enough, that's another argument against normalizing the songs table: it makes it difficult to implement with a backend db that *isn't* relational. Like gdbm.

    Nope, all in all I think normalizing the tables is a bad idea, from both a performance standpoint and a maintenance standpoint. That's my take, anyway.

    — Ron

    #8745
    fizze
    Participant

    True, but a more sophisticated DB structure would almost certainly speed up the SQL-like backends.

    Normally one would design the DB to reflect the song structure: one table with the song-related info, plus link tables to albums and artists (see the sketch below).
    That way one could browse by albums without the artist join, and vice versa, or just browse by songs.
    Compilations would be a hybrid link table joining songs, artists and albums.
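
    Roughly (illustrative names only):

        CREATE TABLE artists (id INTEGER PRIMARY KEY, name  TEXT);
        CREATE TABLE albums  (id INTEGER PRIMARY KEY, title TEXT);
        CREATE TABLE songs   (id INTEGER PRIMARY KEY, title TEXT, path TEXT,
                              album_id  INTEGER REFERENCES albums(id),
                              artist_id INTEGER REFERENCES artists(id));
        -- compilations: a hybrid link table joining songs, artists and albums
        CREATE TABLE compilation_tracks (
            album_id  INTEGER REFERENCES albums(id),
            song_id   INTEGER REFERENCES songs(id),
            artist_id INTEGER REFERENCES artists(id));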

    Sure, memory load would eventually become an issue, but then again each table would hold less data, effectively reducing memory load.

    And a join with a single key over 2 tables really shouldn’t be that hard on the slug.

    I agree with the GDBM thingy, though.
    Hm, it would be nice to poke around with SQLite a bit on the slug and compare one approach vs. the other…

    0.02€

    #8746
    rpedde
    Participant

    @fizze wrote:

    True, but a more sophisticated DB structure would almost certainly speed up the SQL-like backends.

    I'd encourage you to test that supposition with a representative data set on a 16MB embedded machine. I have, and got counter-intuitive results; that's why there's a switch for in-order playlist results. As soon as you try to join tables, sqlite does it either in memory or in a temp table, and it chokes.

    Normally one would design the DB to reflect the song structure: one table with the song-related info, plus link tables to albums and artists.

    That way one could browse by albums without the artist join, and vice versa, or just browse by songs.
    Compilations would be a hybrid link table joining songs, artists and albums.

    And that's how I would do it if it were an accounts-receivable program that was going to run on an MSSQL cluster or something. 🙂

    Sure, memory load would eventually become an issue, but then again each table would hold less data, effectively reducing memory load.

    Except with song-per-row, my memory overhead is one row: I can do a rowscan and return the results without carrying a result set or temp table in memory. No overhead. Scales to any n.
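
    EXPLAIN QUERY PLAN shows the difference (output wording varies by sqlite version, and the artists table here is hypothetical):

        EXPLAIN QUERY PLAN SELECT * FROM songs;
        -- SCAN songs                     <- plain rowscan, one row at a time

        EXPLAIN QUERY PLAN
          SELECT a.name, s.title
            FROM songs s JOIN artists a ON a.id = s.artist_id
           ORDER BY a.name;
        -- SCAN s
        -- SEARCH a USING INTEGER PRIMARY KEY (rowid=?)
        -- USE TEMP B-TREE FOR ORDER BY   <- the part that hurts on the slug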

    And a join with a single key over 2 tables really shouldn’t be that hard on the slug.

    I agree with the GDBM thingy, though.
    Hm, it would be nice to poke around with SQLite a bit on the slug and compare one approach vs. the other…

    Look at the code that returns playlist results in order. It's a single query that does a join between playlists and songs, which is probably representative of a join from songs to artists or vice versa. Time the queries and see where the time goes.

    But do it on a memory-constrained device, and you'll see the difference is best measured in orders of magnitude.
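
    In the sqlite3 shell, timing is just (the playlistitems/songid names here are from memory and may not match the real schema exactly):

        .timer on                           -- shell command; prints query time
        SELECT COUNT(*) FROM songs;         -- baseline: a plain rowscan
        SELECT COUNT(*)
          FROM playlistitems p
          JOIN songs s ON s.id = p.songid;  -- the join to compare against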

    I’m certainly not a db expert, so if you find an elegant, fast solution, I’d be happy to take it!

    — Ron

    #8747
    fizze
    Participant

    I am sure you have your reasons and did a fair share of research and testing before you implemented the DB stuff the way you did. 😉
    All I'm saying is that maybe there are some tricks to keep sqlite from attempting a full table join in memory: working with hashes and whatnot.

    I'll see what I can come up with.
    Oh, and I'm talking about the NSLU2 here, so that's pretty memory constrained.
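
    One knob that might help, for instance (untested on the slug, and I'm not sure the sqlite version on it honors these):

        PRAGMA temp_store = FILE;  -- spill temp tables/sort b-trees to disk
        PRAGMA cache_size = 500;   -- shrink the page cache (counted in pages)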

    #8748
    rpedde
    Participant

    @fizze wrote:

    I am sure you have your reasons and did a fair share of research and testing before you implemented the DB stuff the way you did. 😉
    All I'm saying is that maybe there are some tricks to keep sqlite from attempting a full table join in memory: working with hashes and whatnot.

    See, that's where my lack of real SQL knowledge gets me. I'm sure there are techniques that could speed stuff up considerably; I just don't know them, and do things the ignorant way. So seriously, if you see something that would help in that direction, I'd be happy to learn it.

    — Ron

    #8749
    Anonymous
    Inactive

    I have just applied a nasty hack here which seems to ease the problem.

    I created a new ‘album’ table, using the same schema as ‘songs’. I then did ‘insert into album select * from songs where track = 1’. I now had a table with just one entry per album.

    Next I took a hex editor to the mt-daapd binary. I searched for the string 'distinct album from songs' and replaced it with 'distinct album from album' (same length, so the binary stays intact). There were two occurrences to change.

    The downside is that the album table gets out of sync with 'songs' after a rescan, which still needs to be handled. The upside is that 'Browse Albums' is much faster.

    This could presumably be tidied up a lot. I just wanted quick results.
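
    For instance, the sync problem could probably be handled with a trigger instead of the one-off insert (a sketch, assuming songs has an id primary key; updates and deletes would need similar triggers):

        -- keep the shadow table current as the scanner adds rows
        CREATE TRIGGER album_sync AFTER INSERT ON songs
        WHEN new.track = 1
        BEGIN
            INSERT INTO album SELECT * FROM songs WHERE id = new.id;
        END;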

    #8750
    Anonymous
    Inactive

    14,430 songs in the DB on an Unslung NSLU2 (6.10 beta) with Firefly 1696.

    Artists (5,250) returned in under 10 seconds
    Albums (890) the same
    Titles (14,430) in under 30 seconds

    I have selected sqlite3 and 'compress when sending to player'.

    This is very good, but I really want it to be able to serve all 45,000 songs, so I am playing around with using just presets and not using browse.

    Also, next I'm trying the indexing of Artist etc.

    #8751
    Anonymous
    Inactive

    mmmm

    33,433 songs seems to break it, with a "Failed to load" on all of Artists, Albums, Genres and Titles.

    Playlists still work OK

    I have all 44k songs in Firefly running on my Mac, and the Pinnacle SoundBridges work OK there, so I am guessing it is an NSLU2 problem.

    I don't mind if it takes thirty seconds or so to get Artist/Genre or Album back; realistically I am using playlists all the time.

    The playlists are built from the Group tag.

    Anyone got any ideas on how to clear the "Failed to load" error?

    Thanks

    #8752
    mas
    Participant

    Well, with 33k songs and an NSLU… 8-(

    The problem is that the SoundBridge has a timeout, so if the NSLU delivers the list too slowly, that's it. There are some tricks already posted in the forums, like using sqlite3 instead of sqlite and manually adding indexes to the sqlite3 database; something like the lines below. Search the forums, but I doubt it will suffice for that number of songs.
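
    The index trick is usually along these lines (assuming the usual artist/album/genre columns in the songs table):

        CREATE INDEX idx_artist ON songs(artist);
        CREATE INDEX idx_album  ON songs(album);
        CREATE INDEX idx_genre  ON songs(genre);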

    The only solution is splitting the database or using something with more power than an NSLU.

    And do the 44k songs really load OK on the SoundBridge when a Mac or PC serves them? I mean, maybe the browse still works, but I doubt you can do a "play all". At least from what the Roku guys say and what other people have observed, that should exhaust the memory of the SoundBridge, so 44k is kind of a limit anyway…
