Amalgalite 0.2.0 Released

Now with more examples!

SQLite has the ability to do incremental IO on items that are BLOBs. I just finished adding support for this in Amalgalite. This means you can SELECT a BLOB and, from the result set, transfer the data directly to an IO stream.

A quick example. Assume we store files in filestore.db, which has the following schema:

CREATE TABLE files(
  id   INTEGER PRIMARY KEY AUTOINCREMENT,
  path VARCHAR UNIQUE,
  data BLOB
);
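
If you are starting from an empty database, a minimal sketch for creating that schema from Ruby could look like this. It only relies on Amalgalite::Database.new and execute, the same calls used in the example below; the filestore.db name is just the one from this example.

require 'amalgalite'

db = Amalgalite::Database.new( "filestore.db" )
# create the files table if it does not already exist
db.execute( <<-SQL )
  CREATE TABLE IF NOT EXISTS files(
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    path VARCHAR UNIQUE,
    data BLOB
  )
SQL
db.close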

You can then write all the files in this table out to a different directory tree with this piece of code. In this case the files are not loaded into memory; they are copied directly from the database pages holding the BLOBs to the output files.

require 'amalgalite'

new_root_dir = "/tmp"
db = Amalgalite::Database.new( "filestore.db" )
db.execute( "SELECT * FROM files" ) do |row|
  # unsafe path expansion
  dest_path = File.expand_path( File.join( new_root_dir, row['path'] ) )
  # stream the BLOB straight from the database to the destination file
  row['data'].write_to_file( dest_path )
end
db.close

To get this behavior you need to make sure SQLite's rowid column is part of the result set. If one of your columns is declared INTEGER PRIMARY KEY, that column is the rowid. Otherwise you must explicitly select ROWID or OID. The code above will still work if the rowid column is not in the result set, but Amalgalite will read each BLOB fully into memory before writing it to the file.
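
For instance, if you select specific columns rather than SELECT *, a small sketch like the following (reusing the filestore.db example from above) keeps incremental IO available by naming rowid explicitly:

require 'amalgalite'

db = Amalgalite::Database.new( "filestore.db" )
# rowid is named explicitly, so the BLOB data can still be streamed incrementally
db.execute( "SELECT rowid, path, data FROM files" ) do |row|
  row['data'].write_to_file( File.join( "/tmp", row['path'] ) )
end
db.close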

Take a look at the Amalgalite::Blob API and the example.
