Next stable DBD::SQLite will be released at the beginning of November

DBD::SQLite 1.71_07 (with SQLite 3.39.4) is a release candidate for the next stable DBD::SQLite. This release mainly addresses a security hole found in SQLite, plus a few performance issues for perl built with -DDEBUGGING. See Changes for other fixes and changes.

This time I'll wait for about a week and release 1.72 at the beginning of November if there are no blockers or requests to wait longer. Thank you for your patience.

2 Comments

Kenichi,

Hi.

I am using your DBD::SQLite Perl driver.

I have been using disk-based database files, but now I want to implement 'in-memory' DBs for a special purpose.

I have been reviewing DBI->connect with "dbname=:memory:" and can get one open.

Furthermore, I can use "sqlite_backup_from_file( $filename )", and it appears to populate that :memory: DB.
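For reference, this is roughly the pattern that works for me so far (the file name is just a placeholder):

    use strict;
    use warnings;
    use DBI;

    # open an in-memory SQLite database
    my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
        { RaiseError => 1, AutoCommit => 1 });

    # populate the :memory: DB from an on-disk DB file
    $dbh->sqlite_backup_from_file("telemetry.db");

    # from here on, queries run against the in-memory copy
    my ($n) = $dbh->selectrow_array("SELECT count(*) FROM sqlite_master");
    print "objects in schema: $n\n";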

What I would really like to do is have something that would 'back up' from the contents of a variable, e.g. "sqlite_backup_from_var( $variable )", where $variable holds a copy of an on-disk DB.

I have been trying to follow some examples where a filehandle is opened on a scalar reference (maybe not describing that well), so that "sqlite_backup_from_file( $filename )" could be given a variable with a DB in it (again, image-copied (slurped) in from an on-disk DB file), but 'prepare' doesn't see what is in that handle as the DB.

I simply cannot find a way around this.

Yes, you might think: why don't I just do "sqlite_backup_from_file( $filename )" as intended? It is not quite that simple. On the on-disk side I am trying to squeeze a lot out of the available disk capacity, so I compress (linux gzip/gunzip) files that are 'old enough'. I still want them to be available, though, and that requires the extra work of un-zipping (gunzip) them, with the associated heavy disk I/O and latency. So I am trying to work up a solution where I gunzip to memory and hand that to a :memory: DB.
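If feeding the backup from a variable really isn't possible, the fallback I've sketched is to gunzip the archive into a temporary file on a tmpfs mount (so the inflated copy never touches the real disks) and hand that to sqlite_backup_from_file. A rough sketch, with placeholder file names and assuming a Linux /dev/shm tmpfs mount:

    use strict;
    use warnings;
    use DBI;
    use File::Temp qw(tempfile);
    use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

    # placeholder name for one of the gzipped historical DB files
    my $gz = "telemetry-2021-06.db.gz";

    # inflate the archive into a temp file on tmpfs (removed automatically on exit);
    # DIR => "/dev/shm" assumes a Linux tmpfs mount
    my ($tmp_fh, $tmp) = tempfile(DIR => "/dev/shm", UNLINK => 1);
    gunzip($gz => $tmp)
        or die "gunzip failed: $GunzipError";

    # load the inflated copy into a :memory: DB; the temp file can go away afterwards
    my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
        { RaiseError => 1, AutoCommit => 1 });
    $dbh->sqlite_backup_from_file($tmp);

    # ... run the historical scans against $dbh here ...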

This is only for reading: all of the DBs are historical time-series containing telemetry data whose time has passed. Right now, when I am doing scans, all those old files have to be kept inflated (uncompressed), and that is why I am trying to work up a disk-space-saving solution.

It would be equally 'ok' if "sqlite_backup_from_file( $filename )" worked for a $filename that is an archive and expanded it on the way in, e.g. "sqlite_backup_from_file_archive( $filename )" where $filename is the archive...

SQLite has to read the file anyway, so I'm thinking I can massively extend the capacity of my physically partitioned time-series DB environment by keeping a portion of the DBs compressed...

Any idea on how I could hack this up now? Or maybe add "sqlite_backup_from_file_archive( $archive_filename )"?


Thank you,

Doug

Hi Doug, a blog comment column is probably not the best venue to raise a request of this nature; a mailing list or the issue tracker seems more suitable. And hey, look: https://github.com/DBD-SQLite/DBD-SQLite/issues/101. Sound familiar?


About Kenichi Ishigaki

A Japanese perl programmer/translator, aka charsbar