From jay at jays.net  Mon Aug  3 20:20:41 2009
From: jay at jays.net (Jay Hannah)
Date: Mon, 3 Aug 2009 22:20:41 -0500
Subject: [Omaha.pm] Meeting room change: PKI 276
References: <8F03958F-07DA-4F37-B3DA-E890FD44D651@jays.net>
Message-ID: <272CCF56-5FBB-46A3-AB19-57ED494BD2BC@jays.net>

We've been meeting in room 269 for a couple years. We've been moved to
room 276. Same size (big), computers everywhere, projectors, white
boards, open wifi, etc. Same hallway, on the other side. You can't miss
it. I'll tape a sign on the old room too, just in case. :)

Meetings: second Tuesday of each month, 7pm-9pm
UNO's Peter Kiewit Institute (PKI) map
Room PKI 276
1110 South 67th Street
Omaha, NE

Lost? Jay's mobile phone: 402-578-3976

j
http://jays.net/wiki/ODynUG


From jhannah at omnihotels.com  Wed Aug  5 11:48:50 2009
From: jhannah at omnihotels.com (Jay Hannah)
Date: Wed, 5 Aug 2009 13:48:50 -0500
Subject: [Omaha.pm] Catalyst Action @args tip
Message-ID: <396CEDAA86B38646ACE2FEAA22C3FBF1023E851E@l3exchange.omnihotels.net>

FYI, Catalyst parses URL arguments for you. So if someone hits the URL
"/histPhoenix/blah" you can say

   sub histPhoenix : Local {
      my ($self, $c, @args) = @_;

and $args[0] will be 'blah'. So instead of writing this:

   sub histPhoenix : Local {
      my ($self, $c) = @_;
      my $uri = ${$c->req->uri};
      if ( my ($type) = ( $uri =~ /histPhoenix\/(.+?)$/i ) ) {

You can just do this:

   sub histPhoenix : Local {
      my ($self, $c, $type) = @_;

$0.02,

j


From jay at jays.net  Wed Aug 12 12:10:25 2009
From: jay at jays.net (Jay Hannah)
Date: Wed, 12 Aug 2009 14:10:25 -0500
Subject: [Omaha.pm] Next meeting Tue Sep 8
References: <8116A17C-7657-4CF4-BEF2-8A10810B9C06@jays.net>
Message-ID: 

TEKsystems has offered to sponsor our September meeting. Thank you!

Thanks for coming out last night! It was fun! See you all September 8!

PS, here's what happens when Alias (irc.perl.org #poe) is left in a
room with GraphViz and POE:

http://svn.ali.as/graph/POE

j
http://jays.net/wiki/ODLUG


From choman at gmail.com  Wed Aug 12 12:13:11 2009
From: choman at gmail.com (Chad Homan)
Date: Wed, 12 Aug 2009 14:13:11 -0500
Subject: [Omaha.pm] Next meeting Tue Sep 8
In-Reply-To: 
References: <8116A17C-7657-4CF4-BEF2-8A10810B9C06@jays.net>
Message-ID: 

Jay

Is this the dyn lang home page?: http://jays.net/wiki/ODLUG

Chad, CISSP
Sent from Elkridge, MD, United States
Charles de Gaulle - "The better I get to know men, the more I find
myself loving dogs."

On Wed, Aug 12, 2009 at 2:10 PM, Jay Hannah wrote:

> TEKsystems has offered to sponsor our September meeting. Thank you!
>
> Thanks for coming out last night! It was fun! See you all September 8!
>
> PS, here's what happens when Alias (irc.perl.org #poe) is left in a room
> with GraphViz and POE:
>
> http://svn.ali.as/graph/POE
>
> j
> http://jays.net/wiki/ODLUG
>
> _______________________________________________
> Omaha-pm mailing list
> Omaha-pm at pm.org
> http://mail.pm.org/mailman/listinfo/omaha-pm


From jay at jays.net  Wed Aug 12 12:47:55 2009
From: jay at jays.net (Jay Hannah)
Date: Wed, 12 Aug 2009 14:47:55 -0500
Subject: [Omaha.pm] Next meeting Tue Sep 8
In-Reply-To: 
References: <8116A17C-7657-4CF4-BEF2-8A10810B9C06@jays.net>
Message-ID: <60D90822-FD75-4186-81C3-4E1AC6BF7F47@jays.net>

On Aug 12, 2009, at 2:13 PM, Chad Homan wrote:
> Is this the dyn lang home page?: http://jays.net/wiki/ODLUG

Yup.

I also registered odlug.org, but my cheapo ISP Apache config gave me
trouble last month and I haven't circled back to fighting it again. We
remain http://omaha.pm.org (which works fine in my cheapo ISP Apache
config -- go figure). :)

j


From dan at linder.org  Mon Aug 24 07:17:01 2009
From: dan at linder.org (Dan Linder)
Date: Mon, 24 Aug 2009 09:17:01 -0500
Subject: [Omaha.pm] Embeddable database options.
Message-ID: <3e2be50908240717g4b5d6a07s2d8adbf3e71daf5b@mail.gmail.com>

Guys,

I'm looking at rewriting some of the store/retrieve code in a project
I'm working on. The current method uses Data::Dumper and eval() to
store data in a hierarchical directory structure on disk. Over the
weekend I all but eliminated the hard-disk overhead by moving the data
to a temporary RAM disk -- sadly, the speed-ups were too small to
notice. This tells me that the overall Linux file-system caching is
working quite well. (Yay!) Unfortunately, it also leads me (again) to
conclude that the Dumper/eval() code is probably the bottleneck.
(Definitely not what they were designed for, but they work remarkably
well nonetheless...)
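Before rewriting anything, it may be worth confirming that suspicion
with a micro-benchmark -- Benchmark, Data::Dumper, and Storable are all
core modules, so it costs nothing to try. A rough sketch (the record
shape is made up; Storable is the XS cousin of the pure-Perl
FreezeThaw):

   #!/usr/bin/perl
   use strict;
   use warnings;
   use Benchmark qw(cmpthese);
   use Data::Dumper;
   use Storable qw(freeze thaw);

   # Hypothetical record, shaped roughly like one on-disk entry.
   my $record = {
       host   => 'db01',
       checks => [ map { { name => "check$_", status => 'ok' } } 1 .. 50 ],
   };

   $Data::Dumper::Purity = 1;   # make Dumper output safe to eval back

   cmpthese( -3, {
       dumper_eval => sub {
           my $text = Dumper($record);
           my $VAR1;                 # Dumper output assigns to $VAR1
           my $copy = eval $text;
           die $@ if $@;
       },
       storable => sub {
           my $copy = thaw( freeze($record) );
       },
   } );

If storable wins by an order of magnitude, that pretty much convicts
the serializer rather than the filesystem.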
So, I started investigating alternatives:
 * A true database with a client/server model (e.g. MySQL, PostgreSQL)
 * An embedded database such as SQLite (others?)
 * Continue using the filesystem+directory structure, but with
   freeze()/thaw() from the FreezeThaw CPAN module (speed improvement?)
 * Use a DBD module to store/retrieve these files (e.g. DBD::File,
   DBD::CSV); the benefit here is that a simple change in the DB setup
   code -- say from DBD::File to DBD::SQLite or DBD::Pg -- should be
   fairly short work

Internally I have some constraints:
 * We'd like to keep the number of non-core Perl modules down
   (currently we're 90% core), and a couple customers are extremely
   sensitive to anything that is not supplied by their OS provider
   (Solaris and HP-UX, for example).
 * We would also like to keep the files on disk in a human-readable
   form so the end users and support staff can peruse this data with
   simple tools (grep, vi, etc.).
 * The remaining 10% that is non-core Perl modules are local copies of
   "pure perl" CPAN modules we've merged into the source code branch
   directly. (We do this because the code runs on Solaris/SPARC,
   Solaris/x86_64, Linux/x86, Linux/ia64, HPUX/PA-RISC, HPUX/ia64, etc.)

My personal pick at the moment is SQLite (it is provided natively in
Solaris 10, and easy to install on Linux platforms), but I question
whether the speed-up it provides will be overshadowed by the constant
spawning of the sqlite binary each time an element of data is queried.
(Anyone know if there is a way to leave a persistent copy of SQLite
running in memory that future copies hook into? Getting a bit far
afield from the initial SQLite implementation goals...)

Thanks for any insight,

DanL

-- 
"Quis custodiet ipsos custodes?" (Who can watch the watchmen?) -- from
the Satires of Juvenal
"I do not fear computers, I fear the lack of them." -- Isaac Asimov


From mario at ruby-im.net  Mon Aug 24 14:11:31 2009
From: mario at ruby-im.net (Mario Steele)
Date: Mon, 24 Aug 2009 17:11:31 -0400
Subject: [Omaha.pm] Embeddable database options.
In-Reply-To: <3e2be50908240717g4b5d6a07s2d8adbf3e71daf5b@mail.gmail.com>
References: <3e2be50908240717g4b5d6a07s2d8adbf3e71daf5b@mail.gmail.com>
Message-ID: 

Hello Dan,

On Mon, Aug 24, 2009 at 10:17 AM, Dan Linder wrote:
> I'm looking at rewriting some of the store/retrieve code in a project
> I'm working on. The current method uses Data::Dumper and eval() to
> store data in a hierarchical directory structure on disk. [...]
> Unfortunately, it also leads me (again) to conclude that the
> Dumper/eval() code is probably the bottleneck.

eval() is more than likely your biggest bottleneck. Dumper not so
much, but heavy use of string eval in any language can create a
bottleneck in nothing flat.

> So, I started investigating alternatives:
>  * A true database with a client/server model (e.g. MySQL, PostgreSQL)

Use MySQL / PostgreSQL if you are going to have many concurrent hits
to the script. They handle concurrency well, and they also avoid the
SQLite locking issue mentioned below.

>  * An embedded database such as SQLite (others?)

SQLite is a great database system for file-based data storage.
Unfortunately, it stores data in a binary format, so you can't exactly
use grep, vi, etc. to read the contents of the database file. And
unlike its big brothers, you can only have one transactional lock
(i.e. one database open) at a time on a database file; this is to
prevent corruption of the data. (And yes, it locks even if you're just
doing a read query.)

>  * Continue using the filesystem+directory structure, but with
>    freeze()/thaw() from the FreezeThaw CPAN module (speed improvement?)

I don't know if freeze()/thaw() will do much good: it still comes down
to serializing and deserializing the whole structure on every access,
just as Dumper/eval() does.

>  * Use a DBD module to store/retrieve these files (e.g. DBD::File,
>    DBD::CSV)

DBD overall is a great front end to use for database storage, as it
gives you a common API across many different DB backends. If you want
consistency, and the ability to test different database storage
engines, then I would strongly recommend you use DBI/DBD.

> Internally I have some constraints:
>  * We'd like to keep the number of non-core Perl modules down
>    (currently we're 90% core), and a couple customers are extremely
>    sensitive to anything that is not supplied by their OS provider
>    (Solaris and HP-UX, for example).

True in many facets, but you'll find that MySQL and SQLite are among
the packages most commonly shipped with operating systems (aside from
Windows, but we won't go there).

>  * We would also like to keep the files on disk in a human-readable
>    form so the end users and support staff can peruse this data with
>    simple tools (grep, vi, etc.).

Again, as stated above, SQLite and MySQL won't let you run grep, vi,
etc. against the data directly, but simple tools can be written to the
same effect -- highly optimized for specific tasks, instead of digging
through hundreds of lines of data to find a specific field.
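For example, a dozen lines of DBI get you grep back (a minimal sketch;
the table name comes from the command line, and it trusts its
arguments, so keep it an internal support tool):

   #!/usr/bin/perl
   # sqldump: print a SQLite table as tab-separated text so that
   # grep/less/awk work on it again.
   use strict;
   use warnings;
   use DBI;

   my ($dbfile, $table) = @ARGV;
   die "usage: $0 dbfile table\n" unless defined $table;

   my $dbh = DBI->connect("dbi:SQLite:dbname=$dbfile", '', '',
                          { RaiseError => 1 });
   my $sth = $dbh->prepare("SELECT * FROM $table");
   $sth->execute;

   print join("\t", @{ $sth->{NAME} }), "\n";   # column header row
   while (my @row = $sth->fetchrow_array) {
       print join("\t", map { defined $_ ? $_ : '' } @row), "\n";
   }

Then something like "sqldump data.db hosts | grep db01" (names
invented) feels close enough to the old flat files for support staff.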
>  * The remaining 10% that is non-core Perl modules are local copies of
>    "pure perl" CPAN modules we've merged into the source code branch
>    directly. (We do this because the code runs on Solaris/SPARC,
>    Solaris/x86_64, Linux/x86, Linux/ia64, HPUX/PA-RISC, HPUX/ia64, etc.)
>
> My personal pick at the moment is SQLite (it is provided natively in
> Solaris 10, and easy to install on Linux platforms), but I question
> whether the speed-up it provides will be overshadowed by the constant
> spawning of the sqlite binary each time an element of data is queried.
> (Anyone know if there is a way to leave a persistent copy of SQLite
> running in memory that future copies hook into?)

Now, having explained the above, I'll come directly to the point.
SQLite binary (BLOB) data types, while they may look expensive in
terms of data allocation, cost very little in overall speed. This can
be optimized further when you only need to look at specific fields and
don't care about the rest. As with anything else, SQLite does have
overhead, but not nearly as much as you might think: it only allocates
the data needed to return the results of a query, or to insert data
into the database.

The SQLite team has put a great deal of effort into optimizing the
engine so that it stores and retrieves data as efficiently as
possible while staying fast and correct. Many Linux distributions
(Ubuntu among them) use SQLite for a fair amount of their own system
storage. Using SQLite has its advantages, but also its downsides. If
you want to avoid database locking issues, then I suggest MySQL. If
you're looking for a lightweight solution that is quick, and locking
is not a big worry, then I would suggest SQLite.
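On the "constant spawning of the sqlite binary" worry: with
DBD::SQLite there is no separate binary at all -- the SQLite engine is
compiled into the driver, so queries run inside your Perl process. And
if you ever want a purely in-memory database, the DSN alone does it:

   use strict;
   use warnings;
   use DBI;

   # In-memory SQLite database: it lives only as long as this handle
   # and this process; other processes cannot hook into it.
   my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                          { RaiseError => 1 });

So a shared, persistent in-memory copy across processes is the one
thing it won't give you; for that you're back to a client/server
database (or the RAM disk you already tried).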
Hope this helps; it's just my own two cents on the deal.

-- 
Mario Steele
http://www.trilake.net
http://www.ruby-im.net
http://rubyforge.org/projects/wxruby/
http://rubyforge.org/projects/wxride/


From dan at linder.org  Wed Aug 26 07:30:32 2009
From: dan at linder.org (Dan Linder)
Date: Wed, 26 Aug 2009 09:30:32 -0500
Subject: [Omaha.pm] Embeddable database options.
In-Reply-To: 
References: <3e2be50908240717g4b5d6a07s2d8adbf3e71daf5b@mail.gmail.com>
Message-ID: <3e2be50908260730q142e809bjacf6863e062e9744@mail.gmail.com>

Mario,

Thanks for the feedback.

2009/8/24 Mario Steele :
> SQLite is a great database system for file-based data storage.
> Unfortunately, it stores data in a binary format, so you can't exactly
> use grep, vi, etc. to read the contents of the database file. And
> unlike its big brothers, you can only have one transactional lock
> (i.e. one database open) at a time on a database file; this is to
> prevent corruption of the data. (And yes, it locks even if you're just
> doing a read query.)

Thankfully, the database is very light on the write/update side. Does
the read lock lock out other readers at the same time?

> DBD overall is a great front end to use for database storage, as it
> gives you a common API across many different DB backends. If you want
> consistency, and the ability to test different database storage
> engines, then I would strongly recommend you use DBI/DBD.

As much as there is some pressure to stay "Pure Perl" and not rely on
non-core modules, I think this is the only route toward expanding this
tool. (Plus the added flexibility of adding other DB options by
including the appropriate Perl module.)

>> Internally I have some constraints:
>>  * We'd like to keep the number of non-core Perl modules down
>>    (currently we're 90% core), and a couple customers are extremely
>>    sensitive to anything that is not supplied by their OS provider
>>    (Solaris and HP-UX, for example).
>
> True in many facets, but you'll find that MySQL and SQLite are among
> the packages most commonly shipped with operating systems (aside from
> Windows, but we won't go there).

Thankfully the server portion is 100% Unix. :-)

>>  * We would also like to keep the files on disk in a human-readable
>>    form so the end users and support staff can peruse this data with
>>    simple tools (grep, vi, etc.).
>
> Again, as stated above, SQLite and MySQL won't let you run grep, vi,
> etc. against the data directly, but simple tools can be written to the
> same effect, highly optimized for specific tasks.

I'm thinking that as a work-around to this, I can keep both versions
available. Since these data files are only updated in a couple key
locations (and the update is mostly through non-interactive means),
this should be easily achievable. Once the data is saved in both forms
and the flat and DB files are consistent, updating the reporting
pieces should be easier since I won't (shouldn't) break anything
during the transition. An added bonus is that customers who rely on
the textual data will not have to immediately re-code for the new DB
chosen.

As I'm writing it, I'm leaning toward using the DBD interface and
accessing SQLite initially. If/when the time comes that
MySQL/Postgres/Oracle/AnotherDB is requested, the changes should be
minimal. The downside of being an external module is greatly
outweighed by the flexibility it provides us.
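In sketch form, the backend swap should then amount to a one-line DSN
change (the config switch, paths, and credentials below are invented):

   use strict;
   use warnings;
   use DBI;

   # One DSN per supported backend; everything downstream just talks
   # to $dbh through the common DBI API.
   my %dsn_for = (
       sqlite => 'dbi:SQLite:dbname=/var/lib/myapp/data.db',
       mysql  => 'dbi:mysql:database=myapp;host=dbhost',
       pg     => 'dbi:Pg:dbname=myapp;host=dbhost',
   );

   my $backend = $ENV{MYAPP_DB} || 'sqlite';   # hypothetical switch
   my ($user, $pass) = ('', '');               # fill in for client/server DBs

   my $dbh = DBI->connect( $dsn_for{$backend}, $user, $pass,
                           { RaiseError => 1, AutoCommit => 1 } );

That leaves the SQL itself as the only thing to keep portable across
engines.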
I'm hoping to carve out some free time over the next couple weeks to
put some test code together to see what speed differentials are
achieved by replacing dump/eval with SQLite, MySQL, etc.

Thanks,

Dan

-- 
"Quis custodiet ipsos custodes?" (Who can watch the watchmen?) -- from
the Satires of Juvenal
"I do not fear computers, I fear the lack of them." -- Isaac Asimov


From mario at ruby-im.net  Wed Aug 26 14:49:59 2009
From: mario at ruby-im.net (Mario Steele)
Date: Wed, 26 Aug 2009 17:49:59 -0400
Subject: [Omaha.pm] Embeddable database options.
In-Reply-To: <3e2be50908260730q142e809bjacf6863e062e9744@mail.gmail.com>
References: <3e2be50908240717g4b5d6a07s2d8adbf3e71daf5b@mail.gmail.com>
	<3e2be50908260730q142e809bjacf6863e062e9744@mail.gmail.com>
Message-ID: 

Hey Dan,

On Wed, Aug 26, 2009 at 10:30 AM, Dan Linder wrote:
> Thanks for the feedback.

Not a problem.

> Thankfully, the database is very light on the write/update side. Does
> the read lock lock out other readers at the same time?

This is an excerpt from SQLite's manual regarding concurrency:

   SQLite uses reader/writer locks on the entire database file. That
   means if any process is reading from any part of the database, all
   other processes are prevented from writing any other part of the
   database. Similarly, if any one process is writing to the database,
   all other processes are prevented from reading any other part of
   the database. For many situations, this is not a problem. Each
   application does its database work quickly and moves on, and no
   lock lasts for more than a few dozen milliseconds. But there are
   some applications that require more concurrency, and those
   applications may need to seek a different solution.

In simpler terms, the longer your SQL statement takes to execute, the
longer the database stays locked in either read or write mode. If
you're getting small subsets of data from the database, then the
milliseconds quote is accurate, and the statements won't take long to
execute. However, if you are inserting or retrieving a few hundred
megs of records in one SQL statement, the engine will take longer to
move the data and will hold the lock for the duration.

I believe you can have as many readers as you want reading from the
database, but once a program sends a write SQL statement, all other
readers are locked out until the writer finishes, and the same in the
opposite direction. But if it's small updates and small record sets
being retrieved, then you shouldn't have anything to worry about.
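If small updates ever do collide and you start seeing "database is
locked" errors, DBD::SQLite lets you set a busy timeout so a writer
retries while readers clear instead of failing immediately. Roughly
(the table is just for illustration):

   use strict;
   use warnings;
   use DBI;

   my $dbh = DBI->connect('dbi:SQLite:dbname=data.db', '', '',
                          { RaiseError => 1, AutoCommit => 1 });

   # Wait up to 2000 ms for a competing lock before giving up.
   $dbh->sqlite_busy_timeout(2000);

   # A small write; it holds the write lock only briefly.
   $dbh->do('UPDATE hosts SET status = ? WHERE name = ?',
            undef, 'ok', 'db01');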
> As much as there is some pressure to stay "Pure Perl" and not rely on
> non-core modules, I think this is the only route toward expanding this
> tool.

There are a lot of people that way, same as with wanting "Pure Ruby"
or "Pure Python" or "pure" any other language out there. But the thing
to realize is that Perl, Ruby, Python, and the like are all
interpreted languages, and anything "pure" in those languages is going
to be slow to execute. The best thing about SQLite -- at least in
Ruby; I haven't dealt much with the Perl one -- is that the extension
binds the engine itself, all compiled together into a single dll/so,
so you don't have to rely on an external library being installed. And
most times it compiles without any problems, especially on Linux-based
systems.

> Thankfully the server portion is 100% Unix. :-)

And again, that's a lifesaver for you, as stated above.

> I'm thinking that as a work-around to this, I can keep both versions
> available. [...] An added bonus is that customers who rely on the
> textual data will not have to immediately re-code for the new DB
> chosen.

Another nice thing you can do, if you want to go this route of keeping
grep/vi available, is create simple wrappers around grep and vi that
run SQL statements to generate the output and pipe it into grep or vi,
with no problems at all.

> As I'm writing it, I'm leaning toward using the DBD interface and
> accessing SQLite initially. If/when the time comes that
> MySQL/Postgres/Oracle/AnotherDB is requested, the changes should be
> minimal.

I would definitely agree with that: you want flexibility, not just in
the code that you have to write, but in the programs themselves, to
meet the needs of various customers. Not all customers have the same
needs, and once you start using the DBD interface you'll find that it
works quite well, requiring hardly any differences between the various
database backends that DBD supports.

> I'm hoping to carve out some free time over the next couple weeks to
> put some test code together to see what speed differentials are
> achieved by replacing dump/eval with SQLite, MySQL, etc.

I can almost guarantee that SQLite and MySQL will overpower dump/eval
in nothing flat. At that point your bottlenecks should disappear,
unless you write sloppy SQL statements -- and JOINs are your friends
for cross-table data retrieval.

And you're welcome for the help,

Mario

-- 
Mario Steele
http://www.trilake.net
http://www.ruby-im.net
http://rubyforge.org/projects/wxruby/
http://rubyforge.org/projects/wxride/


From netarttodd at gmail.com  Wed Aug 26 17:29:35 2009
From: netarttodd at gmail.com (Todd Christopher Hamilton)
Date: Wed, 26 Aug 2009 19:29:35 -0500
Subject: [Omaha.pm] Embeddable database options.
In-Reply-To: 
References: <3e2be50908240717g4b5d6a07s2d8adbf3e71daf5b@mail.gmail.com>
Message-ID: <1fdb7d920908261729m97f7d47p5e2ed2fae0f77d8b@mail.gmail.com>

I use SQLite for embedded work, but you could also look at Firebird. I
know of a cool healthcare project used by the med center that uses
Firebird as a local cache for a fat client.

2009/8/24 Mario Steele :
> Hello Dan,
> [...]

-- 
Todd Christopher Hamilton
(402) 660-2787