If I had 1000 servers scraping (spidering) the web, running on, say, Amazon Elastic Compute Cloud (Amazon EC2), could FoxPro append massive amounts of data? Is there an upper limit to the number of record locks and unlocks (with "append blank" or "insert into") that can be processed per second, per minute, or per hour? Has anybody written about this before?

Is there a breakdown point? Is there a failure point, and if so, what would eventually fail as you continued to add servers? Or is there a slowdown point where it isn't worth running that many "append blanks" because they just start blocking each other?
Yes, there are limits to database inserts. The more robust the database product, the more it can handle, and the more robust the hardware it runs on, the more it can handle. For example, with FoxPro inserting into xBASE (a file-based dBASE DB), you'll be much more limited than with, say, MS SQL Server 2008, which in turn can handle more than SQL Server 2005. Anyway, I hope that gives you an idea. Follow up with an additional question if you wish.
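As a rough sketch of why row-at-a-time inserts hit a wall (the pages table and its columns here are hypothetical, and the multi-row VALUES form is supported by engines like MySQL and PostgreSQL, not by FoxPro's own SQL dialect): each single-row INSERT pays its own parse, lock, and commit cost, while batching rows into one statement amortizes that overhead.

    -- Hypothetical table for scraped pages
    CREATE TABLE pages (
        url  VARCHAR(2048) NOT NULL,
        body TEXT
    );

    -- One row per statement: each INSERT pays its own lock/commit cycle
    INSERT INTO pages (url, body) VALUES ('http://example.com/a', '...');
    INSERT INTO pages (url, body) VALUES ('http://example.com/b', '...');

    -- Multi-row form: many rows share one statement and one lock/commit cycle
    INSERT INTO pages (url, body) VALUES
        ('http://example.com/c', '...'),
        ('http://example.com/d', '...'),
        ('http://example.com/e', '...');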
Most "enterprise" database engines have one or more bulk-load techniques. For example, PostgreSQL has COPY FROM, MySQL has LOAD DATA INFILE, and Oracle has Data Pump, SQL*Loader, and a native import utility. Smaller "convenience" engines such as FoxPro, Access, and BDE do not have bulk-insert methods. If you absolutely must insert large amounts of data quickly, you will need to move to a database that supports bulk insertion.
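For concreteness, here is roughly what those bulk loaders look like; the table, columns, and file path are hypothetical, and you would check your engine's documentation for the exact options.

    -- PostgreSQL: server-side bulk load from a CSV file
    COPY pages (url, body)
    FROM '/data/pages.csv'
    WITH (FORMAT csv, HEADER true);

    -- MySQL: the rough equivalent
    LOAD DATA INFILE '/data/pages.csv'
    INTO TABLE pages
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    IGNORE 1 LINES
    (url, body);

Either of these can load large row counts in a fraction of the time the equivalent stream of single-row inserts would take, because the engine skips most of the per-statement overhead.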
I suspect that if a database engine does not offer bulk insertion, it is also not designed for other heavy lifting, such as large record counts, many simultaneous users, or deeply nested subqueries. So even if you could get your FoxPro installation to accept data rapidly, you would soon run into other frustrating limitations.