Hi everyone.

I'm hoping to tap your collective knowledge.
We are working on a three-tier application and need to fetch data from a database and display it to the user at the other end.
Since this data can become quite large, we don't want to hold it all in memory.
Currently we use ADO to request the data in chunks and then insert it into an ElevateDB store, but this is not very efficient.
Does anyone know of a good solution to this problem?

Comments

  1. 1) Where do you display the data: in a desktop application, a web app, etc.?
    2) How large can the data be, worst-case scenario?

  2. The data is displayed in a desktop application (32-bit only).
    We can have many result sets displayed at the same time. Worst case, each of these is upwards of 100 MB in size.

  3. Have you considered serving the data compressed (something fast like zlib)? It can be stored on the client's PC and looked up from there; of course, you first have to serve it with all the lookups performed...

    However, who would visually parse over 100 MB of data? You may want to consider filtering the data down to something more humanly manageable: charts, a few rows of a report, etc. I don't have enough info, but you get the gist...
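    For illustration, a minimal sketch of the compress-then-ship idea in Python (zlib is in the stdlib; the tab-separated row format is just an assumption, not anything from the app in question):

    ```python
    import zlib

    def compress_chunk(rows):
        """Serialize a chunk of rows as tab-separated text and compress it."""
        payload = "\n".join("\t".join(map(str, row)) for row in rows).encode("utf-8")
        return zlib.compress(payload, level=1)  # level 1: fast, fine for text

    def decompress_chunk(blob):
        """Inverse of compress_chunk; all fields come back as strings."""
        payload = zlib.decompress(blob).decode("utf-8")
        return [line.split("\t") for line in payload.split("\n")]

    rows = [(1, "PC-001", "8 GB"), (2, "PC-002", "16 GB")]
    blob = compress_chunk(rows)
    restored = decompress_chunk(blob)
    ```

    Repetitive tabular text tends to compress very well, so this can cut transfer size considerably at little CPU cost.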

  4. Dorin Duminica I guess the data could be some sort of audio or video blobs. While the number of retrieved rows might not be that large, the total amount of data transferred can be very high. In this case clever filtering helps, but not much. It might be useful to initially load a restricted set of fields for each row and make the large data available on request, when the user decides to take a detailed look.

  5. Alexander Elagin Agreed. Also, where applicable, the large data can be distributed to peers, with connections between them so that common data is shared more rapidly.

  6. IMO, use Firebird for data storage; it may help solve a lot of the problems you are facing, and it's also free.

  7. Well, 100 MB really is the worst case. In general the data won't be that large.
    To clarify:
    We allow our users to build up the queries they want to see, which are then converted to corresponding SQL queries. If the user does not specify any filters, the data can become large. (This is hardware and software data for a potentially large number of computers.)
    A direct connection between the desktop application and our database is not possible, so at the moment we use ElevateDB to store this data.
    But getting the data in there is what is causing our performance problems. So we are looking for a way to write the SQL query results directly into file-based storage, without having to go through ADO and then inserting them into some other store.
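    The chunked fetch-and-store pattern described above can be sketched like this in Python, purely for illustration (sqlite3 stands in for ElevateDB, and a generic DB-API cursor for the ADO recordset; the table layout is made up):

    ```python
    import sqlite3

    CHUNK = 500  # rows per fetch; tune so a chunk never strains memory

    def stream_to_local(source_cursor, local_path):
        """Stream query results chunk by chunk into a local file-based store,
        so the full result set never sits in memory at once."""
        local = sqlite3.connect(local_path)
        local.execute("CREATE TABLE IF NOT EXISTS results (id INTEGER, name TEXT)")
        total = 0
        while True:
            rows = source_cursor.fetchmany(CHUNK)  # one chunk at a time
            if not rows:
                break
            local.executemany("INSERT INTO results VALUES (?, ?)", rows)
            total += len(rows)
        local.commit()  # single commit: one transaction is far faster than one per row
        local.close()
        return total
    ```

    Batching the inserts (`executemany`) inside a single transaction is usually the difference between a slow and a fast bulk load, whatever the local store is.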

  8. If you are storing locally for the time being, concurrency control is somehow managed, and it is a read-only system, then I wonder why it is not working. Is it a read-only system? Local? If it is not, then there lies your enemy.

  9. Christoph Hillefeld You can also page the results, i.e. limit the SQL results to, say, 1,000 records (very manageable) and send them compressed; then, when going to the next/previous page, simply rerun the query adding WHERE id > X or id < Y. It will be nearly instant; if not, you can toy with the results per "page" and see what works better in the worst-case scenario.

    Maybe run a "test" when the user connects to see how much a user can fetch in a reasonable amount of time (say 10-20 seconds), and set the limit per page based on that.
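    A sketch of that keyset-style paging idea, shown here with Python and sqlite3 (table and column names are invented for the example):

    ```python
    import sqlite3

    # Keyset ("seek") pagination: page by the last seen id instead of OFFSET,
    # so each page is a cheap indexed range scan even deep into the result set.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE machines (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO machines VALUES (?, ?)",
                     [(i, f"PC-{i:03d}") for i in range(1, 11)])

    PAGE_SIZE = 4

    def next_page(conn, last_id):
        """Fetch the page that starts just after the row with id == last_id."""
        return conn.execute(
            "SELECT id, name FROM machines WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, PAGE_SIZE)).fetchall()

    page1 = next_page(conn, 0)             # first page: ids 1..4
    page2 = next_page(conn, page1[-1][0])  # next page:  ids 5..8
    ```

    The client only needs to remember the last id it saw; no server-side state is required for the paging itself.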

  10. If it is not a read-only system locally and peers do frequent reads and writes, then you are adding extra complexity by employing ElevateDB or SQLite as local storage, because the local storage has to manage concurrency internally while communicating with your main database, and that costs time with a huge database. What is your back-end database?
    Try to keep it plain and simple, is my opinion. Let us hear from the experts.

  11. In a multi-tier architecture, you should never return to the client more data than the user can reasonably see in a short time, that is, a single page or only a limited number of pages. Then, when the user scrolls through the data, the client app requests more data from the server. Only what the user sees is transferred "on demand". On the server side, you either change the query to limit the number of records to the current chunk (easy), or you store the full result set in temporary storage (maybe in the same DB) so that the result set remains consistent. You then have to find a way to clean that cache, for example once a day or after some inactivity timeout.
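    A rough sketch of such a server-side result cache with an inactivity timeout, in Python (class and method names are illustrative assumptions, not anyone's actual API):

    ```python
    import time

    class ResultCache:
        """Hold full result sets server-side, serve pages, evict after idle TTL."""

        def __init__(self, ttl_seconds=3600):
            self.ttl = ttl_seconds
            self._store = {}  # query_id -> (last_access_time, rows)

        def put(self, query_id, rows):
            self._store[query_id] = (time.monotonic(), rows)

        def get_page(self, query_id, offset, size):
            last, rows = self._store[query_id]
            # Refresh last-access so actively scrolled results stay cached.
            self._store[query_id] = (time.monotonic(), rows)
            return rows[offset:offset + size]

        def evict_stale(self):
            """Drop result sets idle longer than the TTL; returns evicted ids."""
            now = time.monotonic()
            stale = [k for k, (t, _) in self._store.items() if now - t > self.ttl]
            for k in stale:
                del self._store[k]
            return stale
    ```

    Caching the snapshot keeps paging consistent even if the underlying tables change mid-browse; the eviction pass can run from a periodic background job.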

  12. Or try RemObjects Data Abstract as a middle tier to connect to the database. Then you can use dynamic queries, paged fetching, etc.

  13. Data Abstract provides good pagination solutions.

