Database-Centric Applications


Most web applications require some persistent storage, which usually means a shared database. In this kind of application it is possible to identify several layers:


1. The database server hosting the database.

2. The data access technology connecting the application to the database.

3. The web application.

4. One or several clients running in web browsers.


While the clients may connect from the Internet or from the local intranet, the web application server needs a much faster, closer connection to the database server to achieve reasonable performance when answering client requests.


Accessing the database directly from the web application server (meaning that layer 2 becomes part of it) increases resource usage on the server, risking its stability and scalability.


Taking into account the limited bandwidth available between the clients and the server, it is important to limit the amount of information sent to the client (or coming back from it).


As one web server will be answering the requests coming from several clients, it is also important to limit the amount of information required for the client functionality.


Having one shared database server also means that the database should be optimized for multiple connections requesting access to that shared resource.


Let's mention a few best practices for handling this kind of application.




An enterprise-class application using a shared database as persistent storage usually follows some best practices:


Instead of using the same tables for all purposes, it separates them by task. It is even possible to have different databases hosted on different servers: one database takes care of online transaction processing (OLTP), while another provides data for complex reports (OLAP).

The OLTP database should be in Third Normal Form (3NF), making it easier to preserve its integrity. It should be optimized for fast data entry, updates, and simple queries (with indexes tuned for the expected use). Complex updates should use transactions, so that a failed update can be rolled back as a whole.

If the volume of transactions slows down the database, it is worth evaluating the option of moving older transactions (which by then can be considered read-only) to a secondary database used for historical purposes or as the data source for the OLAP database.

The OLAP database should be optimized for complex queries with fast response times. As a result, it usually aggregates data and adds dimensions. The most typical architecture is a large fact table surrounded by dimension tables in a star schema.
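The star schema mentioned above can be sketched with any SQL database. The following minimal example uses Python and SQLite purely as a convenient stand-in (the document itself targets Delphi and a full database server); all table and column names are hypothetical.

```python
import sqlite3

# Minimal star schema sketch: one fact table plus dimension tables.
# All names here are illustrative, not from the original document.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
    CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    -- The fact table holds the measures plus foreign keys to the dimensions.
    CREATE TABLE fact_sales (
        product_id INTEGER REFERENCES dim_product(product_id),
        date_id    INTEGER REFERENCES dim_date(date_id),
        quantity   INTEGER,
        amount     REAL
    );
""")
con.executemany("INSERT INTO dim_product VALUES (?,?,?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
con.executemany("INSERT INTO dim_date VALUES (?,?,?)",
                [(1, 2024, 1), (2, 2024, 2)])
con.executemany("INSERT INTO fact_sales VALUES (?,?,?,?)",
                [(1, 1, 10, 100.0), (2, 1, 5, 250.0), (1, 2, 7, 70.0)])

# A typical OLAP-style query: aggregate the fact table by dimension attributes.
rows = con.execute("""
    SELECT d.year, d.month, SUM(f.amount)
    FROM fact_sales f JOIN dim_date d ON f.date_id = d.date_id
    GROUP BY d.year, d.month ORDER BY d.month
""").fetchall()
print(rows)  # → [(2024, 1, 350.0), (2024, 2, 70.0)]
```

Note how the report query touches only the fact table and the dimensions it needs; this is the shape that OLAP workloads are optimized for.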

Any heavy processing of the data stored in the database should be done on the database server (imagine what would happen if we modified millions of records in a table by reading them into the application server and posting them back!). Even if some modern developers think of the database as just object storage, an enterprise-class database server can host business logic as stored procedures, which can have a dramatic impact on performance.
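The difference between row-by-row processing in the application server and a set-based statement executed on the database server can be sketched as follows. Python and SQLite stand in for the real stack here; table names and the 5% adjustment are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
con.executemany("INSERT INTO accounts VALUES (?, ?)",
                [(i, 100.0) for i in range(1, 1001)])

# Anti-pattern: read every row into the application, modify it, write it back.
# For millions of rows this moves all the data across the wire twice:
#
#   for rid, balance in con.execute("SELECT id, balance FROM accounts"):
#       con.execute("UPDATE accounts SET balance = ? WHERE id = ?",
#                   (balance * 1.05, rid))

# Server-side alternative: one set-based statement, no data leaves the server.
con.execute("UPDATE accounts SET balance = balance * 1.05")

total = con.execute("SELECT ROUND(SUM(balance), 2) FROM accounts").fetchone()[0]
print(total)  # → 105000.0
```

On a real database server the same idea extends to stored procedures, which keep even multi-statement business logic next to the data.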

Avoid triggering synchronous actions from client requests unless strictly necessary. For example, if we need to publish or log every database update to a remote server or web service, it is better to leave a notification in a local table and process the notifications in a background job.
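The notification-table idea above can be sketched like this. Python and SQLite are stand-ins for the real environment, and the table, column, and function names are all illustrative.

```python
import sqlite3

# "Outbox" sketch: instead of calling a remote service inside the client
# request, leave a row in a local table and let a background job drain it.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE notifications (
    id INTEGER PRIMARY KEY, payload TEXT, processed INTEGER DEFAULT 0)""")

def on_client_update(payload: str) -> None:
    # Runs inside the client request: a cheap local insert, no remote call.
    con.execute("INSERT INTO notifications (payload) VALUES (?)", (payload,))

def background_job() -> int:
    # Runs periodically, outside any client request.
    pending = con.execute(
        "SELECT id, payload FROM notifications WHERE processed = 0").fetchall()
    for nid, payload in pending:
        # publish_to_remote_service(payload)  # hypothetical slow remote call
        con.execute("UPDATE notifications SET processed = 1 WHERE id = ?", (nid,))
    return len(pending)

on_client_update("order 1 shipped")
on_client_update("order 2 shipped")
print(background_job())  # → 2
```

The client request stays fast regardless of how slow the remote service is, and a failed delivery can simply be retried on the next run of the job.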


None of the previous best practices is specific to uniGUI; they apply to any data-centric application.


Database Access


Having a powerful database server and a fast connection to it doesn't mean we should request information we don't need. A database table can hold millions of rows, but we should never ask for ALL of them; there is no practical purpose in doing so.


Let's assume we access the database through a data module, a database connection (FireDAC TFDConnection or similar), and a query. What happens if we ask for ALL the records of that huge table? Obviously, after some time (possibly a long time, depending on the memory available on the server) we will receive an "Out of Memory" exception. Do we really need all those records? No; we are just blindly applying the same techniques we used in small applications.


The most important best practice when accessing a database is to ask for just enough information: exactly what we need, never more.
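As a concrete sketch of that principle, compare an unbounded query with one that is filtered and limited to what a single screen can show. Python and SQLite are used as a stand-in for the real database stack, and the table and filter are invented for the example.

```python
import sqlite3

# A large table should never be fetched whole; ask only for the slice
# the user can actually see. Names here are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(i, f"Customer {i}") for i in range(1, 100001)])

# Bad:  SELECT * FROM customers   -- loads all 100,000 rows into memory.
# Good: filter and limit the request to one screenful of data.
page = con.execute(
    "SELECT id, name FROM customers WHERE name LIKE ? ORDER BY id LIMIT 25",
    ("Customer 1%",)).fetchall()
print(len(page))  # → 25
```

The same WHERE/LIMIT discipline applies no matter which data access technology sits between the application and the server.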


It is also important to know the specific technology we are using for accessing the database:

Some of these technologies are smart enough to accept a request to open such a query but deliver the data through a clever paging algorithm. The developer knows how many records are available, yet the paging process fetches them only when needed.

Other technologies require manual configuration to avoid massive data loads, and some have to fetch all records just to report the RecordCount.

Most middleware products (such as RemObjects DataAbstract, kbmMW, and DataSnap) take care of these issues.


The simplest solution to this kind of issue is to make sure that any information requested by the user is always filtered. If the number of matching records exceeds some predefined limit, the request can be rejected and the user asked to provide a narrower filter.
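One possible shape for such a guard is to count first and reject over-broad requests before fetching anything. This is a sketch under stated assumptions: Python and SQLite stand in for the real stack, and MAX_ROWS, the table, and the helper function are all invented names.

```python
import sqlite3

MAX_ROWS = 100  # illustrative predefined limit

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, "open" if i % 2 else "closed") for i in range(1, 1001)])

def query_orders(where: str, params: tuple):
    # The WHERE fragment comes from trusted application code here;
    # user-supplied values stay in the parameter tuple.
    count = con.execute(
        f"SELECT COUNT(*) FROM orders WHERE {where}", params).fetchone()[0]
    if count > MAX_ROWS:
        raise ValueError(f"{count} rows match; please narrow the filter")
    return con.execute(
        f"SELECT id, status FROM orders WHERE {where}", params).fetchall()

try:
    query_orders("status = ?", ("open",))            # 500 matches → rejected
except ValueError as e:
    print(e)  # → 500 rows match; please narrow the filter
print(len(query_orders("id BETWEEN ? AND ?", (1, 50))))  # → 50
```

The COUNT query is cheap compared with shipping an oversized result set, and the error message tells the user exactly why the request was refused.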


Another option is to accept the original request but take care of the paging ourselves (if it is not supported natively by the data access technology).
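A manual paging scheme could look like the following keyset ("seek") pagination sketch, again using Python and SQLite as stand-ins; PAGE_SIZE, the table, and the function are illustrative names.

```python
import sqlite3

PAGE_SIZE = 50  # illustrative page size

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO products VALUES (?, ?)",
                [(i, f"Product {i}") for i in range(1, 121)])

def fetch_page(after_id: int = 0):
    # Seek past the last key of the previous page; unlike OFFSET, the
    # database never has to scan and discard the earlier rows.
    rows = con.execute(
        "SELECT id, name FROM products WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, PAGE_SIZE)).fetchall()
    last_id = rows[-1][0] if rows else None
    return rows, last_id

page1, last = fetch_page()
page2, last = fetch_page(last)
page3, last = fetch_page(last)
print(len(page1), len(page2), len(page3))  # → 50 50 20
```

Each request carries only one page across the wire, and the "last key" travels back with the page so the client can ask for the next one.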


Above all, as a principle: the amount of information a user can actually consume is always very small, so never send superfluous information (in quality or quantity).


Data Controls (server and client)


In the layered approach of N-Tier applications, each layer should be concerned only with answering requests from the next layer by translating them into requests to the previous layer.


Any uniGUI data control makes requests to the previous layer through datasets.

The client-side Ext JS component (rendered dynamically by the uniGUI server) shows the data received and accepts the user input for making further Ajax requests back to the server.


It is obvious that this traffic should be kept at a minimum.


Let's see a few examples of how uniGUI implements the previous best practice:

TUniEdit - The CheckChangeDelay property allows triggering the OnChange event only after a short delay (useful when filtering a dataset based on the changed value).

TUniDBLookupComboBox - Allows enabling a custom RemoteQuery, which can be optimized according to the text the user is typing (there is also a RemoteDelay property).

TUniDBGrid - Provides column filtering, paging, client-side navigation, etc.