Overview
This article is relevant to entity models that utilize the deprecated Visual Studio integration of Telerik Data Access. The current documentation of the Data Access framework is available here.
Operations on persistent classes in a database are always performed in the context of a transaction. A transaction is an ordered sequence of operations that transforms a database from one consistent state to another.
ACID Transactions
The data in a Telerik Data Access database is maintained in a transactionally consistent state according to the standard definition of ACID transactions.
Atomicity
The Atomicity property states that within a transaction, changes to the values in persistent instances are either executed in their entirety or they are not executed at all, i.e. if one part of the transaction fails, the entire transaction fails. A transaction, therefore, must not leave intermediate states in the database—even when errors occur.
Consistency
The Consistency property states that before the beginning and after the end of a transaction, the state of a database must be consistent. In particular, this means that after the rollback of a transaction (one that was not successfully processed due to an error), the consistent state of the database must be reestablished. Telerik Data Access assures that changes to the values of persistent instances are consistent with changes to other values in the same instance. Telerik Data Access also assures referential integrity, i.e., references to other persistent instances can always be resolved.
Isolation
The Isolation property requires that changes to values in persistent instances are isolated from changes to the same instance in other transactions. For a given transaction, it should appear as though it is running independently from all other transactions associated with the database.
Durability
The Durability property states that if a transaction has been committed successfully, all its changes must be durable in the database, i.e., no transaction committed to a database will be lost. This is guaranteed by the database backend (depending on the backend and its configuration, this guarantee may be weakened or unavailable).
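As a minimal sketch of how these guarantees surface in the API, the snippet below commits two related changes as a single unit of work. The EntitiesModel context and the Customer and Order entities are hypothetical placeholders for a generated Telerik Data Access model.

```csharp
// Hypothetical generated context and entities (EntitiesModel, Customer, Order).
using (EntitiesModel dbContext = new EntitiesModel())
{
    Customer customer = new Customer { Name = "Alice" };
    Order order = new Order { Customer = customer, Total = 42m };

    dbContext.Add(customer);
    dbContext.Add(order);

    // Either both rows are persisted or neither is (atomicity); once
    // SaveChanges returns successfully, the changes survive a crash (durability).
    dbContext.SaveChanges();
}
```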
These principles are reflected in the Telerik Data Access API and runtime system. There are two ways to deal with concurrency conflicts - pessimistic concurrency and optimistic concurrency.
Pessimistic concurrency control locks resources as they are required, for the duration of a transaction. Unless deadlocks occur, a transaction is assured of successful completion. The database row is locked when a user retrieves the data and released when the user finishes working with it; nobody else can touch the data until the row is unlocked. Pessimistic concurrency control has the following advantages and disadvantages (a locking sketch follows the lists below):
Advantages:
- It greatly reduces the potential of conflicts.
Disadvantages:
- It is not scalable, because it maintains open connections and can cause excessive locking, long waits, or even deadlocks.
- Certain sequences of operations are rejected even though they do not violate the chosen degree of isolation. By allowing these operations to be accepted, higher concurrency can be achieved and thus higher scalability is possible.
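How the lock is actually taken is backend-specific. The sketch below is a general ADO.NET illustration against SQL Server, not the Telerik Data Access API: the UPDLOCK hint holds the row for the duration of the transaction, so other writers block until commit or rollback. The connectionString variable and the Orders table are assumptions.

```csharp
using System.Data.SqlClient;

// General ADO.NET sketch of pessimistic locking (SQL Server syntax assumed).
// connectionString and the Orders table are placeholders.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        // UPDLOCK holds a row lock until the transaction ends; other
        // writers block on this row until Commit or Rollback.
        var read = new SqlCommand(
            "SELECT Total FROM Orders WITH (UPDLOCK) WHERE Id = @id",
            connection, transaction);
        read.Parameters.AddWithValue("@id", 1);
        decimal total = (decimal)read.ExecuteScalar();

        var write = new SqlCommand(
            "UPDATE Orders SET Total = @total WHERE Id = @id",
            connection, transaction);
        write.Parameters.AddWithValue("@total", total + 10m);
        write.Parameters.AddWithValue("@id", 1);
        write.ExecuteNonQuery();

        transaction.Commit(); // releases the lock
    }
}
```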
Optimistic concurrency control works on the assumption that resource conflicts between multiple users are unlikely (but not impossible) and allows transactions to execute without locking any resources. Conflicts are checked for only when data is actually about to be changed; if a conflict occurs, Telerik Data Access throws an OptimisticVerificationException. Optimistic concurrency does not lock database rows and relies on the developer to provide logic in the application to handle potential conflicts. Optimistic concurrency control has the following advantages and disadvantages (a conflict-handling sketch follows the lists below):
Advantages:
- Provides better scalability, as fewer physical database connections are used.
- No write locks are applied, so no additional client/server calls are needed.
Disadvantages:
- An unnecessary amount of time can pass before a transaction is aborted, even though it is clear early on that it will fail: the version verification is not executed until the transaction is committed, but the version difference can arise much earlier.
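A minimal sketch of handling a version conflict follows, again assuming a hypothetical generated EntitiesModel context with an Orders endpoint. OptimisticVerificationException is the exception named above; the Refresh call shows one possible recovery strategy, and the exact refresh signature should be treated as an assumption here.

```csharp
using System.Linq;
using Telerik.OpenAccess;
using Telerik.OpenAccess.Exceptions;

using (EntitiesModel dbContext = new EntitiesModel())
{
    Order order = dbContext.Orders.First(o => o.Id == 1);
    order.Total += 10m;

    try
    {
        // The version check runs here, at commit time.
        dbContext.SaveChanges();
    }
    catch (OptimisticVerificationException)
    {
        // Another transaction changed the row after it was read. One recovery
        // strategy: discard the stale values, reload from the store, and let
        // the user (or a retry loop) reapply the change.
        dbContext.Refresh(RefreshMode.OverwriteChangesFromStore, order);
    }
}
```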
Telerik Data Access allows you to implement either concurrency control model.
Concurrency Problems
If locking is not available and several users access a database concurrently, problems may occur when their transactions use the same data at the same time. Weakening the isolation property of the ACID principle implies that database inconsistencies can occur when more than one transaction works concurrently on the same objects. In the time between objects being read and then written, the same objects can be read from the database, and even manipulated, by other transactions. This leads to concurrency problems.
The following lists some typical concurrency problems:
Uncommitted Dependency (Dirty Read)
Dirty read occurs if one transaction reads data that has been modified by another transaction. This violates transaction isolation if the transaction that modified the data is rolled back. In Telerik Data Access, a write operation is first performed in memory only; the changes are not flushed to the backend before a commit or flush is executed. This by itself does not prevent dirty reads, but it lowers the risk. Whether dirty reads are actually avoided depends on the database backend used and/or its configuration (e.g., the isolation level that is set).
An example that demonstrates how to set the isolation level for each instance of the context is available in the How to: Set Isolation Level article.
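The linked article covers the per-context setting. As a general .NET illustration of the same idea, System.Transactions lets you pick an isolation level for the ambient transaction; the sketch below assumes the (hypothetical) EntitiesModel context enlists in that transaction.

```csharp
using System.Linq;
using System.Transactions;

var options = new TransactionOptions
{
    // ReadCommitted guarantees that uncommitted changes made by other
    // transactions are never visible here, i.e., no dirty reads.
    IsolationLevel = IsolationLevel.ReadCommitted
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (EntitiesModel dbContext = new EntitiesModel())
{
    decimal total = dbContext.Orders.First(o => o.Id == 1).Total;
    scope.Complete();
}
```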
Lost Updates
Lost update occurs when two transactions read the same object and then modify this object independently. The transaction that is committed last overwrites the changes made by the earlier transaction. This problem could be avoided if the second transaction could not make changes until the first transaction had finished.
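The interleaving is easy to reproduce with two contexts, sketched below with the placeholder EntitiesModel model. Under a last-writer-wins policy the second SaveChanges silently discards the first writer's change; with Telerik Data Access's optimistic version check, the later writer gets an OptimisticVerificationException instead.

```csharp
using System.Linq;

using (EntitiesModel first = new EntitiesModel())
using (EntitiesModel second = new EntitiesModel())
{
    Order a = first.Orders.First(o => o.Id == 1);   // both read the same row
    Order b = second.Orders.First(o => o.Id == 1);

    a.Total = 100m;
    first.SaveChanges();    // first writer commits

    b.Total = 200m;
    second.SaveChanges();   // last-writer-wins would silently overwrite 100m;
                            // an optimistic version check throws here instead
}
```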
Inconsistent Analysis (Non-Repeatable Read)
Non-repeatable read occurs when an object is read twice within a transaction and, between the reads, it is modified by another transaction. The second read therefore returns different values than the first, i.e., the read operation is non-repeatable. Non-repeatable read is similar to dirty read in that another transaction is changing the data that a second transaction is reading. However, in non-repeatable read, the data read by the second transaction was committed by the transaction that made the change. Telerik Data Access uses in-memory copies of database objects: once an object is loaded into memory, there is no need to fetch it from the database each time a member is accessed. A new read operation, and therefore a non-repeatable read, can only occur when an application explicitly refreshes an object. Whether the refreshed values actually differ also depends on the backend and/or its configuration (e.g., if it is a "versioning" database that maintains multiple versions of a row for concurrently running transactions).
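The sketch below shows the one place a second physical read happens: an explicit refresh. The EntitiesModel context is again a placeholder, and the exact Refresh/RefreshMode member names should be treated as assumptions.

```csharp
using System.Linq;
using Telerik.OpenAccess;

using (EntitiesModel dbContext = new EntitiesModel())
{
    Order order = dbContext.Orders.First(o => o.Id == 1);

    decimal before = order.Total; // served from the in-memory copy
    decimal again = order.Total;  // still the same copy: no second database read

    // Only an explicit refresh issues a new read; if another transaction
    // committed a change in the meantime, the value can now differ, i.e.,
    // the read becomes non-repeatable by request.
    dbContext.Refresh(RefreshMode.OverwriteChangesFromStore, order);
    decimal after = order.Total;
}
```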
Phantom Reads
Phantom reads are of a totally different nature than the problems previously introduced. They occur when an insert or delete action is performed against a row that belongs to a range of rows being read by a transaction. The transaction's first read of the range of rows returns a row that no longer exists in the second or succeeding read, as a result of a deletion by a different transaction. Similarly, as the result of an insert by a different transaction, the transaction's second or succeeding read shows a row that did not exist in the original read.
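In terms of two range queries, a phantom read can be sketched as follows (placeholder EntitiesModel context): the second count can differ if another transaction inserts or deletes a matching row in between.

```csharp
using System.Linq;

using (EntitiesModel dbContext = new EntitiesModel())
{
    // First read of the range.
    int before = dbContext.Orders.Count(o => o.Total > 50m);

    // ... meanwhile, another transaction inserts an Order with
    // Total = 60m and commits ...

    // Second read of the same range. Below the Serializable isolation
    // level, the new "phantom" row can appear in the result.
    int after = dbContext.Orders.Count(o => o.Total > 50m);
}
```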