Telerik OpenAccess Classic

2nd Level Cache


This documentation article is a legacy resource describing the functionality of the deprecated OpenAccess Classic only. The contemporary documentation of Telerik OpenAccess ORM is available here.

All ObjectScopes from the same database can share a secondary cache. This cache is generally called the "2nd level cache" or the "L2 cache". The L2 cache holds copies of the database content of application objects. That means that the values for the various fields of the user object (like a Person instance) are duplicated in the L2 cache. The cache is populated during read access and gives fast retrieval of commonly used objects. The cache resides in the application process and the cache content is indexed by the object id. Additionally, the cache can contain complete query results. The query string and the parameters are used to index the results.
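The structure described above can be sketched as follows. This is an illustrative model only, not OpenAccess code: object data is copied into the cache keyed by object id, and query results are keyed by the query string and its parameters.

```python
# Illustrative sketch of an L2 cache (not the OpenAccess implementation):
# field values are stored as copies keyed by object id, and query results
# are stored as lists of object ids keyed by (query string, parameters).
class SecondLevelCache:
    def __init__(self):
        self.objects = {}   # oid -> copy of the object's field values
        self.queries = {}   # (query, params) -> list of oids

    def put_object(self, oid, field_values):
        # Store a copy of the database content, not the live user object.
        self.objects[oid] = dict(field_values)

    def get_object(self, oid):
        return self.objects.get(oid)

    def put_query(self, query, params, oids):
        self.queries[(query, tuple(params))] = list(oids)

    def get_query(self, query, params):
        return self.queries.get((query, tuple(params)))


cache = SecondLevelCache()
cache.put_object(1, {"name": "Ann", "age": 30})       # populated during a read
cache.put_query("SELECT * FROM PersonExtent", [], [1])
```

Note that the cache never holds the user objects themselves, only copies of their database content; a query hit yields object ids that are then resolved against the object cache.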

After a Transaction.Flush() or any other operation that pushes uncommitted data to the server is performed within an ObjectScope, the operations of that ObjectScope are not cached until the transaction has ended.

The 2nd level cache is designed for multithreaded applications in which every thread has its own ObjectScope. To avoid too many calls to the relational server, OpenAccess ORM provides a caching mechanism, the 2nd level cache, which is shared by all ObjectScopes in a single process. This is optimal for a web application, which typically uses thread pooling and opens a new ObjectScope for every incoming call. Database access is then necessary only when the requested data is not currently available in the cache, increasing the efficiency and performance of your application.
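The interaction of the two cache levels can be sketched in self-contained Python (an illustration of the architecture, not OpenAccess code): each thread's scope checks its own first-level cache, then the shared process-wide L2 cache, and only then the database.

```python
import threading

# Illustrative sketch: every scope has a private first-level cache,
# while all scopes in the process share one 2nd level cache.
shared_l2 = {}              # process-wide L2 cache: oid -> field values
l2_lock = threading.Lock()  # the shared cache must be thread-safe

class Scope:
    def __init__(self):
        self.first_level = {}  # per-scope cache, never shared

    def read(self, oid, load_from_db):
        if oid in self.first_level:
            return self.first_level[oid]
        with l2_lock:
            if oid in shared_l2:            # L2 hit: no database round trip
                value = shared_l2[oid]
            else:
                value = load_from_db(oid)   # L2 miss: query the database
                shared_l2[oid] = value      # populate the cache on read
        self.first_level[oid] = value
        return value
```

With this shape, a second scope reading the same object id is served entirely from the shared cache, which is the saving the paragraph above describes.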

The L2 cache can be used in single-application scenarios only. When more than one application uses the L2 cache for the same database, the L2 Cache Cluster must be used.

The 2nd level cache is used to avoid database access for non-transactional reads and reads in optimistic transactions. It is bypassed for datastore transactions to ensure that database locks are obtained. Only data read in an optimistic transaction or outside of a transaction is cached. Data for individual instances and the results of queries are cached.

Caching can be enabled for a database connection within the <backendconfigurations> section of the configuration file (refer to Caching Settings for more information). The size is defined by the number of objects the cache can hold.
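A configuration fragment might look like the following sketch. The element and attribute names used here are placeholders for illustration only; the actual configuration keys are listed in the Caching Settings article.

```xml
<!-- Hypothetical sketch of enabling the L2 cache for one backend
     configuration; the real element and attribute names are documented
     under Caching Settings. -->
<backendconfigurations>
  <backendconfiguration id="mssqlConfiguration" backend="mssql">
    <!-- enable the 2nd level cache and bound it by a number of objects -->
    <secondLevelCache enabled="true" numberOfObjects="10000" />
  </backendconfiguration>
</backendconfigurations>
```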

By default the L2 cache is not created; it needs to be explicitly enabled.

Caching can be controlled more finely on a per-class basis, with the default for each class set on the datastore (refer to cache-strategy for more information). The options for each class are no, yes and all. Classes set to "no" are not cached; this extends to not caching query results that involve those classes in any way. Classes set to "all" have all of their instances read the first time any instance is required. This is useful when you know that most of the instances will be required, as it avoids individual queries to fetch them, and it is an alternative to prefetching instances using outer joins (which OpenAccess ORM also supports). The size of the query cache and the size of the object cache can be controlled independently.

OpenAccess ORM will automatically evict modified instances and query results as needed. If other applications are modifying the database, you can either disable caching for classes mapped to the tables being modified, or manually evict instances when you know the data has changed. When the cache is full (has reached the maximum configured number of instances), the least recently used instance(s) are evicted (refer to Max Cache Objects for more information on how to control the number of objects in the 2nd level cache).
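The least-recently-used eviction policy can be sketched in a few lines of Python (illustrative only; in OpenAccess the limit is configured via Max Cache Objects):

```python
from collections import OrderedDict

# Sketch of LRU eviction: when the cache exceeds its configured capacity,
# the entry that was used least recently is dropped first.
class LruObjectCache:
    def __init__(self, max_objects):
        self.max_objects = max_objects
        self._data = OrderedDict()  # oid -> field values, oldest access first

    def get(self, oid):
        if oid in self._data:
            self._data.move_to_end(oid)  # mark as most recently used
            return self._data[oid]
        return None

    def put(self, oid, values):
        self._data[oid] = values
        self._data.move_to_end(oid)
        if len(self._data) > self.max_objects:
            self._data.popitem(last=False)  # evict the least recently used
```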

OQL query results are cached by storing the OIDs of all the instances returned by the query. If the same query is executed with the same parameters and options, the result is served from the cache instead of running a database query. Cached query results are automatically evicted when any instance of any class involved in the query (filter, ordering etc.) is modified. All of this caching is transparent to the application using OpenAccess.
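The invalidation rule described above can be sketched as follows (an illustration, not the OpenAccess implementation): each cached result records the classes its query involves, and modifying any instance of such a class evicts every result that depends on it.

```python
# Sketch of query-result caching with automatic eviction: results are keyed
# by (query, params) and dropped when any involved class is modified.
class QueryResultCache:
    def __init__(self):
        self._results = {}   # (query, params) -> list of OIDs
        self._by_class = {}  # class name -> set of cache keys depending on it

    def put(self, query, params, classes, oids):
        key = (query, tuple(params))
        self._results[key] = list(oids)
        # Every class in the filter/ordering can invalidate this entry.
        for cls in classes:
            self._by_class.setdefault(cls, set()).add(key)

    def get(self, query, params):
        return self._results.get((query, tuple(params)))

    def on_modified(self, cls):
        # Evict every cached result whose query involved the modified class.
        for key in self._by_class.pop(cls, set()):
            self._results.pop(key, None)
```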

The Evict(object) method on the ObjectScope and ObjectContext can currently only be used to manually evict in-memory instances from the IObjectContext/IObjectScope cache, not from the 2nd level cache. An API for manual evictions from the 2nd level cache is planned for a future release.

Manual evictions are useful if your data gets changed infrequently by an external system and you know when this happens. You can also use this after doing bulk SQL updates.