Dave Ward 52c0d4ddca Merged V3.4-BUG-FIX to HEAD
30947: ALF-10619: Merged PATCHES/V3.1.2 to V3.4-BUG-FIX
      30884: ALF-10588: Another possible race condition resulting in out of sync transactions - found on SQL Server and JBoss in 3.1.2
         - FTS could process updated and deleted nodes in the same transaction before the tracker got to them, leaving behind the correct transaction ID and deleted nodes but undeleted container docs!
         - We now have to validate that all deletions have been honoured during index tracking
      30890: ALF-10588: Temporarily disable FTS during IndexCheckServiceImplTest
         - Otherwise can get confused by intermediate FTS state of its own nodes!
      30894: ALF-10588: Correction to deletion checking
         - Only search for deleted nodes, not updated ones too!
   30948: ALF-10619: Fixed merge issue
   30982: - ALF-10503 60k Site Performance: Admin Console | Groups: search with a value that matches all 60 groups: maxClauseCount=10000
   - ALF-10511 60k Site Performance: Admin Console | Users | Edit User | Group Search with a value that matches all 60 groups: maxClauseCount=10000
   - ALF-10608 60k Site Performance: Searching for a group to add to a site with a value that matches all 60 groups: maxClauseCount=10000
   - ALF-10515 60k Site Performance: Edit Group Display Name: The first time, nothing appears to happen for 10 seconds after pressing [Save]
   - ALF-10514 60k Site Performance: Admin Console | Groups | Search | Delete Group: no feedback to user for 20 seconds after clicking delete icon
   30985: Increases in node, property and aspect caches.
   30987: Merged DEV/TEMPORARY to V3.4-BUG-FIX
      30984: ALF-9880 : ContentGet web script throws NullPointerException for nodes missing cm:modified property
         The check for null was added for cm:modified property (similar to BaseDownloadContentServlet).
   30995: Fix for ALF-9021
   30996: ALF-10324 Cannot disable Home Folder Creation
      - Bug introduced into V3.1 on 8 March 2010
      - ChainingUserRegistrySynchronizerTest enhanced to check for this
      - Fix to PersonService: Home folder was not being created for 'missing' persons
      - PersonService: Changed autoCreate parameters to more descriptive names (okay long) and updated Javadoc
   30998: ALF-10512 60k Site Performance: Clicking on Sites (left hand side) in the Repository browser causes transactional limit to be reached
     - Changed node, aspect, property and parentAssoc cache sizes (based on Derek's Skype message)
   31006: ALF-10512 60k Site Performance: Clicking on Sites (left hand side) in the Repository browser causes a transactional limit to be reached
     - Having changed cache sizes in previous commit, the nodeOwner and acl transactional caches were then blown with test case for ALF-10512
       Changed to 20k from 10k. Tried 15k but it still had a problem.
   31052: Fix for ALF-10520
   Merged HEAD to V3.4-BUG-FIX
      31051: Performance improvements for Share Repository browser queries.
             DB with ~50,000 nodes under Company Home:
             Before:
             - I'm Editing - 16 secs, Favorites - 17 secs, Tag - 14 secs
             After: 
             - I'm Editing - 1.5 secs, Favorites - 1.2 secs, Tag - 1.25 secs
   31058: ALF-10324 Cannot disable Home Folder Creation
      - ChainingUserRegistrySynchronizerTest now checks personService with both eager and non-eager home folder creation
   31064: ALF-9360: Merged PATCHES/V3.4.4 to V3.4-BUG-FIX
      30244: Merged DEV/DAVEW/IMAP_NEW to PATCHES/V3.4.4
         29635: Rework of IMAP to use lightweight caching and correctly set UIDVALIDITY, NEXTUID and Marked / Unmarked state
         29668: 1. Changed AlfrescoImapFolder.getFullNameInternal to be dynamic for cache support
         29692: 1. Reverts changes in AlfrescoImapServer to allow ImapHostManager to be a session key for folders.
         2. getFlags relies on FileInfo.getProperties()
         29741: 1. Changed AbstractMimeMessage.updateMessageId() to follow RFC2822 (3.6.4. Identification fields)
         2. Changed ImapServiceImpl to handle absent folders and return "NO" reply to a client.
         3. Changed ImapServiceImpl so that behaviours don't fail when Alfresco is bootstrapped for the first time with IMAP enabled.
         4. Cleared AlfrescoImapFolder constructor. 
         5. Fixed SelectCommand's response to adhere to RFC3501 (6.3.1. SELECT Command)
         6. Fixed CommandParser to be able to parse flags that are not surrounded by braces (STORE 2:4 +FLAGS \Deleted)
         30235: Completion of IMAP rework
         - Scalable caching
           - Proper transactional cache for assembled messages
           - No more assumption that EHcache will always hold entire folder set at once (and perhaps it can't)
           - Per session (TCP connection) cache of accessed folders
           - Session cache validation via a 'change token' that is incremented on all significant events
           - Folder status attributes evaluated once and reused until the change token changes
           - Now only changed folders need to be queried on an IMAP sync and the server doesn't have to hold all folders in memory
           - User's view is consistent with their security permissions
         - Simplification / overhaul of ImapServiceImpl including efficient recursive path building and matching
         - AlfrescoImapFolder immutable as it should be
         - Greenmail fixes
            - Fixed quoting of mailbox names
            - Fixed hanging problem in ImapRequestLineReader - regression caused by our 8 bit encoding fix. Avoid using an InputStreamReader to read ISO-8859-1 bytes as it has an internal buffer.
      30275: Fix failing IMAP tests broken by my recent refactor!
      - Fixed greenmail conversion of ISO-8859-1 bytes to chars
      - Transaction read write attributes on service
      - Read only commands on AbstractImapFolder
      - Imap aspect properties must be managed as SYSTEM user
      - Restored persistence of new mail messages
      - Avoid unit test txn rollback woes by making it possible to check for existence of a path with FileFolderService
      30487: ALF-10268: Merged V3.4-BUG-FIX to PATCHES/V3.4.4
         30264: ALF-10187: Merged V3.3 to V3.4-BUG-FIX
            30003: ALF-9898: More defensive exception handling to avoid packet pool leaks and extra logging on packet pool exhaustion
      30540: ALF-10257: Fixed logic error introduced into Greenmail ImapRequestLineReader
      30988: ALF-9361: Merged DEV/DAVEW/IMAP_NEW to PATCHES/V3.4.4 (by Arseny)
         30419: Remote test for generic client request sequence.
         30547: 1. A bug in FetchCommand, particularly with FETCH (BODY.PEEK[1]), producing the error 
            1315912197.789640 1.5 NO FETCH failed. java.lang.String cannot be cast to javax.mail.internet.MimeMultipart
            This happened when the message content was processed as MimeMultipart mp = (MimeMultipart) mimeMessage.getContent();, but the javadoc of mimeMessage.getContent() states that the content can be a String for a non-multipart message. Fixed FetchCommand to follow the mimeMessage.getContent() javadoc.
         2. A bug with RFC822MetadataExtracter 
            When mimeMessage.getHeader("received") is used on a message with the following header 
            Received: with ECARTIS (v1.0.0; list dovecot); Tue, 06 Aug 2002 13:01:17 +0300 (EEST) 
            the date is not extracted, because lastReceived.indexOf(';') returns the position of the ';' inside ECARTIS (v1.0.0; list dovecot), just after v1.0.0. 
            It should use lastReceived.lastIndexOf(';') instead, to get the position after ECARTIS (v1.0.0; list dovecot). 
      31025: ALF-9361: IMAP Performance
      - Introduced folder status MRU cache
      - Keyed by user ID and change token so no need to cluster
      - Now means we should get reuse across IMAP sessions
      - Also fixed isMarked() implementation to only return true if there are recent or unseen mails
      31038: ALF-9361: Prevent the starting of unnecessary transactions in AlfrescoImapFolder interface
      - getFolderStatus regulates its own transaction
      - Dropped all those *Internal methods from the abstract class
      - getUnqualifiedMailboxPattern moved to AlfrescoImapHostManager
      - Fixes to session folder cache validation / reuse
      31039: ALF-9361: Repository tuning for IMAP performance
      - Backed out ALF-5575 60 second timeout on node caches - Should be covered by ALF-8607 fix
      - Also made TransactionalCache.NewCacheBucket save new values to the shared cache for 'mutable' caches. Previously it was only possible to load into the node caches in a read-only transaction!
      - Also added fix to make AbstractNodeDAOImpl bulk load empty node aspect sets
      - Result is a drastic speedup of full sync times as most items can be served from the cache
      31042: ALF-9361: Fix ImapServiceImplTest
      31048: ALF-9361: Make ConcurrentNodeServiceTest work again, after relaxation of 'mutable' transactional caches
      - aspect and property caches validated by node transaction ID, as per parent assocs in ALF-8607
      31050: ALF-9361: Caching correction
         Always use the cached mailbox reference if it is equivalent (because the session remembers the last selected mailbox)
      31060: ALF-9361: Fix CacheTest, following back out of ALF-5575 behaviour
      31061: ALF-9361: More caching fixes
      31062: ALF-9361: Undo accidental changes to ConcurrentNodeServiceTest
      31063: ALF-9361: Build fix: replaced assertSame with assertEquals


git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@31079 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
2011-10-10 12:07:32 +00:00


/*
* Copyright (C) 2005-2010 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.repo.cache;
import java.io.Serializable;
import java.util.Collection;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import net.sf.ehcache.CacheException;
import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport.TxnReadState;
import org.alfresco.repo.transaction.TransactionListener;
import org.alfresco.util.EqualsHelper;
import org.alfresco.util.PropertyCheck;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.InitializingBean;
/**
* A 2-level cache that maintains both a transaction-local cache and
* wraps a non-transactional (shared) cache.
* <p>
* It uses a size-limited LRU map for its per-transaction update cache and a bounded
* set of removed keys, so the per-transaction footprint is automatically limited.
* <p>
* Instances of this class <b>do not require a transaction</b>. They will work
* directly with the shared cache when no transaction is present. There is
* virtually no overhead when running out-of-transaction.
* <p>
* The first phase of the commit transfers any removals (or a full clear) to the shared
* cache. The second phase occurs post-commit: new and updated values are written through
* to the shared cache, except where the shared cache may have changed in the interim,
* in which case the potentially stale entries are skipped or removed rather than
* overwritten.
* <p>
* When the cache is {@link #clear() cleared}, a flag is set on the transaction.
* The shared cache, instead of being cleared itself, is just ignored for the remainder
* of the transaction. At the end of the transaction, if the flag is set, the
* shared cache is cleared <i>before</i> updates are added back to it.
* <p>
* Because there is a limited amount of space available to the in-transaction caches,
* when either of these becomes full, the cleared flag is set. This ensures that
* the shared cache will not have stale data in the event of the transaction-local
* caches dropping items. It is therefore important to size the transactional caches
* correctly.
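* <p>
* A minimal wiring sketch (illustrative names and values only; in practice the cache is
* typically configured as a Spring bean, which is why setter injection and
* <code>afterPropertiesSet</code> are used):
* <pre>
*    TransactionalCache&lt;Serializable, Object&gt; cache = new TransactionalCache&lt;Serializable, Object&gt;();
*    cache.setName("org.alfresco.example.txnCache");   // hypothetical cache name
*    cache.setSharedCache(sharedCache);                 // any existing SimpleCache implementation
*    cache.setMaxCacheSize(10000);
*    cache.setMutable(true);
*    cache.afterPropertiesSet();                        // validates the mandatory properties
* </pre>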
*
* @author Derek Hulley
*/
public class TransactionalCache<K extends Serializable, V extends Object>
implements SimpleCache<K, V>, TransactionListener, InitializingBean
{
private static final String RESOURCE_KEY_TXN_DATA = "TransactionalCache.TxnData";
private Log logger;
private boolean isDebugEnabled;
/** a name used to uniquely identify the transactional caches */
private String name;
/** enable/disable write through to the shared cache */
private boolean disableSharedCache;
/** the shared cache that will get updated after commits */
private SimpleCache<Serializable, Object> sharedCache;
/** can the cached values be modified */
private boolean isMutable;
/** the maximum number of elements to be contained in the cache */
private int maxCacheSize = 500;
/** a unique string identifying this instance when binding resources */
private String resourceKeyTxnData;
/**
* Public constructor.
*/
public TransactionalCache()
{
logger = LogFactory.getLog(TransactionalCache.class);
isDebugEnabled = logger.isDebugEnabled();
disableSharedCache = false;
isMutable = true;
}
/**
* @see #setName(String)
*/
public String toString()
{
return name;
}
public boolean equals(Object obj)
{
if (obj == this)
{
return true;
}
if (obj == null)
{
return false;
}
if (!(obj instanceof TransactionalCache<?, ?>))
{
return false;
}
@SuppressWarnings("rawtypes")
TransactionalCache that = (TransactionalCache) obj;
return EqualsHelper.nullSafeEquals(this.name, that.name);
}
public int hashCode()
{
return name.hashCode();
}
/**
* Set the shared cache to use during transaction synchronization or when no transaction
* is present.
*
* @param sharedCache underlying cache shared by transactions
*/
public void setSharedCache(SimpleCache<Serializable, Object> sharedCache)
{
this.sharedCache = sharedCache;
}
/**
* Set whether values must be written through to the shared cache or not
*
* @param disableSharedCache <tt>true</tt> to prevent values from being written to
* the shared cache
*/
public void setDisableSharedCache(boolean disableSharedCache)
{
this.disableSharedCache = disableSharedCache;
}
/**
* @param isMutable <tt>true</tt> if the data stored in the cache is modifiable
*/
public void setMutable(boolean isMutable)
{
this.isMutable = isMutable;
}
/**
* Set the maximum number of elements to store in the update and remove caches.
* The maximum number of elements stored in the transaction will be twice the
* value given.
* <p>
* When either the update cache or the removal cache reaches this size, the clear
* flag is set and the shared cache is invalidated at the end of the transaction,
* so that dropped entries cannot leave stale data behind.
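* For example (illustrative value; see also the wiring sketch in the class Javadoc):
* <pre>
*    // a limit of 10,000 allows up to 10,000 buffered updates plus 10,000 buffered
*    // removals (20,000 entries in total) per transaction before the clear flag is set
*    cache.setMaxCacheSize(10000);
* </pre>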
*
* @param maxCacheSize
*/
public void setMaxCacheSize(int maxCacheSize)
{
this.maxCacheSize = maxCacheSize;
}
/**
* Set the name that identifies this cache from other instances. This is optional.
*
* @param name
*/
public void setName(String name)
{
this.name = name;
}
/**
* Ensures that all properties have been set
*/
public void afterPropertiesSet() throws Exception
{
PropertyCheck.mandatory(this, "name", name);
PropertyCheck.mandatory(this, "sharedCache", sharedCache);
// generate the resource binding key
resourceKeyTxnData = RESOURCE_KEY_TXN_DATA + "." + name;
// Refine the log category
logger = LogFactory.getLog(TransactionalCache.class.getName() + "." + name);
isDebugEnabled = logger.isDebugEnabled();
// Assign a 'null' cache if write-through is disabled
if (disableSharedCache)
{
sharedCache = new NullCache<Serializable, Object>();
}
}
/**
* To be used in a transaction only.
*/
private TransactionData getTransactionData()
{
@SuppressWarnings("unchecked")
TransactionData data = (TransactionData) AlfrescoTransactionSupport.getResource(resourceKeyTxnData);
if (data == null)
{
data = new TransactionData();
// create and initialize caches
data.updatedItemsCache = new LRULinkedHashMap<K, CacheBucket<V>>(23);
data.removedItemsCache = new HashSet<K>(13);
data.isReadOnly = AlfrescoTransactionSupport.getTransactionReadState() == TxnReadState.TXN_READ_ONLY;
// ensure that we get the transaction callbacks as we have bound the unique
// transactional caches to a common manager
AlfrescoTransactionSupport.bindListener(this);
AlfrescoTransactionSupport.bindResource(resourceKeyTxnData, data);
}
return data;
}
/**
* Checks the transactional removed and updated caches before checking the shared cache.
*/
public boolean contains(K key)
{
Object value = get(key);
if (value == null)
{
return false;
}
else
{
return true;
}
}
/**
* The keys returned are a union of the set of keys in the current transaction and
* those in the backing cache.
*/
@SuppressWarnings("unchecked")
public Collection<K> getKeys()
{
Collection<K> keys = null;
// in-txn layering
if (AlfrescoTransactionSupport.getTransactionId() != null)
{
keys = new HashSet<K>(23);
TransactionData txnData = getTransactionData();
if (!txnData.isClearOn)
{
// the backing cache is not due for a clear
Collection<K> backingKeys = (Collection<K>) sharedCache.getKeys();
keys.addAll(backingKeys);
}
// add keys
keys.addAll(txnData.updatedItemsCache.keySet());
// remove keys
keys.removeAll(txnData.removedItemsCache);
}
else
{
// no transaction, so just use the backing cache
keys = (Collection<K>) sharedCache.getKeys();
}
// done
return keys;
}
/**
* Fetches a value from the shared cache.
*
* @param key the key
* @return Returns the value or <tt>null</tt>
*/
@SuppressWarnings("unchecked")
private V getSharedCacheValue(K key)
{
return (V) sharedCache.get(key);
}
/**
* Checks the per-transaction caches for the object before going to the shared cache.
* If the thread is not in a transaction, then the shared cache is accessed directly.
*/
public V get(K key)
{
boolean ignoreSharedCache = false;
// are we in a transaction?
if (AlfrescoTransactionSupport.getTransactionId() != null)
{
TransactionData txnData = getTransactionData();
if (txnData.isClosed)
{
// This check could have been done in the first if block, but that would have added another call to the
// txn resources.
}
else // The txn is still active
{
try
{
if (!txnData.isClearOn) // deletions cache only useful before a clear
{
// check to see if the key is present in the transaction's removed items
if (txnData.removedItemsCache.contains(key))
{
// it has been removed in this transaction
if (isDebugEnabled)
{
logger.debug("get returning null - item has been removed from transactional cache: \n" +
" cache: " + this + "\n" +
" key: " + key);
}
return null;
}
}
// check for the item in the transaction's new/updated items
CacheBucket<V> bucket = (CacheBucket<V>) txnData.updatedItemsCache.get(key);
if (bucket != null)
{
V value = bucket.getValue();
// element was found in transaction-specific updates/additions
if (isDebugEnabled)
{
logger.debug("Found item in transactional cache: \n" +
" cache: " + this + "\n" +
" key: " + key + "\n" +
" value: " + value);
}
return value;
}
}
catch (CacheException e)
{
throw new AlfrescoRuntimeException("Cache failure", e);
}
// check if the cleared flag has been set - cleared flag means ignore shared as unreliable
ignoreSharedCache = txnData.isClearOn;
}
}
// no value found - must we ignore the shared cache?
if (!ignoreSharedCache)
{
V value = getSharedCacheValue(key);
// go to the shared cache
if (isDebugEnabled)
{
logger.debug("No value found in transaction - fetching instance from shared cache: \n" +
" cache: " + this + "\n" +
" key: " + key + "\n" +
" value: " + value);
}
return value;
}
else // ignore shared cache
{
if (isDebugEnabled)
{
logger.debug("No value found in transaction and ignoring shared cache: \n" +
" cache: " + this + "\n" +
" key: " + key);
}
return null;
}
}
/**
* Goes direct to the shared cache in the absence of a transaction.
* <p>
* Where a transaction is present, a cache of updated items is lazily added to the
* thread and the <tt>Object</tt> put onto that.
*/
@SuppressWarnings("unchecked")
public void put(K key, V value)
{
// are we in a transaction?
if (AlfrescoTransactionSupport.getTransactionId() == null) // not in transaction
{
// no transaction
sharedCache.put(key, value);
// done
if (isDebugEnabled)
{
logger.debug("No transaction - adding item direct to shared cache: \n" +
" cache: " + this + "\n" +
" key: " + key + "\n" +
" value: " + value);
}
}
else // transaction present
{
TransactionData txnData = getTransactionData();
// Ensure that the cache isn't being modified
if (txnData.isClosed)
{
if (isDebugEnabled)
{
logger.debug(
"In post-commit add: \n" +
" cache: " + this + "\n" +
" key: " + key + "\n" +
" value: " + value);
}
}
else
{
// we have an active transaction - add the item into the updated cache for this transaction
// are we in an overflow condition?
if (txnData.updatedItemsCache.hasHitSize())
{
// overflow about to occur or has occurred - we can only guarantee non-stale
// data by clearing the shared cache after the transaction. Also, the
// shared cache needs to be ignored for the rest of the transaction.
txnData.isClearOn = true;
if (!txnData.haveIssuedFullWarning && logger.isWarnEnabled())
{
logger.warn("Transactional update cache '" + name + "' is full (" + maxCacheSize + ").");
txnData.haveIssuedFullWarning = true;
}
}
Object existingValueObj = sharedCache.get(key);
CacheBucket<V> bucket = null;
if (existingValueObj == null)
{
// ALF-5134: Performance of Alfresco cluster less than performance of single node
// The 'null' marker that used to be inserted also triggered an update in the afterCommit
// phase; the update triggered cache invalidation in the cluster. Now, the null cannot
// be verified to be the same null - there is no null equivalence
//
// The value didn't exist before
bucket = new NewCacheBucket<V>(value);
}
else
{
// Record the existing value as is
bucket = new UpdateCacheBucket<V>((V)existingValueObj, value);
}
txnData.updatedItemsCache.put(key, bucket);
// remove the item from the removed cache, if present
txnData.removedItemsCache.remove(key);
// done
if (isDebugEnabled)
{
logger.debug("In transaction - adding item direct to transactional update cache: \n" +
" cache: " + this + "\n" +
" key: " + key + "\n" +
" value: " + value);
}
}
}
}
/**
* Goes direct to the shared cache in the absence of a transaction.
* <p>
* Where a transaction is present, a cache of removed items is lazily added to the
* thread and the <tt>Object</tt> put onto that.
*/
public void remove(K key)
{
// are we in a transaction?
if (AlfrescoTransactionSupport.getTransactionId() == null) // not in transaction
{
// no transaction
sharedCache.remove(key);
// done
if (isDebugEnabled)
{
logger.debug("No transaction - removing item from shared cache: \n" +
" cache: " + this + "\n" +
" key: " + key);
}
}
else // transaction present
{
TransactionData txnData = getTransactionData();
// Ensure that the cache isn't being modified
if (txnData.isClosed)
{
if (isDebugEnabled)
{
logger.debug(
"In post-commit remove: \n" +
" cache: " + this + "\n" +
" key: " + key);
}
}
else
{
// is the shared cache going to be cleared?
if (txnData.isClearOn)
{
// don't store removals if we're just going to clear it all out later
}
else
{
// are we in an overflow condition?
if (txnData.removedItemsCache.size() >= maxCacheSize)
{
// overflow about to occur or has occurred - we can only guarantee non-stale
// data by clearing the shared cache after the transaction. Also, the
// shared cache needs to be ignored for the rest of the transaction.
txnData.isClearOn = true;
if (!txnData.haveIssuedFullWarning && logger.isWarnEnabled())
{
logger.warn("Transactional removal cache '" + name + "' is full (" + maxCacheSize + ").");
txnData.haveIssuedFullWarning = true;
}
}
else
{
// Create a bucket to remove the value from the shared cache
txnData.removedItemsCache.add(key);
}
}
// remove the item from the updated cache, if present
txnData.updatedItemsCache.remove(key);
// done
if (isDebugEnabled)
{
logger.debug("In transaction - adding item direct to transactional removed cache: \n" +
" cache: " + this + "\n" +
" key: " + key);
}
}
}
}
/**
* Clears out all the caches.
*/
public void clear()
{
// clear local caches
if (AlfrescoTransactionSupport.getTransactionId() != null)
{
if (isDebugEnabled)
{
logger.debug("In transaction clearing cache: \n" +
" cache: " + this + "\n" +
" txn: " + AlfrescoTransactionSupport.getTransactionId());
}
TransactionData txnData = getTransactionData();
// Ensure that the cache isn't being modified
if (txnData.isClosed)
{
if (isDebugEnabled)
{
logger.debug(
"In post-commit clear: \n" +
" cache: " + this);
}
}
else
{
// the shared cache must be cleared at the end of the transaction
// and also serves to ensure that the shared cache will be ignored
// for the remainder of the transaction
txnData.isClearOn = true;
txnData.updatedItemsCache.clear();
txnData.removedItemsCache.clear();
}
}
else // no transaction
{
if (isDebugEnabled)
{
logger.debug("No transaction - clearing shared cache");
}
// clear shared cache
sharedCache.clear();
}
}
/**
* NO-OP
*/
public void flush()
{
}
/**
* NO-OP
*/
public void beforeCompletion()
{
}
/**
* Merge the transactional caches into the shared cache
*/
public void beforeCommit(boolean readOnly)
{
if (isDebugEnabled)
{
logger.debug("Processing before-commit");
}
TransactionData txnData = getTransactionData();
try
{
if (txnData.isClearOn)
{
// clear shared cache
sharedCache.clear();
if (isDebugEnabled)
{
logger.debug("Clear notification recieved in commit - clearing shared cache");
}
}
else
{
// transfer any removed items
for (Serializable key : txnData.removedItemsCache)
{
sharedCache.remove(key);
}
if (isDebugEnabled)
{
logger.debug("Removed " + txnData.removedItemsCache.size() + " values from shared cache in commit");
}
}
// transfer updates
Set<K> keys = (Set<K>) txnData.updatedItemsCache.keySet();
for (Map.Entry<K, CacheBucket<V>> entry : (Set<Map.Entry<K, CacheBucket<V>>>) txnData.updatedItemsCache.entrySet())
{
K key = entry.getKey();
CacheBucket<V> bucket = entry.getValue();
bucket.doPreCommit(sharedCache, key, this.isMutable, txnData.isReadOnly);
}
if (isDebugEnabled)
{
logger.debug("Pre-commit called for " + keys.size() + " values.");
}
}
catch (CacheException e)
{
throw new AlfrescoRuntimeException("Failed to transfer updates to shared cache", e);
}
finally
{
// Block any further updates
txnData.isClosed = true;
}
}
/**
* Merge the transactional caches into the shared cache
*/
public void afterCommit()
{
if (isDebugEnabled)
{
logger.debug("Processing after-commit");
}
TransactionData txnData = getTransactionData();
try
{
if (txnData.isClearOn)
{
// clear shared cache
sharedCache.clear();
if (isDebugEnabled)
{
logger.debug("Clear notification recieved in commit - clearing shared cache");
}
}
else
{
// transfer any removed items
for (Serializable key : txnData.removedItemsCache)
{
sharedCache.remove(key);
}
if (isDebugEnabled)
{
logger.debug("Removed " + txnData.removedItemsCache.size() + " values from shared cache in commit");
}
}
// transfer updates
Set<K> keys = (Set<K>) txnData.updatedItemsCache.keySet();
for (Map.Entry<K, CacheBucket<V>> entry : (Set<Map.Entry<K, CacheBucket<V>>>) txnData.updatedItemsCache.entrySet())
{
K key = entry.getKey();
CacheBucket<V> bucket = entry.getValue();
bucket.doPostCommit(sharedCache, key, this.isMutable, txnData.isReadOnly);
}
if (isDebugEnabled)
{
logger.debug("Post-commit called for " + keys.size() + " values.");
}
}
catch (CacheException e)
{
throw new AlfrescoRuntimeException("Failed to transfer updates to shared cache", e);
}
finally
{
removeCaches(txnData);
}
}
/**
* Transfers cache removals or clears. This allows explicit cache cleanup to be propagated
* to the shared cache even in the event of rollback - useful if the cause of a problem is
* the shared cache value.
*/
public void afterRollback()
{
TransactionData txnData = getTransactionData();
try
{
if (txnData.isClearOn)
{
// clear shared cache
sharedCache.clear();
if (isDebugEnabled)
{
logger.debug("Clear notification recieved in rollback - clearing shared cache");
}
}
else
{
// transfer any removed items
for (Serializable key : txnData.removedItemsCache)
{
sharedCache.remove(key);
}
if (isDebugEnabled)
{
logger.debug("Removed " + txnData.removedItemsCache.size() + " values from shared cache in rollback");
}
}
}
catch (CacheException e)
{
throw new AlfrescoRuntimeException("Failed to transfer updates to shared cache", e);
}
finally
{
removeCaches(txnData);
}
}
/**
* Marks the transactional data as closed so that no further cache modifications are
* accepted for the transaction.
*
* @param txnData the data with references to the transactional caches
*/
private void removeCaches(TransactionData txnData)
{
txnData.isClosed = true;
}
/**
* Interface for the transactional cache buckets. These hold the actual values along
* with some state and behaviour around writing from the in-transaction caches to the
* shared cache.
*
* @author Derek Hulley
*/
private interface CacheBucket<BV extends Object> extends Serializable
{
/**
* @return Returns the bucket's value
*/
BV getValue();
/**
* Flush the current bucket to the shared cache as far as possible.
*
* @param sharedCache the cache to flush to
* @param key the key that the bucket was stored against
*/
public void doPreCommit(
SimpleCache<Serializable, Object> sharedCache,
Serializable key,
boolean mutable, boolean readOnly);
/**
* Flush the current bucket to the shared cache as far as possible.
*
* @param sharedCache the cache to flush to
* @param key the key that the bucket was stored against
*/
public void doPostCommit(
SimpleCache<Serializable, Object> sharedCache,
Serializable key,
boolean mutable, boolean readOnly);
}
/**
* A bucket for a value that had no pre-existing entry in the shared cache when it was
* written in the transaction.
*
* @author Derek Hulley
*/
private static class NewCacheBucket<BV> implements CacheBucket<BV>
{
private static final long serialVersionUID = -8536386687213957425L;
private final BV value;
public NewCacheBucket(BV value)
{
this.value = value;
}
public BV getValue()
{
return value;
}
public void doPreCommit(
SimpleCache<Serializable, Object> sharedCache,
Serializable key,
boolean mutable, boolean readOnly)
{
}
public void doPostCommit(
SimpleCache<Serializable, Object> sharedCache,
Serializable key,
boolean mutable, boolean readOnly)
{
Object sharedObj = sharedCache.get(key);
if (!mutable)
{
// Value can't change
if (sharedObj == null)
{
// Still nothing in the cache
sharedCache.put(key, value);
}
}
else if (readOnly)
{
// Only add if nothing else has been added in the interim
if (sharedObj == null)
{
sharedCache.put(key, value);
}
}
else
{
// Mutable, read-write
if (sharedObj == null)
{
sharedCache.put(key, value);
}
else
{
// Another value appeared in the interim - remove it rather than risk a stale entry
sharedCache.remove(key);
}
}
}
}
/**
* Data holder that keeps track of the original cached value in order to detect stale
* shared cache entries. This bucket assumes the presence of a pre-existing entry in
* the shared cache.
*/
private static class UpdateCacheBucket<BV> implements CacheBucket<BV>
{
private static final long serialVersionUID = 7885689778259779578L;
private final BV value;
private final BV originalValue;
public UpdateCacheBucket(BV originalValue, BV value)
{
this.originalValue = originalValue;
this.value = value;
}
public BV getValue()
{
return value;
}
public void doPreCommit(
SimpleCache<Serializable, Object> sharedCache,
Serializable key,
boolean mutable, boolean readOnly)
{
}
public void doPostCommit(
SimpleCache<Serializable, Object> sharedCache,
Serializable key,
boolean mutable, boolean readOnly)
{
Object sharedObj = sharedCache.get(key);
if (!mutable)
{
// Not normally required as immutable values don't change,
// but we can write it straight through as it should represent
// an unchanging value
sharedCache.put(key, value);
}
else if (readOnly)
{
// Only add if value has not changed in the interim
if (sharedObj == originalValue)
{
sharedCache.put(key, value);
}
}
else
{
// Mutable, read-write
if (sharedObj == originalValue)
{
sharedCache.put(key, value);
}
else
{
// The value changed
sharedCache.remove(key);
}
}
}
}
/** Data holder to bind data to the transaction */
private class TransactionData
{
private LRULinkedHashMap<K, CacheBucket<V>> updatedItemsCache;
private Set<K> removedItemsCache;
private boolean haveIssuedFullWarning;
private boolean isClearOn;
private boolean isClosed;
private boolean isReadOnly;
}
/**
* Simple LRU based on {@link LinkedHashMap}
*
* @author Derek Hulley
* @since 3.4
*/
private class LRULinkedHashMap<K1, V1> extends LinkedHashMap<K1, V1>
{
private static final long serialVersionUID = -4874684348174271106L;
private LRULinkedHashMap(int initialSize)
{
super(initialSize);
}
private boolean hasHitSize()
{
return size() >= maxCacheSize;
}
/**
* Remove the eldest entry if the size has reached the maximum cache size
*/
@Override
protected boolean removeEldestEntry(Map.Entry<K1, V1> eldest)
{
return (size() > maxCacheSize);
}
}
}
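/*
 * Illustrative behaviour sketch (not part of the class; hypothetical keys and values,
 * assuming a cache wired as in the class Javadoc above):
 *
 *    cache.put("key1", "value1");   // outside a transaction: written straight to the shared cache
 *    cache.get("key1");             // outside a transaction: read straight from the shared cache
 *
 * Inside a transaction (when AlfrescoTransactionSupport.getTransactionId() is non-null) the same
 * calls are buffered in the per-transaction LRU map and removal set, and are only merged into the
 * shared cache by beforeCommit/afterCommit as implemented above.
 */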