Merged DEV/ALAN/SITE_PERF to HEAD

30342: Dev branch for Site performance issues (including rework of AuthorityService.getAuthorities() to use a 'lazy' set and DM indexing rework)
   ALF-9899 Huge Share site migration: performance issues related to adding groups to sites and user access to sites.
   ALF-9208 Performance issue: during load tests, /share/page/user/user-sites shows up as the most expensive request.
   ALF-9692 Performance: General performance of Alfresco degrades when there are 1000s of sites present
   - ancestor-preloading
   - hasAuthority
   - huge site test
   30370: - Saved changes relating to adding childAuthorityCache to AuthorityDAOImpl
   - Increased the aspectsTransactionalCache size, as its limit was being exceeded
   30387: Experimental solution to 'cascading reindex' performance problem
   - Now only Lucene container documents for a single subtree are reprocessed on addition / removal of a secondary child association
   - No need to delete and re-evaluate ALL the paths to all the nodes in the subtree - just the paths within the subtree
   - Lucene deltas now store the IDs of ANCESTORs to mask out as well as documents to reindex
   - Merge handles deletion of these efficiently
   - Node service cycle checks changed from getPaths to recursive cycleCheck method
   - Adding a group to 60,000 sites might not require all paths to all sites to be re-evaluated on every change!
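    A minimal sketch of the delta bookkeeping described above (class and method names are illustrative only, not the actual ADMLuceneIndexerImpl API): the delta records both the documents to regenerate for the affected subtree and the ancestor container IDs to mask out at merge time, instead of deleting and rebuilding every path.

       import java.util.HashSet;
       import java.util.Set;

       /** Illustrative only: what a reindex delta could record for a transaction. */
       class ReindexDelta
       {
           /** Node IDs whose documents are regenerated in the delta index. */
           private final Set<String> reindexIds = new HashSet<String>();
           /** Ancestor container IDs whose stale documents are masked out when the delta is merged. */
           private final Set<String> maskedAncestorIds = new HashSet<String>();

           void reindexSubtree(String subtreeRootId, Set<String> ancestorsOfSubtreeRoot)
           {
               // Only the changed subtree is reprocessed ...
               reindexIds.add(subtreeRootId);
               // ... while the paths above it are masked at merge time rather than re-evaluated.
               maskedAncestorIds.addAll(ancestorsOfSubtreeRoot);
           }

           Set<String> getReindexIds()        { return reindexIds; }
           Set<String> getMaskedAncestorIds() { return maskedAncestorIds; }
       }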
   30389: Missed files from last checkin
   30390: Optimizations / fixes to Alan's test!
   30393: Bug fix - wasn't adding new documents into the index!
   30397: Fixed a problem with bulk loading trying to bulk load zero parent associations
   Also tweaked reindex calls
   30399: Correction - don't cascade below containers during path cascading
   30400: Another optimization - no need to trigger node bulk loading during path cascading - pass false for the preload flag
   30404: Further optimizations
   - On creation of a secondary child association, make a decision on whether it is cheaper to cascade reindex the parent or the child, based on the number of parent associations to the child
      - Assumes that if there are more than 5 parent associations, it's cheaper to cascade reindex the parent (see the sketch after this list)
     - Add a new authority to a zone (containing 60,000 authorities) - cascade reindex the authority, not the zone
     - Add a group (in 60,000 sites) to a site - cascade reindex the site, not the group
   - Caching of child associations already traversed during cascade reindexing
   - Site creation time much reduced!
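    The decision rule above as a hedged sketch (the threshold of 5 comes from the description; the class and method are illustrative, not the real indexer code):

       /** Illustrative only: pick the cheaper side to cascade reindex. */
       final class CascadeReindexDecision
       {
           static boolean shouldCascadeReindexParent(int parentAssocCountOfChild)
           {
               // A child with many parents (e.g. a group that is already in 60,000 sites)
               // would fan the cascade out across all of them, so reindexing the single
               // new parent (the site) is assumed to be cheaper beyond this threshold.
               return parentAssocCountOfChild > 5;
           }
       }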
   30407: Logic fix: Use 'delete only nodes' behaviour on DM index filtering and merging, now we are managing container deletions separately
   30408: Small correction related to last change.
   30409: Correction to deletion reindex behaviour (no need to regenerate masked out containers)
   - Site CRUD operations now all sub-second with 60,000 sites!
   30410: Stop the heartbeat from trying to load and count all site groups
   - Too expensive, as we might have 60,000 sites, each with 4 groups
   - Now just counts the groups in the default zone (the UI visible ones)
   30411: Increased lucene parameters to allow for 'path explosion'
    - 9 million Lucene documents in my index after creating 60,000 Share sites (most of them probably paths), resulting in sluggish index write performance
   - Set lucene.indexer.mergerTargetIndexCount=8 (142 documents in smallest index)
   - Increased lucene.indexer.maxDocsForInMemoryMerge, lucene.indexer.maxDocsForInMemoryIndex
   30412: Test fixes
   30413: Revert 'parent association batch loading' changes (as it was a bad idea and is no longer necessary!)
   - Retain a few caching bug fixes however
   30416: Moved UserAuthoritySet (lazy load authority set) from PermissionServiceImpl to AuthorityServiceImpl
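    For illustration, a minimal sketch of the 'lazy' set idea (hypothetical shape, not the real UserAuthoritySet): the expensive expansion of a user's containing authorities is deferred until the set is actually iterated or sized, so callers that never touch it pay nothing; the real class can additionally answer contains() cheaply via hasAuthority without forcing a full load.

       import java.util.AbstractSet;
       import java.util.Iterator;
       import java.util.Set;

       /** Illustrative sketch: defers the expensive authority expansion until first use. */
       abstract class LazyAuthoritySet extends AbstractSet<String>
       {
           private final String username;
           private Set<String> loaded; // null until the first real access

           LazyAuthoritySet(String username)
           {
               this.username = username;
           }

           /** Subclass supplies the expensive expansion (e.g. walking group membership). */
           protected abstract Set<String> loadAuthorities(String username);

           private Set<String> authorities()
           {
               if (loaded == null)
               {
                   loaded = loadAuthorities(username);
               }
               return loaded;
           }

           @Override
           public Iterator<String> iterator()
           {
               return authorities().iterator();
           }

           @Override
           public int size()
           {
               return authorities().size();
           }
       }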
   30418: - Remove 'new' hasAuthority from authorityService so it is back to where we started.
   - SiteServiceHugeTest minor changes
   30421: Prevent creation of a duplicate root node on updating the root
   - Use the ANCESTOR field rather than ISCONTAINER to detect a node document, as the root node is both a container and a node!
   30447: Pulled new indexing behaviour into ADMLuceneIndexerImpl and restored old behaviour to AVMLuceneIndexerImpl to restore normal AVM behaviour
   30448: - Cache in PermissionServiceImpl cleared if an authority container has an association added or removed
     Supports the generateKey method which includes the username
     Supports changes in group structures
    - Moved the ROLE_GUEST-related logic from PermissionServiceImpl to AuthorityServiceImpl
    30465: - Tidied up tests in SiteServiceTestHuge
    30532: - Added getContainingAuthoritiesInZone to AuthorityService (see the sketch below)
      - Dave changed PeopleService.getContainerGroups to only return groups in the DEFAULT zone
    - Fixed RM code to use the getAuthoritiesForUser method with just the username again.
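    A hedged sketch of the shape of the new lookup (the exact signature is an assumption based on this description and the cutoff parameter mentioned under r30748):

       import java.util.Set;

       /** Assumed shape only; the real method was added to AuthorityService in this change. */
       interface ContainingAuthoritiesInZoneLookup
       {
           /**
            * Return the authorities in the given zone (e.g. APP.SHARE) that transitively
            * contain the named authority, stopping once 'cutoff' results have been found,
            * so that a user who is in thousands of sites gets a fast, bounded answer.
            */
           Set<String> getContainingAuthoritiesInZone(String authorityName, String zoneName, int cutoff);
       }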
   30558: Build fixes
   - Fixed cycleCheck to throw a CyclicChildRelationshipException
   - More tidy up of AVM / ADM indexer split
   - Properly control when path generation is cascaded (not required on a full reindex or a tracker transaction)
   - Support indexing of a 'fake root' parent. Ouch my head hurts!
   30588: Build fixes
   - StringIndexOutOfBoundsException in NodeMonitor
   - Corrections to 'node only' delete behaviour
   - Use the PATH field to detect non-leaf nodes (it's the only stored field with which we can recognize the root)
   - Moved DOD5015Test.testVitalRecords() to the end - the only way I could work out how to get the full TestCase to run
   30600: More build fixes
   - Broadcast ALL node deletions to indexer (even those from cascade deletion of primary associations)
     - Allows indexer to wipe out all affected documents from the delta even if some have already been flushed under different parents by an intricate DOD unit test!
   - Pause FTS in DOD5015Test to prevent intermittent test failures (FTS can temporarily leave deleted documents in the index until it catches up)
   - More tidy up of ADMLuceneIndexerImpl
     - flushPending optimized and some unnecessary member variables removed
     - correction to cascade deletion behaviour (leave behind containers of unaffected secondary references)
     - unused MOVE action removed
     - further legacy logic moved into AVMLuceneIndexerImpl
   30620: More build fixes
   - Cope with a node morphing from a 'leaf' to a container during its lifetime
   - Container documents now created lazily in index as and when necessary
   - Blank out 'nth sibling' field of synthesized paths
   - ADMLuceneTest now passes!
   - TaggingServiceImplTest also passes - more special treatment for categories
   30627: Multi tenancy fixes
   30629: Possible build fix - retrying transaction in ReplicationServiceIntegrationTest.tearDown()
   30632: Build fix - lazy container generation after a move
   30636: Build fix: authority comparisons are case sensitive, even when that authority corresponds to a user (PermissionServiceTest.testPermissionCase())
    30638: Run SiteServiceTestHuge from the command line:
      set SITE_CPATH=%TOMCAT_HOME%/lib/*;%TOMCAT_HOME%/endorsed/*;%TOMCAT_HOME%/webapps/alfresco/WEB-INF/lib/*;\
                     %TOMCAT_HOME%/webapps/alfresco/WEB-INF/classes;%TOMCAT_HOME%/shared/classes;
      java -Xmx2048m -XX:MaxPermSize=512M -classpath %SITE_CPATH% org.alfresco.repo.site.SiteServiceTestHuge ...
   
      Usage: -Daction=usersOnly
             -Dfrom=<fromSiteId> -Dto=<toSiteId>
             -Dfrom=<fromSiteId> -Dto=<toSiteId> -Daction=sites  -Drestart=<restartAtSiteId>
             -Dfrom=<fromSiteId> -Dto=<toSiteId> -Daction=groups -Drestart=<restartAtSiteId>
   30639: Minor changes to commented out command line code for SiteServiceTestHuge
   30643: Round of improvements to MySites dashlet relating to huge DB testing:
    - 10,000 site database, user is a member of ~2000 sites
    - Improvements to site.lib.ftl and related SiteService methods
     - Returning the MySites dashlet for the user improved by an order of magnitude, from 7562ms to 618ms in the profiler (now ~350ms in the browser)
   30644: Fixed performance regression - too much opening and closing of the delta reader and writer
   30661: More reader opening / closing
    30668: Performance improvements to the Site Finder and to My Sites in the user profile page.
     - faster retrieval of site lists and site memberships (used by the Site Finder)
     - related improvements to the APIs used by this and by My Sites on the dashboard
   30713: Configuration for MySites dashlet maximum list size
   30725: Merged V3.4-BUG-FIX to DEV/ALAN/SITE_PERF
      30708: ALF-10040: Added missing ReferenceCountingReadOnlyIndexReaderFactory wrapper to IndexInfo.getMainIndexReferenceCountingReadOnlyIndexReader() to make it consistent with IndexInfo.getMainIndexReferenceCountingReadOnlyIndexReader(String, Set<String>, boolean) and allow SingleFieldSelectors to make it through from LeafScorer to the path caches! Affects ALL Lucene queries that run OUTSIDE of a transaction.
   30729: Use getAuthoritiesForUser rather than getContainingAuthorities if possible.
   SiteServiceTestHuge: command line version
    30733: Performance improvements to the user dashboard relating to the User Calendar
    - converted web-tier calendar dashlet to Ajax client-side rendering - faster user experience and also less load on the web-tier
    - improvements to query from Andy
    - maximum sites/list size to query now configurable (default 100 instead of previously 1000)
   30743: Restore site CRUD performance from cold caches
   - Introduced NodeService.getAllRootNodes(), returning all nodes in a store with the root aspect, backed by a transactional cache and invalidated at key points
   - Means indexing doesn't have to load all parent nodes just to check for 'fake roots'
   - Site CRUD performance now back to sub-second with 60,000 nodes
   30747: Improvement to previous checkin - prevent cross cluster invalidation of every store root when a single store drops out of the cache
   30748: User dashboard finally loading within seconds with 60,000 sites, 60 groups, 100 users (thanks mostly to Kev's UI changes)
    - post-process IBatis mapped statements with the MySQL dialect to apply fetchSize=Integer.MIN_VALUE to all _Limited statements (see the JDBC sketch after this list)
       - Means we can stream the first 10,000 site groups without the MySQL JDBC driver reading all 240,000 into memory
   - New NodeService getChildAssocs method with a maxResults argument (makes use of the above)
   - Perfected getContainingAuthoritiesInZone implementation, adding a cutoff parameter, allowing only the first 1000 site memberships to be returned quickly and caches to be warmed for ACL evaluations
   - New cache of first 10,000 groups in APP.SHARE zone
   - Cache sizes tuned for 60,000 site scenario
   - Site service warms caches on bootstrap
   - PreferencesService applies ASPECT_IGNORE_INHERITED_RULES to person node to prevent the rule service trying to crawl the group hierarchy on a preference save
   - WorkflowServiceImpl.getPooledTasks only looks in APP.DEFAULT zone (thus avoiding site group noise)
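    The fetchSize trick relies on MySQL Connector/J behaviour: a forward-only, read-only statement with fetchSize set to Integer.MIN_VALUE streams rows one at a time instead of buffering the whole result set. A minimal plain-JDBC sketch of the same idea (outside IBatis; the URL, credentials and exact SQL are illustrative assumptions):

       import java.sql.Connection;
       import java.sql.DriverManager;
       import java.sql.PreparedStatement;
       import java.sql.ResultSet;

       public class StreamingChildAssocsExample
       {
           public static void main(String[] args) throws Exception
           {
               Connection con = DriverManager.getConnection("jdbc:mysql://localhost/alfresco", "alfresco", "alfresco");
               try
               {
                   PreparedStatement stmt = con.prepareStatement(
                           "SELECT child_node_id FROM alf_child_assoc WHERE parent_node_id = ?",
                           ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
                   // MySQL Connector/J switches to row-by-row streaming for exactly this value,
                   // so the first 10,000 rows can be read without the driver loading all of them.
                   stmt.setFetchSize(Integer.MIN_VALUE);
                   stmt.setLong(1, 123L);
                   ResultSet rs = stmt.executeQuery();
                   int count = 0;
                   while (rs.next() && count < 10000)
                   {
                       count++;
                       // process rs.getLong(1) ...
                   }
                   rs.close();
                   stmt.close();
               }
               finally
               {
                   con.close();
               }
           }
       }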
   30749: Fix compilation errors
   30761: Minor change to SiteServiceTestHuge
   30762: Derek code review: Reworked fetchSize specification for select_ChildAssocsOfParent_Limited statement for MySQL
   - Now fetchSize stated explicitly in a MySQL specific config file resolved by the HierarchicalResourceLoader
   - No need for any Java-based post processing
   30763: Build fix: don't add a user into its own authorities (until specifically asked to)
   30767: Build fix
   - IBatis / MySQL needs a streaming result statement to be run in an isolation transaction (because it doesn't release PreparedStatements until the end)
   30771: Backed out previous change which was fundamentally flawed
   - Resolved underlying problem which was that the select_ChildAssocsOfParent_Limited SQL string needs to be unique in order to not cause confusion in the prepared statement cache
   30772: Backed out previous change which was fundamentally flawed
   - Resolved underlying problem which was that the select_ChildAssocsOfParent_Limited SQL string needs to be unique in order to not cause confusion in the prepared statement cache


git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@30797 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
This commit is contained in:
Dave Ward
2011-09-27 12:24:57 +00:00
parent f4830cff15
commit 2e62d4fb29
47 changed files with 3536 additions and 1028 deletions


@@ -55,9 +55,9 @@ import org.alfresco.repo.domain.usage.UsageDAO;
import org.alfresco.repo.policy.BehaviourFilter;
import org.alfresco.repo.security.permissions.AccessControlListProperties;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport.TxnReadState;
import org.alfresco.repo.transaction.TransactionAwareSingleton;
import org.alfresco.repo.transaction.TransactionListenerAdapter;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport.TxnReadState;
import org.alfresco.service.cmr.dictionary.DataTypeDefinition;
import org.alfresco.service.cmr.dictionary.DictionaryService;
import org.alfresco.service.cmr.dictionary.InvalidTypeException;
@@ -71,20 +71,20 @@ import org.alfresco.service.cmr.repository.DuplicateChildNodeNameException;
import org.alfresco.service.cmr.repository.InvalidNodeRefException;
import org.alfresco.service.cmr.repository.InvalidStoreRefException;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeRef.Status;
import org.alfresco.service.cmr.repository.Path;
import org.alfresco.service.cmr.repository.StoreRef;
import org.alfresco.service.cmr.repository.NodeRef.Status;
import org.alfresco.service.cmr.repository.datatype.DefaultTypeConverter;
import org.alfresco.service.namespace.QName;
import org.alfresco.service.transaction.ReadOnlyServerException;
import org.alfresco.service.transaction.TransactionService;
import org.alfresco.util.EqualsHelper;
import org.alfresco.util.EqualsHelper.MapValueComparison;
import org.alfresco.util.GUID;
import org.alfresco.util.Pair;
import org.alfresco.util.PropertyCheck;
import org.alfresco.util.ReadWriteLockExecuter;
import org.alfresco.util.SerializationUtils;
import org.alfresco.util.EqualsHelper.MapValueComparison;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.dao.ConcurrencyFailureException;
@@ -135,6 +135,15 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
* VALUE KEY: IGNORED<br/>
*/
private EntityLookupCache<StoreRef, Node, Serializable> rootNodesCache;
/**
* Cache for nodes with the root aspect by StoreRef:<br/>
* KEY: StoreRef<br/>
* VALUE: A set of nodes with the root aspect<br/>
*/
private SimpleCache<StoreRef, Set<NodeRef>> allRootNodesCache;
/**
* Bidirectional cache for the Node ID to Node lookups:<br/>
* KEY: Node ID<br/>
@@ -163,7 +172,7 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
* VALUE KEY: ChildByNameKey<br/>
*/
private EntityLookupCache<Long, ParentAssocsInfo, ChildByNameKey> parentAssocsCache;
/**
* Constructor. Set up various instance-specific members such as caches and locks.
*/
@@ -272,8 +281,18 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
cache,
CACHE_REGION_ROOT_NODES,
new RootNodesCacheCallbackDAO());
}
}
/**
* Set the cache that maintains the extended Store root node data
*
* @param cache the cache
*/
public void setAllRootNodesCache(SimpleCache<StoreRef, Set<NodeRef>> allRootNodesCache)
{
this.allRootNodesCache = allRootNodesCache;
}
/**
* Set the cache that maintains node ID-NodeRef cross referencing data
*
@@ -636,6 +655,48 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
return rootNodePair.getSecond().getNodePair();
}
}
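/**
 * Cache-backed lookup of all nodes carrying the root aspect in the given store;
 * populated lazily via getNodesWithAspects on a cache miss.
 */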
public Set<NodeRef> getAllRootNodes(StoreRef storeRef)
{
Set<NodeRef> rootNodes = allRootNodesCache.get(storeRef);
if (rootNodes == null)
{
final Map<StoreRef, Set<NodeRef>> allRootNodes = new HashMap<StoreRef, Set<NodeRef>>(97);
getNodesWithAspects(Collections.singleton(ContentModel.ASPECT_ROOT), 0L, Long.MAX_VALUE, new NodeRefQueryCallback()
{
@Override
public boolean handle(Pair<Long, NodeRef> nodePair)
{
NodeRef nodeRef = nodePair.getSecond();
StoreRef storeRef = nodeRef.getStoreRef();
Set<NodeRef> rootNodes = allRootNodes.get(storeRef);
if (rootNodes == null)
{
rootNodes = new HashSet<NodeRef>(97);
allRootNodes.put(storeRef, rootNodes);
}
rootNodes.add(nodeRef);
return true;
}
});
rootNodes = allRootNodes.get(storeRef);
if (rootNodes == null)
{
rootNodes = Collections.emptySet();
allRootNodes.put(storeRef, rootNodes);
}
for (Map.Entry<StoreRef, Set<NodeRef>> entry : allRootNodes.entrySet())
{
StoreRef entryStoreRef = entry.getKey();
// Prevent unnecessary cross-invalidation
if (!allRootNodesCache.contains(entryStoreRef))
{
allRootNodesCache.put(entryStoreRef, entry.getValue());
}
}
}
return rootNodes;
}
public Pair<Long, NodeRef> newStore(StoreRef storeRef)
{
@@ -684,6 +745,7 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
}
// All the NodeRef-based caches are invalid. ID-based caches are fine.
rootNodesCache.removeByKey(oldStoreRef);
allRootNodesCache.remove(oldStoreRef);
nodesCache.clear();
if (isDebugEnabled)
@@ -1251,7 +1313,7 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
childAssocRetryingHelper.doWithRetry(callback);
// Check for cyclic relationships
getPaths(newChildNode.getNodePair(), false);
cycleCheck(newChildNode.getNodePair());
// Update ACLs for moved tree
Long newParentAclId = newParentNode.getAclId();
@@ -1568,6 +1630,10 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
nodeUpdate.setAuditableProperties(auditableProps);
nodeUpdate.setUpdateAuditableProperties(true);
}
if (nodeAspects.contains(ContentModel.ASPECT_ROOT))
{
allRootNodesCache.remove(node.getNodePair().getSecond().getStoreRef());
}
// Remove value from the cache
nodesCache.removeByKey(nodeId);
@@ -2178,7 +2244,9 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
// If we are adding the sys:aspect_root, then the parent assocs cache is unreliable
if (newAspectQNames.contains(ContentModel.ASPECT_ROOT))
{
Pair <Long, NodeRef> nodePair = getNodePair(nodeId);
invalidateCachesByNodeId(null, nodeId, parentAssocsCache);
allRootNodesCache.remove(nodePair.getSecond().getStoreRef());
}
// Touch to bring into current txn
@@ -2226,7 +2294,9 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
// If we are removing the sys:aspect_root, then the parent assocs cache is unreliable
if (aspectQNames.contains(ContentModel.ASPECT_ROOT))
{
Pair <Long, NodeRef> nodePair = getNodePair(nodeId);
invalidateCachesByNodeId(null, nodeId, parentAssocsCache);
allRootNodesCache.remove(nodePair.getSecond().getStoreRef());
}
// Touch to bring into current txn
@@ -2563,12 +2633,12 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
QName assocQName,
String childNodeName)
{
ParentAssocsInfo parentAssocInfo = getParentAssocsCached(childNodeId);
ChildAssocEntity assoc = newChildAssocImpl(
parentNodeId, childNodeId, false, assocTypeQName, assocQName, childNodeName);
Long assocId = assoc.getId();
// update cache
ParentAssocsInfo parentAssocInfo = getParentAssocsCached(childNodeId);
parentAssocInfo = parentAssocInfo.addAssoc(assocId, assoc);
parentAssocInfo = parentAssocInfo.addAssoc(assocId, assoc, getCurrentTransactionId());
setParentAssocsCached(childNodeId, parentAssocInfo);
// Done
return assoc.getPair(qnameDAO);
@@ -2584,7 +2654,7 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
// Update cache
Long childNodeId = assoc.getChildNode().getId();
ParentAssocsInfo parentAssocInfo = getParentAssocsCached(childNodeId);
parentAssocInfo = parentAssocInfo.removeAssoc(assocId);
parentAssocInfo = parentAssocInfo.removeAssoc(assocId, getCurrentTransactionId());
setParentAssocsCached(childNodeId, parentAssocInfo);
// Delete it
int count = deleteChildAssocById(assocId);
@@ -2948,12 +3018,13 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
assoc.getParentNode().getNodePair(),
assoc.getChildNode().getNodePair());
}
resultsCallback.done();
}
else
{
// Decide whether we query or filter
ParentAssocsInfo parentAssocs = getParentAssocsCacheOnly(childNodeId);
if ((parentAssocs == null) || (parentAssocs.getParentAssocs().size() > PARENT_ASSOCS_CACHE_FILTER_THRESHOLD))
ParentAssocsInfo parentAssocs = getParentAssocsCached(childNodeId);
if (parentAssocs.getParentAssocs().size() > PARENT_ASSOCS_CACHE_FILTER_THRESHOLD)
{
// Query
selectParentAssocs(childNodeId, assocTypeQName, assocQName, isPrimary, resultsCallback);
@@ -2973,11 +3044,70 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
assoc.getChildNode().getNodePair());
}
}
resultsCallback.done();
}
}
}
/**
 * Potentially cheaper than evaluating all of a node's paths to check for child association cycles
 *
 * @param nodePair
 * the node to check
 */
public void cycleCheck(Pair<Long, NodeRef> nodePair)
{
CycleCallBack callback = new CycleCallBack();
callback.cycleCheck(nodePair);
if (callback.toThrow != null)
{
throw callback.toThrow;
}
}
class CycleCallBack implements ChildAssocRefQueryCallback
{
final Set<ChildAssociationRef> path = new HashSet<ChildAssociationRef>(97);
CyclicChildRelationshipException toThrow;
@Override
public void done()
{
}
@Override
public boolean handle(Pair<Long, ChildAssociationRef> childAssocPair, Pair<Long, NodeRef> parentNodePair,
Pair<Long, NodeRef> childNodePair)
{
ChildAssociationRef childAssociationRef = childAssocPair.getSecond();
if (!path.add(childAssociationRef))
{
// Remember exception we want to throw and exit. If we throw within here, it will be wrapped by IBatis
toThrow = new CyclicChildRelationshipException("Child Association Cycle Detected " + path, childAssociationRef);
return false;
}
cycleCheck(childNodePair);
path.remove(childAssociationRef);
return toThrow == null;
}
@Override
public boolean preLoadNodes()
{
return false;
}
public void cycleCheck(Pair<Long, NodeRef> nodePair)
{
getChildAssocs(nodePair.getFirst(), null, null, null, null, null, this);
}
};
public List<Path> getPaths(Pair<Long, NodeRef> nodePair, boolean primaryOnly) throws InvalidNodeRefException
{
// create storage for the paths - only need 1 bucket if we are looking for the primary path
@@ -3203,7 +3333,8 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
// Validate that we aren't pairing up a cached node with historic parent associations from an old
// transaction (or the other way around)
Long txnId = parentAssocsInfo.getTxnId();
if (txnId != null && !txnId.equals(child.getTransaction().getId()))
Long childTxnId = child.getTransaction().getId();
if (txnId != null && !txnId.equals(childTxnId))
{
if (logger.isDebugEnabled())
{
@@ -3211,7 +3342,17 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
+ " detected loading parent associations. Cached transaction ID: "
+ child.getTransaction().getId() + ", actual transaction ID: " + txnId);
}
invalidateNodeCaches(nodeId);
if (AlfrescoTransactionSupport.getTransactionReadState() != TxnReadState.TXN_READ_WRITE
|| !getCurrentTransaction().getId().equals(childTxnId))
{
// Force a reload of the node and its parent assocs
invalidateNodeCaches(nodeId);
}
else
{
// The node is for the current transaction, so only invalidate the parent assocs
invalidateCachesByNodeId(null, nodeId, parentAssocsCache);
}
}
else
{
@@ -3256,7 +3397,7 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
// Select all the parent associations
List<ChildAssocEntity> assocs = selectParentAssocs(nodeId);
// Retrieve the transaction ID from the DB for validation purposes - prevents skew between a cached node and
// its parent assocs
Long txnId = assocs.isEmpty() ? null : assocs.get(0).getChildNode().getTransaction().getId();
@@ -3516,6 +3657,12 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
HashSet<Long> qnameIdsSet = new HashSet<Long>(qnameIds);
Set<QName> qnames = qnameDAO.convertIdsToQNames(qnameIdsSet);
aspectsCache.setValue(nodeId, qnames);
aspectNodeIds.remove(nodeId);
}
// Cache the absence of aspects too!
for (Long nodeId: aspectNodeIds)
{
aspectsCache.setValue(nodeId, Collections.<QName>emptySet());
}
Map<Long, Map<NodePropertyKey, NodePropertyValue>> propsByNodeId = selectNodeProperties(propertiesNodeIds);