Merged DEV/ALAN/SITE_PERF to HEAD

30342: Dev branch for Site performance issues, including a rework of AuthorityService.getAuthorities() to use a 'lazy' set (see the sketch after this list) and a rework of DM indexing
   ALF-9899 Huge Share site migration: performance issues around adding groups to sites and user access to sites.
   ALF-9208 Performance issue: during load tests, /share/page/user/user-sites shows up as the most expensive page.
   ALF-9692 Performance: general performance of Alfresco degrades when thousands of sites are present
   - ancestor-preloading
   - hasAuthority
   - huge site test
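   A minimal, hypothetical sketch of the 'lazy' set idea (names are illustrative; the committed class is the UserAuthoritySet mentioned under 30416): the expensive authority walk is deferred until a caller actually needs the membership.

      import java.util.AbstractSet;
      import java.util.Iterator;
      import java.util.Set;
      import org.alfresco.service.cmr.security.AuthorityService;

      // Hypothetical sketch only, not the committed code.
      public class LazyAuthoritySet extends AbstractSet<String>
      {
          private final String userName;
          private final AuthorityService authorityService;
          private Set<String> backing; // loaded on first use

          public LazyAuthoritySet(String userName, AuthorityService authorityService)
          {
              this.userName = userName;
              this.authorityService = authorityService;
          }

          private Set<String> load()
          {
              if (backing == null)
              {
                  // One expensive walk, made at most once, and only if actually needed
                  backing = authorityService.getContainingAuthorities(null, userName, false);
              }
              return backing;
          }

          @Override
          public boolean contains(Object o) { return load().contains(o); }

          @Override
          public Iterator<String> iterator() { return load().iterator(); }

          @Override
          public int size() { return load().size(); }
      }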
   30370: - Saved changes to do with adding childAuthorityCache to AuthorityDAOImpl
   - Increased aspectsTransactionalCache size as it was overflowing
   30387: Experimental solution to 'cascading reindex' performance problem
   - Now only Lucene container documents for a single subtree are reprocessed on addition / removal of a secondary child association
   - No need to delete and re-evaluate ALL the paths to all the nodes in the subtree - just the paths within the subtree
   - Lucene deltas now store the IDs of ANCESTORs to mask out as well as documents to reindex
   - Merge handles deletion of these efficiently
   - Node service cycle checks changed from getPaths to recursive cycleCheck method
   - Adding a group to 60,000 sites might not require all paths to all sites to be re-evaluated on every change!
   30389: Missed files from last checkin
   30390: Optimizations / fixes to Alan's test!
   30393: Bug fix - wasn't adding new documents into the index!
   30397: Fixed a problem with bulk loading trying to bulk load zero parent associations
   Also tweaked reindex calls
   30399: Correction - don't cascade below containers during path cascading
   30400: Another optimization - no need to trigger node bulk loading during path cascading - pass false for the preload flag
   30404: Further optimizations
   - On creation of a secondary child association, make a decision on whether it is cheaper to cascade reindex the parent or the child, based on the number of parent associations to the child (sketched after this list)
     - Assumes that if there are more than 5 parent associations, it's cheaper to cascade reindex the parent
     - Add a new authority to a zone (containing 60,000 authorities) - cascade reindex the authority, not the zone
     - Add a group (in 60,000 sites) to a site - cascade reindex the site, not the group
   - Caching of child associations already traversed during cascade reindexing
   - Site creation time much reduced!
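   The parent-versus-child decision reads roughly like this (hypothetical sketch; the threshold of 5 is from the notes above, and cascadeReindex is an assumed helper):

      // Illustrative only: pick the cheaper subtree to re-path on a new secondary child association.
      private static final int PARENT_ASSOC_THRESHOLD = 5;

      void onSecondaryChildAssocCreated(NodeRef parent, NodeRef child)
      {
          int parentsOfChild = nodeService.getParentAssocs(child).size();
          if (parentsOfChild > PARENT_ASSOC_THRESHOLD)
          {
              // e.g. a group already in 60,000 sites: re-pathing the child would touch
              // every site, so reindex the single new parent instead
              cascadeReindex(parent);
          }
          else
          {
              // e.g. a new authority added to a huge zone: the child has few paths
              cascadeReindex(child);
          }
      }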
   30407: Logic fix: use 'delete only nodes' behaviour on DM index filtering and merging, now that we are managing container deletions separately
   30408: Small correction related to last change.
   30409: Correction to deletion reindex behaviour (no need to regenerate masked out containers)
   - Site CRUD operations now all sub-second with 60,000 sites!
   30410: Stop the heartbeat from trying to load and count all site groups
   - Too expensive, as we might have 60,000 sites, each with 4 groups
   - Now just counts the groups in the default zone (the UI-visible ones); see the sketch below
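   The cheaper count amounts to something like the following (assumes AuthorityService.getAllAuthoritiesInZone; illustrative, not the committed heartbeat code):

      // Count only the UI-visible groups in the default application zone,
      // instead of loading the 4 groups of each of 60,000 sites.
      int uiVisibleGroupCount = authorityService.getAllAuthoritiesInZone(
              AuthorityService.ZONE_APP_DEFAULT, AuthorityType.GROUP).size();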
   30411: Increased lucene parameters to allow for 'path explosion'
   - 9 million lucene documents in my index after creating 60,000 Share sites (most of them probably paths) resulting in sluggish index write performance
   - Set lucene.indexer.mergerTargetIndexCount=8 (142 documents in smallest index)
   - Increased lucene.indexer.maxDocsForInMemoryMerge, lucene.indexer.maxDocsForInMemoryIndex
   30412: Test fixes
   30413: Revert 'parent association batch loading' changes (as it was a bad idea and is no longer necessary!)
   - Retain a few caching bug fixes however
   30416: Moved UserAuthoritySet (lazy load authority set) from PermissionServiceImpl to AuthorityServiceImpl
   30418: - Remove 'new' hasAuthority from authorityService so it is back to where we started.
   - SiteServiceHugeTest minor changes
   30421: Prevent creation of a duplicate root node on updating the root
   - Use the ANCESTOR field rather than ISCONTAINER to detect a node document, as the root node is both a container and a node!
   30447: Pulled new indexing behaviour into ADMLuceneIndexerImpl and restored old behaviour to AVMLuceneIndexerImpl to restore normal AVM behaviour
   30448: - Cache in PermissionServiceImpl cleared if an authority container has an association added or removed (hook sketched after this list)
     Supports the generateKey method which includes the username
     Supports changes in group structures
   - Moved logic to do with ROLE_GUEST from PermissionServiceImpl to AuthorityServiceImpl 
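   The invalidation hook plausibly looks like this (hypothetical wiring and method names; the real bean changes are in the configuration diff below):

      // Hypothetical sketch of the cache invalidation described above.
      public void init()
      {
          policyComponent.bindAssociationBehaviour(
                  NodeServicePolicies.OnCreateChildAssociationPolicy.QNAME,
                  ContentModel.TYPE_AUTHORITY_CONTAINER,
                  new JavaBehaviour(this, "onCreateChildAssociation"));
      }

      public void onCreateChildAssociation(ChildAssociationRef childAssocRef, boolean isNewNode)
      {
          // Group structure changed: cached evaluations keyed on the parent are stale
          childAuthorityCache.remove(childAssocRef.getParentRef());
      }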
   30465: - Tidy up tests in SiteServiceTestHuge 
   30532: - Added getContainingAuthoritiesInZone to AuthorityService (usage sketched below)
     - Dave: changed PeopleService.getContainerGroups to only return groups in the DEFAULT zone
   - Fixed RM code to use the getAuthoritiesForUser method with just the username again.
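   Usage of the new method, as it appears in the script People API later in this change set:

      // At most 1000 containing groups, restricted to the default application zone,
      // so the tens of thousands of Share site groups are never touched.
      Set<String> authorities = authorityService.getContainingAuthoritiesInZone(
              AuthorityType.GROUP, userName, AuthorityService.ZONE_APP_DEFAULT, null, 1000);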
   30558: Build fixes
   - Fixed cycleCheck to throw a CyclicChildRelationshipException
   - More tidy up of AVM / ADM indexer split
   - Properly control when path generation is cascaded (not required on a full reindex or a tracker transaction)
   - Support indexing of a 'fake root' parent. Ouch my head hurts!
   30588: Build fixes
   - StringIndexOutOfBoundsException in NodeMonitor
   - Corrections to 'node only' delete behaviour
   - Use the PATH field to detect non-leaf nodes (it's the only stored field with which we can recognize the root)
   - Moved DOD5015Test.testVitalRecords() to the end - the only way I could work out how to get the full TestCase to run
   30600: More build fixes
   - Broadcast ALL node deletions to indexer (even those from cascade deletion of primary associations)
     - Allows indexer to wipe out all affected documents from the delta even if some have already been flushed under different parents by an intricate DOD unit test!
   - Pause FTS in DOD5015Test to prevent intermittent test failures (FTS can temporarily leave deleted documents in the index until it catches up)
   - More tidy up of ADMLuceneIndexerImpl
     - flushPending optimized and some unnecessary member variables removed
     - correction to cascade deletion behaviour (leave behind containers of unaffected secondary references)
     - unused MOVE action removed
     - further legacy logic moved into AVMLuceneIndexerImpl
   30620: More build fixes
   - Cope with a node morphing from a 'leaf' to a container during its lifetime
   - Container documents now created lazily in index as and when necessary
   - Blank out 'nth sibling' field of synthesized paths
   - ADMLuceneTest now passes!
   - TaggingServiceImplTest also passes - more special treatment for categories
   30627: Multi tenancy fixes
   30629: Possible build fix - retrying transaction in ReplicationServiceIntegrationTest.tearDown()
   30632: Build fix - lazy container generation after a move
   30636: Build fix: authority comparisons are case sensitive, even when that authority corresponds to a user (PermissionServiceTest.testPermissionCase())
   30638: Run SiteServiceTestHuge from a command line
      set SITE_CPATH=%TOMCAT_HOME%/lib/*;%TOMCAT_HOME%/endorsed/*;%TOMCAT_HOME%/webapps/alfresco/WEB-INF/lib/*;\
                     %TOMCAT_HOME%/webapps/alfresco/WEB-INF/classes;%TOMCAT_HOME%/shared/classes;
      java -Xmx2048m -XX:MaxPermSize=512M -classpath %SITE_CPATH% org.alfresco.repo.site.SiteServiceTestHuge ...
   
      Usage: -Daction=usersOnly
             -Dfrom=<fromSiteId> -Dto=<toSiteId>
             -Dfrom=<fromSiteId> -Dto=<toSiteId> -Daction=sites  -Drestart=<restartAtSiteId>
             -Dfrom=<fromSiteId> -Dto=<toSiteId> -Daction=groups -Drestart=<restartAtSiteId>
   30639: Minor changes to commented out command line code for SiteServiceTestHuge
   30643: Round of improvements to MySites dashlet relating to huge DB testing:
    - 10,000 site database, user is a member of ~2000 sites
    - Improvements to site.lib.ftl and related SiteService methods
    - To return MySites dashlet for the user, order of magnitude improvement from 7562ms to 618ms in the profiler (now ~350ms in the browser)
   30644: Fixed performance regression - too much opening and closing of the delta reader and writer
   30661: More reader opening / closing
   30668: Performance improvements to Site Finder and My Sites in user profile page.
    - faster to bring back lists and site memberships (used by the Site Finder)
    - related further improvements to APIs used by this and My Sites on dashboard
   30713: Configuration for MySites dashlet maximum list size
   30725: Merged V3.4-BUG-FIX to DEV/ALAN/SITE_PERF
      30708: ALF-10040: Added missing ReferenceCountingReadOnlyIndexReaderFactory wrapper to IndexInfo.getMainIndexReferenceCountingReadOnlyIndexReader() to make it consistent with IndexInfo.getMainIndexReferenceCountingReadOnlyIndexReader(String, Set<String>, boolean) and allow SingleFieldSelectors to make it through from LeafScorer to the path caches! Affects ALL Lucene queries that run OUTSIDE of a transaction.
   30729: Use getAuthoritiesForUser rather than getContainingAuthorities where possible (pattern shown below).
   SiteServiceTestHuge: command line version
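   The pattern, as applied in EmailServiceImpl and AVMLockingServiceImpl later in this change set:

      // Before: walk all containing authorities, then scan the result
      Set<String> containing = authorityService.getContainingAuthorities(null, user, false);
      boolean isMember = containing.contains(authority);
      // After: one call against the per-user authority set
      boolean isMemberNow = authorityService.getAuthoritiesForUser(user).contains(authority);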
   30733: Performance improvements to the user dashboard relating to the User Calendar
    - converted web-tier calendar dashlet to Ajax client-side rendering - faster user experience and also less load on the web-tier
    - improvements to query from Andy
    - maximum sites/list size to query now configurable (default 100 instead of previously 1000)
   30743: Restore site CRUD performance from cold caches
   - Introduced NodeService.getAllRootNodes(), returning all nodes in a store with the root aspect, backed by a transactional cache and invalidated at key points
   - Means indexing doesn't have to load all parent nodes just to check for 'fake roots' (see the sketch below)
   - Site CRUD performance now back to sub-second with 60,000 nodes
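   A hedged sketch of how indexing can use the new call (the membership test is an assumption about intent, not the committed code):

      // One cached set lookup replaces bulk-loading every parent node just to
      // find out whether it carries the root aspect (a 'fake root').
      Set<NodeRef> rootNodes = nodeService.getAllRootNodes(storeRef);
      boolean parentIsFakeRoot = rootNodes.contains(parentRef);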
   30747: Improvement to previous checkin: prevent cross-cluster invalidation of every store root when a single store drops out of the cache
   30748: User dashboard finally loading within seconds with 60,000 sites, 60 groups, 100 users (thanks mostly to Kev's UI changes)
   - Post-process iBatis mapped statements with the MySQL dialect to apply fetchSize=Integer.MIN_VALUE to all _Limited statements
      - Means we can stream the first 10,000 site groups without the MySQL JDBC driver reading all 240,000 into memory (see the JDBC note after this list)
   - New NodeService getChildAssocs method with a maxResults argument (makes use of the above)
   - Perfected getContainingAuthoritiesInZone implementation, adding a cutoff parameter, allowing only the first 1000 site memberships to be returned quickly and caches to be warmed for ACL evaluations
   - New cache of first 10,000 groups in APP.SHARE zone
   - Cache sizes tuned for 60,000 site scenario
   - Site service warms caches on bootstrap
   - PreferencesService applies ASPECT_IGNORE_INHERITED_RULES to person node to prevent the rule service trying to crawl the group hierarchy on a preference save
   - WorkflowServiceImpl.getPooledTasks only looks in APP.DEFAULT zone (thus avoiding site group noise)
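   For background, the fetchSize trick in plain JDBC terms (MySQL Connector/J behaviour; context only, not code from this change):

      // A fetch size of Integer.MIN_VALUE switches the MySQL driver to row-by-row
      // streaming, so reading can stop after the first N rows without the driver
      // buffering the whole result set in memory.
      Statement stmt = connection.createStatement(
              ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
      stmt.setFetchSize(Integer.MIN_VALUE);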
   30749: Fix compilation errors
   30761: Minor change to SiteServiceTestHuge
   30762: Derek code review: Reworked fetchSize specification for select_ChildAssocsOfParent_Limited statement for MySQL
   - Now fetchSize stated explicitly in a MySQL specific config file resolved by the HierarchicalResourceLoader
   - No need for any Java-based post processing
   30763: Build fix: don't add a user into its own authorities (until specifically asked to)
   30767: Build fix
- iBatis / MySQL needs a streaming result statement to be run in an isolated transaction (because it doesn't release PreparedStatements until the end)
   30771: Backed out previous change which was fundamentally flawed
   - Resolved underlying problem which was that the select_ChildAssocsOfParent_Limited SQL string needs to be unique in order to not cause confusion in the prepared statement cache
   30772: Backed out previous change which was fundamentally flawed
   - Resolved underlying problem which was that the select_ChildAssocsOfParent_Limited SQL string needs to be unique in order to not cause confusion in the prepared statement cache


git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@30797 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
Dave Ward 2011-09-27 12:24:57 +00:00
parent f4830cff15
commit 2e62d4fb29
47 changed files with 3536 additions and 1028 deletions

View File

@ -82,6 +82,12 @@
<property name="userAuthorityCache">
<ref bean="userToAuthorityCache" />
</property>
<property name="childAuthorityCache">
<ref bean="authorityToChildAuthorityCache" />
</property>
<property name="zoneAuthorityCache">
<ref bean="zoneToAuthorityCache" />
</property>
<property name="policyComponent">
<ref bean="policyComponent"/>
</property>

View File

@ -784,6 +784,12 @@
<property name="companyHomePath"><value>/${spaces.company_home.childname}</value></property>
</bean>
<!-- Site service cache warmer -->
<bean id="siteServiceBootstrap" class="org.alfresco.repo.site.SiteServiceBootstrap">
<property name="siteService" ref="SiteService" />
<property name="tenantAdminService" ref="tenantAdminService" />
</bean>
<!-- Scheduled persisted actions - load into quartz -->
<bean id="scheduledPersistedActionServiceBootstrap" class="org.alfresco.repo.action.scheduled.ScheduledPersistedActionServiceImpl$ScheduledPersistedActionServiceBootstrap">
<property name="scheduledPersistedActionService" ref="scheduledPersistedActionService" />

View File

@ -148,6 +148,35 @@
<property name="disableSharedCache" value="${system.cache.disableImmutableSharedCaches}" />
</bean>
<!-- The cross-transaction shared cache for Root Nodes -->
<bean name="node.allRootNodesSharedCache" class="org.alfresco.repo.cache.EhCacheAdapter">
<property name="cache">
<bean class="org.springframework.cache.ehcache.EhCacheFactoryBean" >
<property name="cacheManager">
<ref bean="internalEHCacheManager" />
</property>
<property name="cacheName">
<value>org.alfresco.cache.node.allRootNodesCache</value>
</property>
</bean>
</property>
</bean>
<!-- The transactional cache for Root Nodes -->
<bean name="node.allRootNodesCache" class="org.alfresco.repo.cache.TransactionalCache">
<property name="sharedCache">
<ref bean="node.allRootNodesSharedCache" />
</property>
<property name="name">
<value>org.alfresco.cache.node.allRootNodesTransactionalCache</value>
</property>
<property name="maxCacheSize" value="500" />
<property name="mutable" value="false" />
<property name="disableSharedCache" value="${system.cache.disableImmutableSharedCaches}" />
</bean>
<!-- ===================================== -->
<!-- Nodes lookup -->
<!-- ===================================== -->
@ -176,7 +205,7 @@
<property name="name">
<value>org.alfresco.cache.node.nodesTransactionalCache</value>
</property>
<property name="maxCacheSize" value="50000" />
<property name="maxCacheSize" value="80000" />
<property name="mutable" value="true" />
<property name="disableSharedCache" value="${system.cache.disableMutableSharedCaches}" />
</bean>
@ -209,7 +238,7 @@
<property name="name">
<value>org.alfresco.cache.node.aspectsTransactionalCache</value>
</property>
<property name="maxCacheSize" value="10000" />
<property name="maxCacheSize" value="50000" />
<property name="mutable" value="true" />
<property name="disableSharedCache" value="${system.cache.disableMutableSharedCaches}" />
</bean>
@ -392,7 +421,73 @@
<property name="name">
<value>org.alfresco.authorityTransactionalCache</value>
</property>
<property name="maxCacheSize" value="100" />
<property name="maxCacheSize" value="10000" />
<property name="mutable" value="true" />
<property name="disableSharedCache" value="${system.cache.disableMutableSharedCaches}" />
</bean>
<!-- ================================================ -->
<!-- Authority NodeRef lookup to ChildAssociationRefs -->
<!-- ================================================ -->
<!-- The cross-transaction shared cache for authority containers -->
<bean name="authorityToChildAuthoritySharedCache" class="org.alfresco.repo.cache.EhCacheAdapter">
<property name="cache">
<bean class="org.springframework.cache.ehcache.EhCacheFactoryBean" >
<property name="cacheManager">
<ref bean="internalEHCacheManager" />
</property>
<property name="cacheName">
<value>org.alfresco.cache.authorityToChildAuthorityCache</value>
</property>
</bean>
</property>
</bean>
<!-- The transactional cache for authority containers -->
<bean name="authorityToChildAuthorityCache" class="org.alfresco.repo.cache.TransactionalCache">
<property name="sharedCache">
<ref bean="authorityToChildAuthoritySharedCache" />
</property>
<property name="name">
<value>org.alfresco.authorityToChildAuthorityTransactionalCache</value>
</property>
<property name="maxCacheSize" value="40000" />
<property name="mutable" value="true" />
<property name="disableSharedCache" value="${system.cache.disableMutableSharedCaches}" />
</bean>
<!-- ================================================ -->
<!-- Zone lookup to ChildAssociationRefs -->
<!-- ================================================ -->
<!-- The cross-transaction shared cache for authority containers -->
<bean name="zoneToAuthoritySharedCache" class="org.alfresco.repo.cache.EhCacheAdapter">
<property name="cache">
<bean class="org.springframework.cache.ehcache.EhCacheFactoryBean" >
<property name="cacheManager">
<ref bean="internalEHCacheManager" />
</property>
<property name="cacheName">
<value>org.alfresco.cache.zoneToAuthorityCache</value>
</property>
</bean>
</property>
</bean>
<!-- The transactional cache for authority containers -->
<bean name="zoneToAuthorityCache" class="org.alfresco.repo.cache.TransactionalCache">
<property name="sharedCache">
<ref bean="zoneToAuthoritySharedCache" />
</property>
<property name="name">
<value>org.alfresco.zoneToAuthorityTransactionalCache</value>
</property>
<property name="maxCacheSize" value="500" />
<property name="mutable" value="true" />
<property name="disableSharedCache" value="${system.cache.disableMutableSharedCaches}" />
</bean>

View File

@ -115,6 +115,7 @@
<property name="localeDAO" ref="localeDAO"/>
<property name="usageDAO" ref="usageDAO"/>
<property name="rootNodesCache" ref="node.rootNodesCache"/>
<property name="allRootNodesCache" ref="node.allRootNodesCache"/>
<property name="nodesCache" ref="node.nodesCache"/>
<property name="aspectsCache" ref="node.aspectsCache"/>
<property name="propertiesCache" ref="node.propertiesCache"/>

View File

@ -42,6 +42,13 @@
overflowToDisk="false"
statistics="false"
/>
<cache
name="org.alfresco.cache.node.allRootNodesCache"
maxElementsInMemory="500"
eternal="true"
overflowToDisk="false"
statistics="false"
/>
<cache
name="org.alfresco.cache.node.nodesCache"
maxElementsInMemory="100000"
@ -52,7 +59,7 @@
/>
<cache
name="org.alfresco.cache.node.aspectsCache"
maxElementsInMemory="40000"
maxElementsInMemory="80000"
eternal="false"
timeToLiveSeconds="60"
overflowToDisk="false"
@ -154,6 +161,20 @@
overflowToDisk="false"
statistics="false"
/>
<cache
name="org.alfresco.cache.authorityToChildAuthorityCache"
maxElementsInMemory="40000"
eternal="true"
overflowToDisk="false"
statistics="false"
/>
<cache
name="org.alfresco.cache.zoneToAuthorityCache"
maxElementsInMemory="500"
eternal="true"
overflowToDisk="false"
statistics="false"
/>
<cache
name="org.alfresco.cache.authenticationCache"
maxElementsInMemory="5000"
@ -163,7 +184,7 @@
/>
<cache
name="org.alfresco.cache.authorityCache"
maxElementsInMemory="5000"
maxElementsInMemory="10000"
eternal="true"
overflowToDisk="false"
statistics="false"

View File

@ -104,6 +104,25 @@
replicateAsynchronously = false"/>
</cache>
<cache
name="org.alfresco.cache.node.allRootNodesCache"
maxElementsInMemory="500"
eternal="true"
timeToIdleSeconds="0"
timeToLiveSeconds="0"
overflowToDisk="false"
statistics="false"
>
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="replicatePuts = false,
replicateUpdates = true,
replicateRemovals = true,
replicateUpdatesViaCopy = false,
replicateAsynchronously = false"/>
</cache>
<cache
name="org.alfresco.cache.node.nodesCache"
maxElementsInMemory="100000"
@ -351,6 +370,40 @@
replicateAsynchronously = false"/>
</cache>
<cache
name="org.alfresco.cache.authorityToChildAuthorityCache"
maxElementsInMemory="40000"
eternal="true"
overflowToDisk="false"
statistics="false"
>
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="replicatePuts = false,
replicateUpdates = true,
replicateRemovals = true,
replicateUpdatesViaCopy = false,
replicateAsynchronously = false"/>
</cache>
<cache
name="org.alfresco.cache.zoneToAuthorityCache"
maxElementsInMemory="500"
eternal="true"
overflowToDisk="false"
statistics="false"
>
<cacheEventListenerFactory
class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
properties="replicatePuts = false,
replicateUpdates = true,
replicateRemovals = true,
replicateUpdatesViaCopy = false,
replicateAsynchronously = false"/>
</cache>
<cache
name="org.alfresco.cache.authenticationCache"
maxElementsInMemory="5000"
@ -370,7 +423,7 @@
<cache
name="org.alfresco.cache.authorityCache"
maxElementsInMemory="5000"
maxElementsInMemory="10000"
eternal="true"
overflowToDisk="false"
statistics="false"

View File

@ -183,6 +183,7 @@ Inbound settings from iBatis
<mapper resource="alfresco/ibatis/#resource.dialect#/content-common-SqlMap.xml"/>
<mapper resource="alfresco/ibatis/#resource.dialect#/content-insert-SqlMap.xml"/>
<mapper resource="alfresco/ibatis/#resource.dialect#/node-common-SqlMap.xml"/>
<mapper resource="alfresco/ibatis/#resource.dialect#/node-select-children-SqlMap.xml"/>
<mapper resource="alfresco/ibatis/#resource.dialect#/node-update-acl-SqlMap.xml"/>
<mapper resource="alfresco/ibatis/#resource.dialect#/node-insert-SqlMap.xml"/>
<mapper resource="alfresco/ibatis/#resource.dialect#/patch-common-SqlMap.xml"/>

View File

@ -921,6 +921,42 @@
assoc.assoc_index ASC,
assoc.id ASC
</sql>
<sql id="select_ChildAssocsOfParent_Query">
<include refid="alfresco.node.select_ChildAssoc_Results"/>
<include refid="alfresco.node.select_ChildAssoc_FromSimple"/>
where
parentNode.id = #{parentNode.id}
<if test="childNode != null">and assoc.child_node_id = #{childNode.id}</if>
<if test="typeQNameIds != null">
and assoc.type_qname_id in
<foreach item="item" index="index" collection="typeQNameIds" open="(" separator="," close=")">
#{item}
</foreach>
</if>
<if test="typeQNameId != null">and assoc.type_qname_id = #{typeQNameId}</if>
<if test="qnameCrc != null">and assoc.qname_crc = #{qnameCrc}</if>
<if test="qnameNamespaceId != null">and assoc.qname_ns_id = #{qnameNamespaceId}</if>
<if test="qnameLocalName != null">and assoc.qname_localname = #{qnameLocalName}</if>
<if test="isPrimary != null">and assoc.is_primary = #{isPrimary}</if>
<if test="childNodeName != null">and assoc.child_node_name = #{childNodeName}</if>
<if test="childNodeNameCrc != null">and assoc.child_node_name_crc = #{childNodeNameCrc}</if>
<if test="childNodeNameCrcs != null">
and child_node_name_crc in
<foreach item="item" index="index" collection="childNodeNameCrcs" open="(" separator="," close=")">
#{item}
</foreach>
</if>
<if test="childNodeTypeQNameIds != null">
and childNode.type_qname_id in
<foreach item="item" index="index" collection="childNodeTypeQNameIds" open="(" separator="," close=")">
#{item}
</foreach>
</if>
<if test="sameStore != null">
<if test="sameStore == true">and parentStore.id = childStore.id</if>
<if test="sameStore == false"><![CDATA[and parentStore.id <> childStore.id]]></if>
</if>
</sql>
<select id="select_ChildAssocById" parameterType="ChildAssoc" resultMap="result_ChildAssoc">
<include refid="alfresco.node.select_ChildAssoc_Results"/>
@ -1048,40 +1084,7 @@
</select>
<select id="select_ChildAssocsOfParent" parameterType="ChildAssoc" resultMap="result_ChildAssoc">
<include refid="alfresco.node.select_ChildAssoc_Results"/>
<include refid="alfresco.node.select_ChildAssoc_FromSimple"/>
where
parentNode.id = #{parentNode.id}
<if test="childNode != null">and assoc.child_node_id = #{childNode.id}</if>
<if test="typeQNameIds != null">
and assoc.type_qname_id in
<foreach item="item" index="index" collection="typeQNameIds" open="(" separator="," close=")">
#{item}
</foreach>
</if>
<if test="typeQNameId != null">and assoc.type_qname_id = #{typeQNameId}</if>
<if test="qnameCrc != null">and assoc.qname_crc = #{qnameCrc}</if>
<if test="qnameNamespaceId != null">and assoc.qname_ns_id = #{qnameNamespaceId}</if>
<if test="qnameLocalName != null">and assoc.qname_localname = #{qnameLocalName}</if>
<if test="isPrimary != null">and assoc.is_primary = #{isPrimary}</if>
<if test="childNodeName != null">and assoc.child_node_name = #{childNodeName}</if>
<if test="childNodeNameCrc != null">and assoc.child_node_name_crc = #{childNodeNameCrc}</if>
<if test="childNodeNameCrcs != null">
and child_node_name_crc in
<foreach item="item" index="index" collection="childNodeNameCrcs" open="(" separator="," close=")">
#{item}
</foreach>
</if>
<if test="childNodeTypeQNameIds != null">
and childNode.type_qname_id in
<foreach item="item" index="index" collection="childNodeTypeQNameIds" open="(" separator="," close=")">
#{item}
</foreach>
</if>
<if test="sameStore != null">
<if test="sameStore == true">and parentStore.id = childStore.id</if>
<if test="sameStore == false"><![CDATA[and parentStore.id <> childStore.id]]></if>
</if>
<include refid="alfresco.node.select_ChildAssocsOfParent_Query"/>
<if test="ordered == true">
<include refid="alfresco.node.select_ChildAssoc_OrderBy"/>
</if>

View File

@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="alfresco.node.select.children">
<select id="select_ChildAssocsOfParent_Limited" parameterType="ChildAssoc" resultMap="result_ChildAssoc">
<include refid="alfresco.node.select_ChildAssocsOfParent_Query"/>
<if test="ordered == true">
<include refid="alfresco.node.select_ChildAssoc_OrderBy"/>
</if>
</select>
</mapper>

View File

@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="alfresco.node.select.children">
<!-- Note the MySQL specific fetch size limitation (Integer.MIN_VALUE) on this statement. This activates result set streaming. -->
<select id="select_ChildAssocsOfParent_Limited" parameterType="ChildAssoc" resultMap="result_ChildAssoc" fetchSize="-2147483648">
<include refid="alfresco.node.select_ChildAssocsOfParent_Query"/>
and 1=1 <!-- This part present to make the SQL string unique WRT the prepared statement cache -->
<if test="ordered == true">
<include refid="alfresco.node.select_ChildAssoc_OrderBy"/>
</if>
</select>
</mapper>

View File

@ -359,6 +359,7 @@
org.alfresco.service.cmr.repository.NodeService.exists=ACL_ALLOW
org.alfresco.service.cmr.repository.NodeService.getNodeStatus=ACL_NODE.0.sys:base.ReadProperties
org.alfresco.service.cmr.repository.NodeService.getNodeRef=AFTER_ACL_NODE.sys:base.ReadProperties
org.alfresco.service.cmr.repository.NodeService.getAllRootNodes=ACL_NODE.0.sys:base.ReadProperties,AFTER_ACL_NODE.sys:base.ReadProperties
org.alfresco.service.cmr.repository.NodeService.getRootNode=ACL_NODE.0.sys:base.ReadProperties
org.alfresco.service.cmr.repository.NodeService.createNode=ACL_NODE.0.sys:base.CreateChildren
org.alfresco.service.cmr.repository.NodeService.moveNode=ACL_NODE.0.sys:base.DeleteNode,ACL_NODE.1.sys:base.CreateChildren
@ -758,6 +759,7 @@
org.alfresco.service.cmr.security.AuthorityService.deleteAuthority=ACL_METHOD.ROLE_ADMINISTRATOR
org.alfresco.service.cmr.security.AuthorityService.getContainedAuthorities=ACL_ALLOW
org.alfresco.service.cmr.security.AuthorityService.getContainingAuthorities=ACL_ALLOW
org.alfresco.service.cmr.security.AuthorityService.getContainingAuthoritiesInZone=ACL_ALLOW
org.alfresco.service.cmr.security.AuthorityService.getShortName=ACL_ALLOW
org.alfresco.service.cmr.security.AuthorityService.getName=ACL_ALLOW
org.alfresco.service.cmr.security.AuthorityService.authorityExists=ACL_ALLOW

View File

@ -269,12 +269,12 @@ lucene.indexer.writerRamBufferSizeMb=16
#
# Target number of indexes and deltas in the overall index and what index size to merge in memory
#
lucene.indexer.mergerTargetIndexCount=5
lucene.indexer.mergerTargetIndexCount=8
lucene.indexer.mergerTargetOverlayCount=5
lucene.indexer.mergerTargetOverlaysBlockingFactor=2
lucene.indexer.maxDocsForInMemoryMerge=10000
lucene.indexer.maxDocsForInMemoryMerge=60000
lucene.indexer.maxRamInMbForInMemoryMerge=16
lucene.indexer.maxDocsForInMemoryIndex=10000
lucene.indexer.maxDocsForInMemoryIndex=60000
lucene.indexer.maxRamInMbForInMemoryIndex=16
#
# Other lucene properties

View File

@ -381,7 +381,7 @@ public class EmailServiceImpl implements EmailService
*/
private boolean isEmailContributeUser(String userName)
{
return this.authorityService.getContainingAuthorities(AuthorityType.GROUP, userName, false).contains(
return this.authorityService.getAuthoritiesForUser(userName).contains(
authorityService.getName(AuthorityType.GROUP, "EMAIL_CONTRIBUTORS"));
}
}

View File

@ -336,6 +336,11 @@ public class AVMNodeService extends AbstractNodeServiceImpl implements NodeServi
}
}
public Set<NodeRef> getAllRootNodes(StoreRef storeRef)
{
return Collections.singleton(getRootNode(storeRef));
}
/**
* @see #createNode(NodeRef, QName, QName, QName, Map)
*/
@ -1662,7 +1667,17 @@ public class AVMNodeService extends AbstractNodeServiceImpl implements NodeServi
return result;
}
@Override
public List<ChildAssociationRef> getChildAssocs(NodeRef nodeRef, QName typeQName, QName qname, int maxResults,
boolean preload) throws InvalidNodeRefException
{
List<ChildAssociationRef> result = getChildAssocs(nodeRef, typeQName, qname);
if (result.size() > maxResults)
{
return result.subList(0, maxResults);
}
return result;
}
public List<ChildAssociationRef> getChildAssocs(NodeRef nodeRef, QNamePattern typeQNamePattern,
QNamePattern qnamePattern, boolean preload) throws InvalidNodeRefException

View File

@ -111,6 +111,8 @@ public class AVMServiceConcurrentTest extends AVMServiceTestBase
testTX = fTransactionService.getUserTransaction();
testTX.begin();
try
{
searchService = fIndexerAndSearcher.getSearcher(AVMNodeConverter.ToStoreRef("main"), true);
results = searchService.query(storeRef, "lucene", "PATH:\"/test/*\"");
@ -121,7 +123,11 @@ public class AVMServiceConcurrentTest extends AVMServiceTestBase
assertEquals(loops, results.length());
results.close();
testTX.commit();
}
finally
{
try { testTX.commit(); } catch (Exception e) {}
}
// delete
@ -233,7 +239,8 @@ public class AVMServiceConcurrentTest extends AVMServiceTestBase
testTX = fTransactionService.getUserTransaction();
testTX.begin();
try
{
searchService = fIndexerAndSearcher.getSearcher(AVMNodeConverter.ToStoreRef("main"), true);
results = searchService.query(storeRef, "lucene", "PATH:\"/test/*\"");
for(ResultSetRow row : results)
@ -242,8 +249,11 @@ public class AVMServiceConcurrentTest extends AVMServiceTestBase
}
assertEquals(loops, results.length());
results.close();
testTX.commit();
}
finally
{
try { testTX.commit(); } catch (Exception e) {}
}
// update

View File

@ -483,15 +483,7 @@ public class AVMLockingServiceImpl implements AVMLockingService
{
return true;
}
Set<String> containing = authorityService.getContainingAuthorities(null, user, false);
for (String parent : containing)
{
if (parent.equalsIgnoreCase(authority))
{
return true;
}
}
return false;
return authorityService.getAuthoritiesForUser(user).contains(authority);
}
/**

View File

@ -55,9 +55,9 @@ import org.alfresco.repo.domain.usage.UsageDAO;
import org.alfresco.repo.policy.BehaviourFilter;
import org.alfresco.repo.security.permissions.AccessControlListProperties;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport.TxnReadState;
import org.alfresco.repo.transaction.TransactionAwareSingleton;
import org.alfresco.repo.transaction.TransactionListenerAdapter;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport.TxnReadState;
import org.alfresco.service.cmr.dictionary.DataTypeDefinition;
import org.alfresco.service.cmr.dictionary.DictionaryService;
import org.alfresco.service.cmr.dictionary.InvalidTypeException;
@ -71,20 +71,20 @@ import org.alfresco.service.cmr.repository.DuplicateChildNodeNameException;
import org.alfresco.service.cmr.repository.InvalidNodeRefException;
import org.alfresco.service.cmr.repository.InvalidStoreRefException;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeRef.Status;
import org.alfresco.service.cmr.repository.Path;
import org.alfresco.service.cmr.repository.StoreRef;
import org.alfresco.service.cmr.repository.NodeRef.Status;
import org.alfresco.service.cmr.repository.datatype.DefaultTypeConverter;
import org.alfresco.service.namespace.QName;
import org.alfresco.service.transaction.ReadOnlyServerException;
import org.alfresco.service.transaction.TransactionService;
import org.alfresco.util.EqualsHelper;
import org.alfresco.util.EqualsHelper.MapValueComparison;
import org.alfresco.util.GUID;
import org.alfresco.util.Pair;
import org.alfresco.util.PropertyCheck;
import org.alfresco.util.ReadWriteLockExecuter;
import org.alfresco.util.SerializationUtils;
import org.alfresco.util.EqualsHelper.MapValueComparison;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.dao.ConcurrencyFailureException;
@ -135,6 +135,15 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
* VALUE KEY: IGNORED<br/>
*/
private EntityLookupCache<StoreRef, Node, Serializable> rootNodesCache;
/**
* Cache for nodes with the root aspect by StoreRef:<br/>
* KEY: StoreRef<br/>
* VALUE: A set of nodes with the root aspect<br/>
*/
private SimpleCache<StoreRef, Set<NodeRef>> allRootNodesCache;
/**
* Bidirectional cache for the Node ID to Node lookups:<br/>
* KEY: Node ID<br/>
@ -274,6 +283,16 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
new RootNodesCacheCallbackDAO());
}
/**
* Set the cache that maintains the extended Store root node data
*
* @param cache the cache
*/
public void setAllRootNodesCache(SimpleCache<StoreRef, Set<NodeRef>> allRootNodesCache)
{
this.allRootNodesCache = allRootNodesCache;
}
/**
* Set the cache that maintains node ID-NodeRef cross referencing data
*
@ -637,6 +656,48 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
}
}
public Set<NodeRef> getAllRootNodes(StoreRef storeRef)
{
Set<NodeRef> rootNodes = allRootNodesCache.get(storeRef);
if (rootNodes == null)
{
final Map<StoreRef, Set<NodeRef>> allRootNodes = new HashMap<StoreRef, Set<NodeRef>>(97);
getNodesWithAspects(Collections.singleton(ContentModel.ASPECT_ROOT), 0L, Long.MAX_VALUE, new NodeRefQueryCallback()
{
@Override
public boolean handle(Pair<Long, NodeRef> nodePair)
{
NodeRef nodeRef = nodePair.getSecond();
StoreRef storeRef = nodeRef.getStoreRef();
Set<NodeRef> rootNodes = allRootNodes.get(storeRef);
if (rootNodes == null)
{
rootNodes = new HashSet<NodeRef>(97);
allRootNodes.put(storeRef, rootNodes);
}
rootNodes.add(nodeRef);
return true;
}
});
rootNodes = allRootNodes.get(storeRef);
if (rootNodes == null)
{
rootNodes = Collections.emptySet();
allRootNodes.put(storeRef, rootNodes);
}
for (Map.Entry<StoreRef, Set<NodeRef>> entry : allRootNodes.entrySet())
{
StoreRef entryStoreRef = entry.getKey();
// Prevent unnecessary cross-invalidation
if (!allRootNodesCache.contains(entryStoreRef))
{
allRootNodesCache.put(entryStoreRef, entry.getValue());
}
}
}
return rootNodes;
}
public Pair<Long, NodeRef> newStore(StoreRef storeRef)
{
// Create the store
@ -684,6 +745,7 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
}
// All the NodeRef-based caches are invalid. ID-based caches are fine.
rootNodesCache.removeByKey(oldStoreRef);
allRootNodesCache.remove(oldStoreRef);
nodesCache.clear();
if (isDebugEnabled)
@ -1251,7 +1313,7 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
childAssocRetryingHelper.doWithRetry(callback);
// Check for cyclic relationships
getPaths(newChildNode.getNodePair(), false);
cycleCheck(newChildNode.getNodePair());
// Update ACLs for moved tree
Long newParentAclId = newParentNode.getAclId();
@ -1568,6 +1630,10 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
nodeUpdate.setAuditableProperties(auditableProps);
nodeUpdate.setUpdateAuditableProperties(true);
}
if (nodeAspects.contains(ContentModel.ASPECT_ROOT))
{
allRootNodesCache.remove(node.getNodePair().getSecond().getStoreRef());
}
// Remove value from the cache
nodesCache.removeByKey(nodeId);
@ -2178,7 +2244,9 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
// If we are adding the sys:aspect_root, then the parent assocs cache is unreliable
if (newAspectQNames.contains(ContentModel.ASPECT_ROOT))
{
Pair <Long, NodeRef> nodePair = getNodePair(nodeId);
invalidateCachesByNodeId(null, nodeId, parentAssocsCache);
allRootNodesCache.remove(nodePair.getSecond().getStoreRef());
}
// Touch to bring into current txn
@ -2226,7 +2294,9 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
// If we are removing the sys:aspect_root, then the parent assocs cache is unreliable
if (aspectQNames.contains(ContentModel.ASPECT_ROOT))
{
Pair <Long, NodeRef> nodePair = getNodePair(nodeId);
invalidateCachesByNodeId(null, nodeId, parentAssocsCache);
allRootNodesCache.remove(nodePair.getSecond().getStoreRef());
}
// Touch to bring into current txn
@ -2563,12 +2633,12 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
QName assocQName,
String childNodeName)
{
ParentAssocsInfo parentAssocInfo = getParentAssocsCached(childNodeId);
ChildAssocEntity assoc = newChildAssocImpl(
parentNodeId, childNodeId, false, assocTypeQName, assocQName, childNodeName);
Long assocId = assoc.getId();
// update cache
ParentAssocsInfo parentAssocInfo = getParentAssocsCached(childNodeId);
parentAssocInfo = parentAssocInfo.addAssoc(assocId, assoc);
parentAssocInfo = parentAssocInfo.addAssoc(assocId, assoc, getCurrentTransactionId());
setParentAssocsCached(childNodeId, parentAssocInfo);
// Done
return assoc.getPair(qnameDAO);
@ -2584,7 +2654,7 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
// Update cache
Long childNodeId = assoc.getChildNode().getId();
ParentAssocsInfo parentAssocInfo = getParentAssocsCached(childNodeId);
parentAssocInfo = parentAssocInfo.removeAssoc(assocId);
parentAssocInfo = parentAssocInfo.removeAssoc(assocId, getCurrentTransactionId());
setParentAssocsCached(childNodeId, parentAssocInfo);
// Delete it
int count = deleteChildAssocById(assocId);
@ -2948,12 +3018,13 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
assoc.getParentNode().getNodePair(),
assoc.getChildNode().getNodePair());
}
resultsCallback.done();
}
else
{
// Decide whether we query or filter
ParentAssocsInfo parentAssocs = getParentAssocsCacheOnly(childNodeId);
if ((parentAssocs == null) || (parentAssocs.getParentAssocs().size() > PARENT_ASSOCS_CACHE_FILTER_THRESHOLD))
ParentAssocsInfo parentAssocs = getParentAssocsCached(childNodeId);
if (parentAssocs.getParentAssocs().size() > PARENT_ASSOCS_CACHE_FILTER_THRESHOLD)
{
// Query
selectParentAssocs(childNodeId, assocTypeQName, assocQName, isPrimary, resultsCallback);
@ -2973,11 +3044,70 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
assoc.getChildNode().getNodePair());
}
}
resultsCallback.done();
}
}
}
/**
* Potentially cheaper than evaluating all of a node's paths to check for child association cycles
*
* @param nodePair
* the node to check
*/
public void cycleCheck(Pair<Long, NodeRef> nodePair)
{
CycleCallBack callback = new CycleCallBack();
callback.cycleCheck(nodePair);
if (callback.toThrow != null)
{
throw callback.toThrow;
}
}
class CycleCallBack implements ChildAssocRefQueryCallback
{
final Set<ChildAssociationRef> path = new HashSet<ChildAssociationRef>(97);
CyclicChildRelationshipException toThrow;
@Override
public void done()
{
}
@Override
public boolean handle(Pair<Long, ChildAssociationRef> childAssocPair, Pair<Long, NodeRef> parentNodePair,
Pair<Long, NodeRef> childNodePair)
{
ChildAssociationRef childAssociationRef = childAssocPair.getSecond();
if (!path.add(childAssociationRef))
{
// Remember exception we want to throw and exit. If we throw within here, it will be wrapped by IBatis
toThrow = new CyclicChildRelationshipException("Child Association Cycle Detected " + path, childAssociationRef);
return false;
}
cycleCheck(childNodePair);
path.remove(childAssociationRef);
return toThrow == null;
}
@Override
public boolean preLoadNodes()
{
return false;
}
public void cycleCheck(Pair<Long, NodeRef> nodePair)
{
getChildAssocs(nodePair.getFirst(), null, null, null, null, null, this);
}
};
public List<Path> getPaths(Pair<Long, NodeRef> nodePair, boolean primaryOnly) throws InvalidNodeRefException
{
// create storage for the paths - only need 1 bucket if we are looking for the primary path
@ -3203,7 +3333,8 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
// Validate that we aren't pairing up a cached node with historic parent associations from an old
// transaction (or the other way around)
Long txnId = parentAssocsInfo.getTxnId();
if (txnId != null && !txnId.equals(child.getTransaction().getId()))
Long childTxnId = child.getTransaction().getId();
if (txnId != null && !txnId.equals(childTxnId))
{
if (logger.isDebugEnabled())
{
@ -3211,7 +3342,17 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
+ " detected loading parent associations. Cached transaction ID: "
+ child.getTransaction().getId() + ", actual transaction ID: " + txnId);
}
invalidateNodeCaches(nodeId);
if (AlfrescoTransactionSupport.getTransactionReadState() != TxnReadState.TXN_READ_WRITE
|| !getCurrentTransaction().getId().equals(childTxnId))
{
// Force a reload of the node and its parent assocs
invalidateNodeCaches(nodeId);
}
else
{
// The node is for the current transaction, so only invalidate the parent assocs
invalidateCachesByNodeId(null, nodeId, parentAssocsCache);
}
}
else
{
@ -3516,6 +3657,12 @@ public abstract class AbstractNodeDAOImpl implements NodeDAO, BatchingDAO
HashSet<Long> qnameIdsSet = new HashSet<Long>(qnameIds);
Set<QName> qnames = qnameDAO.convertIdsToQNames(qnameIdsSet);
aspectsCache.setValue(nodeId, qnames);
aspectNodeIds.remove(nodeId);
}
// Cache the absence of aspects too!
for (Long nodeId: aspectNodeIds)
{
aspectsCache.setValue(nodeId, Collections.<QName>emptySet());
}
Map<Long, Map<NodePropertyKey, NodePropertyValue>> propsByNodeId = selectNodeProperties(propertiesNodeIds);

View File

@ -113,6 +113,8 @@ public interface NodeDAO extends NodeBulkLoader
public Pair<Long, NodeRef> getRootNode(StoreRef storeRef);
public Set<NodeRef> getAllRootNodes(StoreRef storeRef);
/*
* Node
*/
@ -491,6 +493,27 @@ public interface NodeDAO extends NodeBulkLoader
ChildAssocRefQueryCallback resultsCallback);
/**
* Gets the first n child associations of a given parent node, optionally filtering on association <tt>QName</tt>
* and association type <tt>QName</tt>.
* <p/>
* This is an efficient query for node paths.
*
* @param parentNodeId the parent node ID
* @param assocTypeQName the association type qname to filter on; <tt>null</tt> for no filtering
* @param assocQName the association qname to filter on; <tt>null</tt> for no filtering
* @param maxResults the maximum number of results to return. The query will be terminated efficiently
* after that number of results
* @param preload should the child nodes be batch loaded?
* @return a list of child associations
*/
public List<ChildAssociationRef> getChildAssocs(
Long parentNodeId,
QName assocTypeQName,
QName assocQName,
final int maxResults,
boolean preload);
/**
* Get the child associations of a given parent node, optionally filtering on type <tt>QName</tt>.
*
* @param parentNodeId the parent node ID
@ -597,6 +620,14 @@ public interface NodeDAO extends NodeBulkLoader
*/
public List<Path> getPaths(Pair<Long, NodeRef> nodePair, boolean primaryOnly) throws InvalidNodeRefException;
/**
* Potentially cheaper than evaluating all of a node's paths to check for child association cycles.
*
* @param nodePair
* the node to check
*/
public void cycleCheck(Pair<Long, NodeRef> nodePair);
/*
* Transactions
*/

View File

@ -159,27 +159,27 @@ import org.apache.commons.logging.LogFactory;
return (primaryAssocId != null) ? parentAssocsById.get(primaryAssocId) : null;
}
public ParentAssocsInfo changeIsRoot(boolean isRoot)
public ParentAssocsInfo changeIsRoot(boolean isRoot, Long txnId)
{
return new ParentAssocsInfo(this.txnId, isRoot, this.isRoot, parentAssocsById, primaryAssocId);
return new ParentAssocsInfo(txnId, isRoot, this.isRoot, parentAssocsById, primaryAssocId);
}
public ParentAssocsInfo changeIsStoreRoot(boolean isStoreRoot)
public ParentAssocsInfo changeIsStoreRoot(boolean isStoreRoot, Long txnId)
{
return new ParentAssocsInfo(this.txnId, this.isRoot, isStoreRoot, parentAssocsById, primaryAssocId);
return new ParentAssocsInfo(txnId, this.isRoot, isStoreRoot, parentAssocsById, primaryAssocId);
}
public ParentAssocsInfo addAssoc(Long assocId, ChildAssocEntity parentAssoc)
public ParentAssocsInfo addAssoc(Long assocId, ChildAssocEntity parentAssoc, Long txnId)
{
Map<Long, ChildAssocEntity> parentAssocs = new HashMap<Long, ChildAssocEntity>(parentAssocsById);
parentAssocs.put(parentAssoc.getId(), parentAssoc);
return new ParentAssocsInfo(this.txnId, isRoot, isStoreRoot, parentAssocs, primaryAssocId);
return new ParentAssocsInfo(txnId, isRoot, isStoreRoot, parentAssocs, primaryAssocId);
}
public ParentAssocsInfo removeAssoc(Long assocId)
public ParentAssocsInfo removeAssoc(Long assocId, Long txnId)
{
Map<Long, ChildAssocEntity> parentAssocs = new HashMap<Long, ChildAssocEntity>(parentAssocsById);
parentAssocs.remove(assocId);
return new ParentAssocsInfo(this.txnId, isRoot, isStoreRoot, parentAssocs, primaryAssocId);
return new ParentAssocsInfo(txnId, isRoot, isStoreRoot, parentAssocs, primaryAssocId);
}
}

View File

@ -23,6 +23,7 @@ import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
@ -55,6 +56,7 @@ import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.StoreRef;
import org.alfresco.service.namespace.QName;
import org.alfresco.util.Pair;
import org.apache.ibatis.executor.result.DefaultResultContext;
import org.apache.ibatis.session.ResultContext;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;
@ -117,6 +119,7 @@ public class NodeDAOImpl extends AbstractNodeDAOImpl
private static final String SELECT_CHILD_ASSOC_BY_ID = "alfresco.node.select_ChildAssocById";
private static final String SELECT_CHILD_ASSOCS_BY_PROPERTY_VALUE = "alfresco.node.select_ChildAssocsByPropertyValue";
private static final String SELECT_CHILD_ASSOCS_OF_PARENT = "alfresco.node.select_ChildAssocsOfParent";
private static final String SELECT_CHILD_ASSOCS_OF_PARENT_LIMITED = "alfresco.node.select_ChildAssocsOfParent_Limited";
private static final String SELECT_CHILD_ASSOC_OF_PARENT_BY_NAME = "alfresco.node.select_ChildAssocOfParentByName";
private static final String SELECT_CHILD_ASSOCS_OF_PARENT_WITHOUT_PARENT_ASSOCS_OF_TYPE =
"alfresco.node.select_ChildAssocsOfParentWithoutParentAssocsOfType";
@ -1053,6 +1056,77 @@ public class NodeDAOImpl extends AbstractNodeDAOImpl
resultsCallback.done();
}
public List<ChildAssociationRef> getChildAssocs(
Long parentNodeId,
QName assocTypeQName,
QName assocQName,
final int maxResults,
boolean preload)
{
ChildAssocEntity assoc = new ChildAssocEntity();
// Parent
NodeEntity parentNode = new NodeEntity();
parentNode.setId(parentNodeId);
assoc.setParentNode(parentNode);
// Type QName
if (assocTypeQName != null)
{
if (!assoc.setTypeQNameAll(qnameDAO, assocTypeQName, false))
{
return Collections.emptyList(); // Shortcut
}
}
// QName
if (assocQName != null)
{
if (!assoc.setQNameAll(qnameDAO, assocQName, false))
{
return Collections.emptyList(); // Shortcut
}
}
final List<ChildAssociationRef> result = new LinkedList<ChildAssociationRef>();
final List<NodeRef> toLoad = new LinkedList<NodeRef>();
// We can't invoke the row handler whilst the limited query is running as it's illegal on some databases (MySQL)
List<?> entities = template.selectList(SELECT_CHILD_ASSOCS_OF_PARENT_LIMITED, assoc, new RowBounds(0,
maxResults));
ChildAssocResultHandler rowHandler = new ChildAssocResultHandler(new ChildAssocRefQueryCallback(){
@Override
public boolean handle(Pair<Long, ChildAssociationRef> childAssocPair, Pair<Long, NodeRef> parentNodePair,
Pair<Long, NodeRef> childNodePair)
{
result.add(childAssocPair.getSecond());
toLoad.add(childNodePair.getSecond());
return true;
}
@Override
public void done()
{
}
@Override
public boolean preLoadNodes()
{
return false;
}});
final DefaultResultContext resultContext = new DefaultResultContext();
for (Object entity : entities)
{
resultContext.nextResultObject(entity);
rowHandler.handleResult(resultContext);
}
if (preload && !toLoad.isEmpty())
{
cacheNodes(toLoad);
}
return result;
}
@Override
protected void selectChildAssocs(
Long parentNodeId,

View File

@ -871,10 +871,10 @@ public final class People extends BaseScopableProcessorExtension implements Init
{
ParameterCheck.mandatory("Person", person);
Object[] parents = null;
Set<String> authorities = this.authorityService.getContainingAuthorities(
Set<String> authorities = this.authorityService.getContainingAuthoritiesInZone(
AuthorityType.GROUP,
(String)person.getProperties().get(ContentModel.PROP_USERNAME),
false);
AuthorityService.ZONE_APP_DEFAULT, null, 1000);
parents = new Object[authorities.size()];
int i = 0;
for (String authority : authorities)

View File

@ -66,10 +66,10 @@ import org.alfresco.service.cmr.repository.InvalidChildAssociationRefException;
import org.alfresco.service.cmr.repository.InvalidNodeRefException;
import org.alfresco.service.cmr.repository.InvalidStoreRefException;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeRef.Status;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.cmr.repository.Path;
import org.alfresco.service.cmr.repository.StoreRef;
import org.alfresco.service.cmr.repository.NodeRef.Status;
import org.alfresco.service.cmr.repository.datatype.DefaultTypeConverter;
import org.alfresco.service.namespace.QName;
import org.alfresco.service.namespace.QNamePattern;
@ -282,6 +282,12 @@ public class DbNodeServiceImpl extends AbstractNodeServiceImpl
return rootNodePair.getSecond();
}
@Override
public Set<NodeRef> getAllRootNodes(StoreRef storeRef)
{
return nodeDAO.getAllRootNodes(storeRef);
}
/**
* @see #createNode(NodeRef, QName, QName, QName, Map)
*/
@ -1124,6 +1130,9 @@ public class DbNodeServiceImpl extends AbstractNodeServiceImpl
propagateTimeStamps(childParentAssocRef);
invokeOnDeleteNode(childParentAssocRef, childNodeType, childNodeQNames, false);
// Index
nodeIndexer.indexDeleteNode(childParentAssocRef);
// lose interest in tracking this node ref
untrackNewNodeRef(childNodeRef);
}
@ -1168,8 +1177,7 @@ public class DbNodeServiceImpl extends AbstractNodeServiceImpl
}
// check that the child addition of the child has not created a cyclic relationship
// this functionality is provided for free in getPath
getPaths(childRef, false);
nodeDAO.cycleCheck(childNodePair);
// Invoke policy behaviours
for (ChildAssociationRef childAssocRef : childAssociationRefs)
@ -1686,6 +1694,22 @@ public class DbNodeServiceImpl extends AbstractNodeServiceImpl
return orderedList;
}
/**
* Fetches the first n child associations in an efficient manner
*/
public List<ChildAssociationRef> getChildAssocs(
NodeRef nodeRef,
final QName typeQName,
final QName qname,
final int maxResults,
final boolean preload)
{
// Get the node
Pair<Long, NodeRef> nodePair = getNodePairNotNull(nodeRef);
// Get the assocs pointing to it
return nodeDAO.getChildAssocs(nodePair.getFirst(), typeQName, qname, maxResults, preload);
}
public List<ChildAssociationRef> getChildAssocs(NodeRef nodeRef, Set<QName> childNodeTypeQNames)
{
// Get the node

View File

@ -28,6 +28,7 @@ import java.util.Map;
import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.model.ContentModel;
import org.alfresco.repo.content.MimetypeMap;
import org.alfresco.repo.rule.RuleModel;
import org.alfresco.repo.security.authentication.AuthenticationContext;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork;
@ -255,6 +256,12 @@ public class PreferenceServiceImpl implements PreferenceService
contentWriter.setEncoding("UTF-8");
contentWriter.setMimetype(MimetypeMap.MIMETYPE_TEXT_PLAIN);
contentWriter.putContent(jsonPrefs.toString());
// Lets stop rule inheritance from trying to kick in - we may be in many groups
if (!PreferenceServiceImpl.this.nodeService.hasAspect(personNodeRef, RuleModel.ASPECT_IGNORE_INHERITED_RULES))
{
PreferenceServiceImpl.this.nodeService.addAspect(personNodeRef, RuleModel.ASPECT_IGNORE_INHERITED_RULES, null);
}
}
catch (JSONException exception)
{

View File

@ -76,6 +76,7 @@ import org.alfresco.service.transaction.TransactionService;
import org.alfresco.util.ApplicationContextHelper;
import org.alfresco.util.GUID;
import org.alfresco.util.Pair;
import org.apache.tools.ant.taskdefs.Retry;
import org.springframework.context.ConfigurableApplicationContext;
/**

View File

@ -2270,6 +2270,8 @@ public class ADMLuceneTest extends TestCase implements DictionaryListener
results.close();
nodeService.addAspect(n14, aspectWithChildren, null);
nodeService.createNode(n14, QName.createQName(TEST_NAMESPACE, "unused"), QName.createQName(TEST_NAMESPACE,
"unused"), testSuperType, getOrderProperties());
testTX.commit();
testTX = transactionService.getUserTransaction();

View File

@ -30,9 +30,12 @@ import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Set;
import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.model.ContentModel;
@ -56,6 +59,7 @@ import org.alfresco.repo.search.impl.lucene.fts.FullTextSearchIndexer;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport;
import org.alfresco.repo.transaction.RetryingTransactionHelper.RetryingTransactionCallback;
import org.alfresco.service.cmr.avm.AVMException;
import org.alfresco.service.cmr.avm.AVMNodeDescriptor;
import org.alfresco.service.cmr.avm.AVMService;
@ -70,6 +74,7 @@ import org.alfresco.service.cmr.repository.ContentIOException;
import org.alfresco.service.cmr.repository.ContentReader;
import org.alfresco.service.cmr.repository.ContentService;
import org.alfresco.service.cmr.repository.ContentWriter;
import org.alfresco.service.cmr.repository.InvalidNodeRefException;
import org.alfresco.service.cmr.repository.MLText;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.StoreRef;
@ -87,6 +92,7 @@ import org.apache.lucene.analysis.Token;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.apache.lucene.index.TermEnum;
@ -107,6 +113,8 @@ public class AVMLuceneIndexerImpl extends AbstractLuceneIndexerImpl<String> impl
MAIN, DELTA;
}
protected enum IndexDeleteMode {REINDEX, DELETE};
private static String SNAP_SHOT_ID = "SnapShot";
static Log s_logger = LogFactory.getLog(AVMLuceneIndexerImpl.class);
@ -130,6 +138,11 @@ public class AVMLuceneIndexerImpl extends AbstractLuceneIndexerImpl<String> impl
private int endVersion = -1;
/**
* A list of deletions associated with the changes to nodes in the current flush
*/
protected Set<String> deletionsSinceFlush = new HashSet<String>();
private long indexedDocCount = 0;
/**
@ -170,6 +183,16 @@ public class AVMLuceneIndexerImpl extends AbstractLuceneIndexerImpl<String> impl
this.contentService = contentService;
}
/**
* Are we deleting leaves only (not meta data)
*
* @return - deleting only nodes.
*/
public boolean getDeleteOnlyNodes()
{
return indexUpdateStatus == IndexUpdateStatus.ASYNCHRONOUS;
}
/**
* Generate an indexer
*
@ -430,7 +453,92 @@ public class AVMLuceneIndexerImpl extends AbstractLuceneIndexerImpl<String> impl
}
@Override
protected Set<String> deleteImpl(String nodeRef, IndexDeleteMode mode, boolean cascade, IndexReader mainReader)
throws LuceneIndexException, IOException
{
Set<String> leafrefs = new LinkedHashSet<String>();
IndexReader deltaReader = null;
// startTimer();
getDeltaReader();
// outputTime("Delete "+nodeRef+" size = "+getDeltaWriter().docCount());
Set<String> refs = new LinkedHashSet<String>();
Set<String> containerRefs = new LinkedHashSet<String>();
Set<String> temp = null;
switch(mode)
{
case REINDEX:
temp = deleteContainerAndBelow(nodeRef, getDeltaReader(), true, cascade);
closeDeltaReader();
refs.addAll(temp);
deletions.addAll(temp);
// should not be included as a delete for optimisation in deletionsSinceFlush
// should be optimised out
// defensive against any issue with optimisation of events
// the nodes have not been deleted and would require a real delete
temp = deleteContainerAndBelow(nodeRef, mainReader, false, cascade);
refs.addAll(temp);
deletions.addAll(temp);
// should not be included as a delete for optimisation
// should be optimised out
// defensive against any issue with optimisation of events
// the nodes have not been deleted and would require a real delete
break;
case DELETE:
// if already deleted don't do it again ...
if(deletionsSinceFlush.contains(nodeRef))
{
// nothing to do
break;
}
else
{
// Delete all and reindex as they could be secondary links we have deleted and they need to be updated.
// Most will skip any indexing as they will really have gone.
temp = deleteContainerAndBelow(nodeRef, getDeltaReader(), true, cascade);
closeDeltaReader();
containerRefs.addAll(temp);
refs.addAll(temp);
temp = deleteContainerAndBelow(nodeRef, mainReader, false, cascade);
containerRefs.addAll(temp);
temp = deletePrimary(containerRefs, getDeltaReader(), true);
leafrefs.addAll(temp);
closeDeltaReader();
temp = deletePrimary(containerRefs, mainReader, false);
leafrefs.addAll(temp);
// May not have to delete references
temp = deleteReference(containerRefs, getDeltaReader(), true);
leafrefs.addAll(temp);
closeDeltaReader();
temp = deleteReference(containerRefs, mainReader, false);
leafrefs.addAll(temp);
refs.addAll(containerRefs);
refs.addAll(leafrefs);
deletions.addAll(refs);
// do not delete anything we have deleted before in this flush
// probably OK to cache for the TX as a whole but done per flush => See ALF-8007
deletionsSinceFlush.addAll(refs);
// make sure leaves are also removed from the delta before reindexing
deltaReader = getDeltaReader();
for(String id : leafrefs)
{
deltaReader.deleteDocuments(new Term("ID", id));
}
closeDeltaReader();
break;
}
}
return refs;
}
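The two modes above differ in how they use deletionsSinceFlush: REINDEX only records the refs for re-indexing, while DELETE also remembers them so that a repeat delete of the same node within one flush becomes a no-op. A minimal caller-side sketch, mirroring what flushPending does below (the AVM path and the mainReader variable are assumptions for illustration):
Set<String> forIndex = new LinkedHashSet<String>();
// reindex the subtree rooted at the container; returns every ref it touched
Set<String> set = deleteImpl("main:/sites/example", IndexDeleteMode.REINDEX, true, mainReader);
forIndex.removeAll(set); // ensure at most one pending index action per ref
forIndex.addAll(set);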
protected List<Document> createDocuments(String stringNodeRef, FTSStatus ftsStatus, boolean indexAllProperties, boolean includeDirectoryDocuments)
{
List<Document> docs = new ArrayList<Document>();
@ -636,6 +744,161 @@ public class AVMLuceneIndexerImpl extends AbstractLuceneIndexerImpl<String> impl
return docs;
}
protected List<Document> readDocuments(final String stringNodeRef, final FTSStatus ftsStatus,
final boolean indexAllProperties, final boolean includeDirectoryDocuments)
{
return doInReadthroughTransaction(new RetryingTransactionCallback<List<Document>>()
{
@Override
public List<Document> execute() throws Throwable
{
return createDocuments(stringNodeRef, ftsStatus, indexAllProperties,
includeDirectoryDocuments);
}
});
}
protected void indexImpl(String nodeRef, boolean isNew) throws LuceneIndexException, IOException
{
IndexWriter writer = getDeltaWriter();
// avoid attempting to index nodes that don't exist
try
{
List<Document> docs = readDocuments(nodeRef, isNew ? FTSStatus.New : FTSStatus.Dirty, false, true);
for (Document doc : docs)
{
try
{
writer.addDocument(doc);
}
catch (IOException e)
{
throw new LuceneIndexException("Failed to add document to index", e);
}
}
}
catch (InvalidNodeRefException e)
{
// The node does not exist
return;
}
}
void indexImpl(Set<String> refs, boolean isNew) throws LuceneIndexException, IOException
{
for (String ref : refs)
{
indexImpl(ref, isNew);
}
}
/**
* @throws LuceneIndexException
*/
public void flushPending() throws LuceneIndexException
{
IndexReader mainReader = null;
try
{
saveDelta();
// Make sure the in-flush deletion list is clear at the start
deletionsSinceFlush.clear();
if (commandList.isEmpty())
{
return;
}
mainReader = getReader();
Set<String> forIndex = new LinkedHashSet<String>();
for (Command<String> command : commandList)
{
if (command.action == Action.INDEX)
{
// Indexing just requires the node to be added to the list
forIndex.add(command.ref.toString());
}
else if (command.action == Action.REINDEX)
{
// Reindex is a delete and then an index
Set<String> set = deleteImpl(command.ref.toString(), IndexDeleteMode.REINDEX, false, mainReader);
// Deleting any pending index actions
// - make sure we only do at most one index
forIndex.removeAll(set);
// Add the nodes for index
forIndex.addAll(set);
}
else if (command.action == Action.CASCADEREINDEX)
{
// Reindex is a delete and then an index
Set<String> set = deleteImpl(command.ref.toString(), IndexDeleteMode.REINDEX, true, mainReader);
// Deleting any pending index actions
// - make sure we only do at most one index
forIndex.removeAll(set);
// Add the nodes for index
forIndex.addAll(set);
}
else if (command.action == Action.DELETE)
{
// Delete the nodes
Set<String> set = deleteImpl(command.ref.toString(), IndexDeleteMode.DELETE, true, mainReader);
// Remove any pending indexes
forIndex.removeAll(set);
// Add the leaf nodes for reindex
forIndex.addAll(set);
}
}
commandList.clear();
indexImpl(forIndex, false);
docs = getDeltaWriter().docCount();
deletionsSinceFlush.clear();
}
catch (IOException e)
{
// If anything goes wrong we try to roll back
throw new LuceneIndexException("Failed to flush index", e);
}
finally
{
if (mainReader != null)
{
try
{
mainReader.close();
}
catch (IOException e)
{
throw new LuceneIndexException("Filed to close main reader", e);
}
}
// Make sure deletes are sent
try
{
closeDeltaReader();
}
catch (IOException e)
{
}
// Make sure writes and updates are sent.
try
{
closeDeltaWriter();
}
catch (IOException e)
{
}
}
}
private String[] splitPath(String path)
{
String[] pathParts = path.split(":");
@ -1247,12 +1510,12 @@ public class AVMLuceneIndexerImpl extends AbstractLuceneIndexerImpl<String> impl
{
if (indexUpdateStatus == IndexUpdateStatus.ASYNCHRONOUS)
{
setInfo(docs, getDeletions(), false);
setInfo(docs, getDeletions(), getContainerDeletions(), false);
// FTS does not trigger indexing request
}
else
{
setInfo(docs, getDeletions(), false);
setInfo(docs, getDeletions(), getContainerDeletions(), false);
// TODO: only register if required
fullTextSearchIndexer.requiresIndex(store);
}
@ -1261,7 +1524,7 @@ public class AVMLuceneIndexerImpl extends AbstractLuceneIndexerImpl<String> impl
callBack.indexCompleted(store, remainingCount, null);
}
setInfo(docs, deletions, false);
setInfo(docs, deletions, containerDeletions, false);
}
@Override
@ -2148,4 +2411,74 @@ public class AVMLuceneIndexerImpl extends AbstractLuceneIndexerImpl<String> impl
deleteIndex();
}
/**
* Delete all entries from the index.
*/
public void deleteAll()
{
deleteAll(null);
}
/**
* Delete all index entries which do not start with the given prefix
*
* @param prefix
*/
public void deleteAll(String prefix)
{
IndexReader mainReader = null;
try
{
mainReader = getReader();
for (int doc = 0; doc < mainReader.maxDoc(); doc++)
{
if (!mainReader.isDeleted(doc))
{
Document document = mainReader.document(doc);
String[] ids = document.getValues("ID");
if ((prefix == null) || nonStartwWith(ids, prefix))
{
deletions.add(ids[ids.length - 1]);
// should be included in the deletion cache if we move back to caching at the TX level and not the flush level
// Entries here will currently be ignored as the list is cleared at the start and end of a flush.
deletionsSinceFlush.add(ids[ids.length - 1]);
}
}
}
}
catch (IOException e)
{
// If anything goes wrong we try to roll back
throw new LuceneIndexException("Failed to delete all entries from the index", e);
}
finally
{
if (mainReader != null)
{
try
{
mainReader.close();
}
catch (IOException e)
{
throw new LuceneIndexException("Filed to close main reader", e);
}
}
}
}
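A hedged usage sketch of the prefix form above: documents whose ID values all fail to start with the prefix are deleted, so passing a store prefix keeps only that store's entries (the "main:/" prefix is an assumption for illustration):
// drop every entry that does not belong to the main AVM store
deleteAll("main:/");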
private boolean nonStartwWith(String[] values, String prefix)
{
for (String value : values)
{
if (value.startsWith(prefix))
{
return false;
}
}
return true;
}
}

View File

@ -175,7 +175,8 @@ public abstract class AbstractLuceneBase
//
luceneIndexer.flushPending();
return new ClosingIndexSearcher(indexInfo.getMainIndexReferenceCountingReadOnlyIndexReader(deltaId,
luceneIndexer.getDeletions(), luceneIndexer.getDeleteOnlyNodes()));
luceneIndexer.getDeletions(), luceneIndexer.getContainerDeletions(), luceneIndexer
.getDeleteOnlyNodes()));
}
}
@ -252,9 +253,9 @@ public abstract class AbstractLuceneBase
closeDeltaWriter();
}
protected void setInfo(long docs, Set<String> deletions, boolean deleteNodesOnly) throws IOException
protected void setInfo(long docs, Set<String> deletions, Set<String> containerDeletions, boolean deleteNodesOnly) throws IOException
{
indexInfo.setPreparedState(deltaId, deletions, docs, deleteNodesOnly);
indexInfo.setPreparedState(deltaId, deletions, containerDeletions, docs, deleteNodesOnly);
}
protected void setStatus(TransactionStatus status) throws IOException

View File

@ -22,7 +22,6 @@ import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.ListIterator;
@ -31,6 +30,7 @@ import java.util.Set;
import javax.transaction.Status;
import javax.transaction.xa.XAResource;
import org.alfresco.repo.search.Indexer;
import org.alfresco.repo.search.IndexerException;
import org.alfresco.repo.search.impl.lucene.index.TransactionStatus;
import org.alfresco.repo.transaction.RetryingTransactionHelper.RetryingTransactionCallback;
@ -39,8 +39,8 @@ import org.alfresco.service.transaction.TransactionService;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.TermDocs;
import org.springframework.dao.ConcurrencyFailureException;
@ -52,7 +52,7 @@ import org.springframework.dao.ConcurrencyFailureException;
* @param <T> -
* the type used to generate the key in the index file
*/
public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase implements Indexer
{
/**
* Enum for indexing actions against a node
@ -71,7 +71,6 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
* A delete
*/
DELETE,
MOVE,
/**
* A cascaded reindex (ensures the directory structure is OK)
*/
@ -94,8 +93,6 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
ASYNCHRONOUS;
}
protected enum IndexDeleteMode {REINDEX, DELETE, MOVE};
protected enum FTSStatus {New, Dirty, Clean};
protected long docs;
@ -284,6 +281,65 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
return refs;
}
protected boolean locateContainer(String nodeRef, IndexReader reader)
{
boolean found = false;
try
{
TermDocs td = reader.termDocs(new Term("ID", nodeRef));
while (td.next())
{
int doc = td.doc();
Document document = reader.document(doc);
if (document.getField("ISCONTAINER") != null)
{
found = true;
break;
}
}
td.close();
}
catch (IOException e)
{
throw new LuceneIndexException("Failed to delete container and below for " + nodeRef, e);
}
return found;
}
protected boolean deleteLeafOnly(String nodeRef, IndexReader reader, boolean delete) throws LuceneIndexException
{
boolean found = false;
try
{
TermDocs td = reader.termDocs(new Term("ID", nodeRef));
while (td.next())
{
int doc = td.doc();
Document document = reader.document(doc);
// Exclude all containers except the root (which is also a node!)
Field path = document.getField("PATH");
if (path == null || path.stringValue().length() == 0)
{
found = true;
if (delete)
{
reader.deleteDocument(doc);
}
else
{
break;
}
}
}
td.close();
}
catch (IOException e)
{
throw new LuceneIndexException("Failed to delete container and below for " + nodeRef, e);
}
return found;
}
/** the maximum transformation time to allow atomically, defaulting to 20ms */
protected long maxAtomicTransformationTime = 20;
@ -294,9 +350,9 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
protected Set<String> deletions = new LinkedHashSet<String>();
/**
* A list of deletions associated with the changes to nodes in the current flush
* A list of cascading container deletions we have made - at merge these deletions need to be made against the main index.
*/
protected Set<String> deletionsSinceFlush = new HashSet<String>();
protected Set<String> containerDeletions = new LinkedHashSet<String>();
/**
* List of pending indexing commands.
@ -629,24 +685,19 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
protected abstract void doSetRollbackOnly() throws IOException;
protected abstract List<Document> createDocuments(String stringNodeRef, FTSStatus ftsStatus, boolean indexAllProperties,
boolean includeDirectoryDocuments);
protected List<Document> readDocuments(final String stringNodeRef, final FTSStatus ftsStatus,
final boolean indexAllProperties, final boolean includeDirectoryDocuments)
protected <T2> T2 doInReadthroughTransaction(final RetryingTransactionCallback<T2> callback)
{
if (isReadThrough)
{
return transactionService.getRetryingTransactionHelper().doInTransaction(
new RetryingTransactionCallback<List<Document>>()
new RetryingTransactionCallback<T2>()
{
@Override
public List<Document> execute() throws Throwable
public T2 execute() throws Throwable
{
try
{
return createDocuments(stringNodeRef, ftsStatus, indexAllProperties,
includeDirectoryDocuments);
return callback.execute();
}
catch (InvalidNodeRefException e)
{
@ -660,164 +711,23 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
}
else
{
return createDocuments(stringNodeRef, ftsStatus, indexAllProperties, includeDirectoryDocuments);
}
}
protected Set<String> deleteImpl(String nodeRef, IndexDeleteMode mode, boolean cascade, IndexReader mainReader)
throws LuceneIndexException, IOException
{
Set<String> leafrefs = new LinkedHashSet<String>();
IndexReader deltaReader = null;
// startTimer();
getDeltaReader();
// outputTime("Delete "+nodeRef+" size = "+getDeltaWriter().docCount());
Set<String> refs = new LinkedHashSet<String>();
Set<String> containerRefs = new LinkedHashSet<String>();
Set<String> temp = null;
switch(mode)
{
case MOVE:
temp = deleteContainerAndBelow(nodeRef, getDeltaReader(), true, cascade);
closeDeltaReader();
containerRefs.addAll(temp);
temp = deleteContainerAndBelow(nodeRef, mainReader, false, cascade);
containerRefs.addAll(temp);
temp = deletePrimary(containerRefs, getDeltaReader(), true);
leafrefs.addAll(temp);
closeDeltaReader();
// May not have to delete references
temp = deleteReference(containerRefs, getDeltaReader(), true);
leafrefs.addAll(temp);
closeDeltaReader();
refs.addAll(containerRefs);
refs.addAll(leafrefs);
deletions.addAll(refs);
// should not be included as a delete for optimisation in deletionsSinceFlush
// should be optimised out
// defensive against any issue with optimisation of events
// the node has only moved - it still requires a real delete
// make sure leaves are also removed from the delta before reindexing
deltaReader = getDeltaReader();
for(String id : leafrefs)
try
{
deltaReader.deleteDocuments(new Term("ID", id));
return callback.execute();
}
closeDeltaReader();
break;
case REINDEX:
temp = deleteContainerAndBelow(nodeRef, getDeltaReader(), true, cascade);
closeDeltaReader();
refs.addAll(temp);
deletions.addAll(temp);
// should not be included as a delete for optimisation in deletionsSinceFlush
// should be optimised out
// defensive against any issue with optimisation of events
// the nodes have not been deleted and would require a real delete
temp = deleteContainerAndBelow(nodeRef, mainReader, false, cascade);
refs.addAll(temp);
deletions.addAll(temp);
// should not be included as a delete for optimisation
// should be optimised out
// defensive against any issue with optimisation of events
// the nodes have not been deleted and would require a real delete
break;
case DELETE:
// if already deleted don't do it again ...
if(deletionsSinceFlush.contains(nodeRef))
catch (RuntimeException e)
{
// nothing to do
break;
throw e;
}
else
catch (Error e)
{
// Delete all and reindex as they could be secondary links we have deleted and they need to be updated.
// Most will skip any indexing as they will really have gone.
temp = deleteContainerAndBelow(nodeRef, getDeltaReader(), true, cascade);
closeDeltaReader();
containerRefs.addAll(temp);
refs.addAll(temp);
temp = deleteContainerAndBelow(nodeRef, mainReader, false, cascade);
containerRefs.addAll(temp);
temp = deletePrimary(containerRefs, getDeltaReader(), true);
leafrefs.addAll(temp);
closeDeltaReader();
temp = deletePrimary(containerRefs, mainReader, false);
leafrefs.addAll(temp);
// May not have to delete references
temp = deleteReference(containerRefs, getDeltaReader(), true);
leafrefs.addAll(temp);
closeDeltaReader();
temp = deleteReference(containerRefs, mainReader, false);
leafrefs.addAll(temp);
refs.addAll(containerRefs);
refs.addAll(leafrefs);
deletions.addAll(refs);
// do not delete anything we have deleted before in this flush
// probably OK to cache for the TX as a whole but done per flush => See ALF-8007
deletionsSinceFlush.addAll(refs);
// make sure leaves are also removed from the delta before reindexing
deltaReader = getDeltaReader();
for(String id : leafrefs)
{
deltaReader.deleteDocuments(new Term("ID", id));
}
closeDeltaReader();
break;
throw e;
}
}
return refs;
}
protected void indexImpl(String nodeRef, boolean isNew) throws LuceneIndexException, IOException
{
IndexWriter writer = getDeltaWriter();
// avoid attempting to index nodes that don't exist
try
{
List<Document> docs = readDocuments(nodeRef, isNew ? FTSStatus.New : FTSStatus.Dirty, false, true);
for (Document doc : docs)
catch (Throwable e)
{
try
{
writer.addDocument(doc);
}
catch (IOException e)
{
throw new LuceneIndexException("Failed to add document to index", e);
}
throw new RuntimeException(e);
}
}
catch (InvalidNodeRefException e)
{
// The node does not exist
return;
}
}
void indexImpl(Set<String> refs, boolean isNew) throws LuceneIndexException, IOException
{
for (String ref : refs)
{
indexImpl(ref, isNew);
}
}
protected void index(T ref) throws LuceneIndexException
@ -835,11 +745,6 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
addCommand(new Command<T>(ref, Action.DELETE));
}
protected void move(T ref) throws LuceneIndexException
{
addCommand(new Command<T>(ref, Action.MOVE));
}
private void addCommand(Command<T> command)
{
if (commandList.size() > 0)
@ -861,22 +766,7 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
private void purgeCommandList(Command<T> command)
{
if (command.action == Action.DELETE)
{
removeFromCommandList(command, false);
}
else if (command.action == Action.REINDEX)
{
removeFromCommandList(command, true);
}
else if (command.action == Action.INDEX)
{
removeFromCommandList(command, true);
}
else if (command.action == Action.CASCADEREINDEX)
{
removeFromCommandList(command, true);
}
removeFromCommandList(command, command.action != Action.DELETE);
}
private void removeFromCommandList(Command<T> command, boolean matchExact)
@ -912,128 +802,6 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
}
}
/**
* @throws LuceneIndexException
*/
public void flushPending() throws LuceneIndexException
{
IndexReader mainReader = null;
try
{
saveDelta();
// Make sure the in-flush deletion list is clear at the start
deletionsSinceFlush.clear();
if (commandList.isEmpty())
{
return;
}
mainReader = getReader();
Set<String> forIndex = new LinkedHashSet<String>();
for (Command<T> command : commandList)
{
if (command.action == Action.INDEX)
{
// Indexing just requires the node to be added to the list
forIndex.add(command.ref.toString());
}
else if (command.action == Action.REINDEX)
{
// Reindex is a delete and then an index
Set<String> set = deleteImpl(command.ref.toString(), IndexDeleteMode.REINDEX, false, mainReader);
// Deleting any pending index actions
// - make sure we only do at most one index
forIndex.removeAll(set);
// Add the nodes for index
forIndex.addAll(set);
}
else if (command.action == Action.CASCADEREINDEX)
{
// Reindex is a delete and then an index
Set<String> set = deleteImpl(command.ref.toString(), IndexDeleteMode.REINDEX, true, mainReader);
// Deleting any pending index actions
// - make sure we only do at most one index
forIndex.removeAll(set);
// Add the nodes for index
forIndex.addAll(set);
}
else if (command.action == Action.DELETE)
{
// Delete the nodes
Set<String> set = deleteImpl(command.ref.toString(), IndexDeleteMode.DELETE, true, mainReader);
// Remove any pending indexes
forIndex.removeAll(set);
// Add the leaf nodes for reindex
forIndex.addAll(set);
}
else if (command.action == Action.MOVE)
{
// Delete the nodes
Set<String> set = deleteImpl(command.ref.toString(), IndexDeleteMode.MOVE, true, mainReader);
// Remove any pending indexes
forIndex.removeAll(set);
// Add the leaf nodes for reindex
forIndex.addAll(set);
}
}
commandList.clear();
indexImpl(forIndex, false);
docs = getDeltaWriter().docCount();
deletionsSinceFlush.clear();
}
catch (IOException e)
{
// If anything goes wrong we try to roll back
throw new LuceneIndexException("Failed to flush index", e);
}
finally
{
if (mainReader != null)
{
try
{
mainReader.close();
}
catch (IOException e)
{
throw new LuceneIndexException("Filed to close main reader", e);
}
}
// Make sure deletes are sent
try
{
closeDeltaReader();
}
catch (IOException e)
{
}
// Make sure writes and updates are sent.
try
{
closeDeltaWriter();
}
catch (IOException e)
{
}
}
}
/**
* Are we deleting leaves only (not meta data)
*
* @return - deleting only nodes.
*/
public boolean getDeleteOnlyNodes()
{
return indexUpdateStatus == IndexUpdateStatus.ASYNCHRONOUS;
}
/**
* Get the deletions
*
@ -1045,72 +813,12 @@ public abstract class AbstractLuceneIndexerImpl<T> extends AbstractLuceneBase
}
/**
* Delete all entries from the index.
*/
public void deleteAll()
{
deleteAll(null);
}
/**
* Delete all index entries which do not start with the given prefix
* Get the container deletions
*
* @param prefix
* @return - the ids to delete
*/
public void deleteAll(String prefix)
public Set<String> getContainerDeletions()
{
IndexReader mainReader = null;
try
{
mainReader = getReader();
for (int doc = 0; doc < mainReader.maxDoc(); doc++)
{
if (!mainReader.isDeleted(doc))
{
Document document = mainReader.document(doc);
String[] ids = document.getValues("ID");
if ((prefix == null) || nonStartwWith(ids, prefix))
{
deletions.add(ids[ids.length - 1]);
// should be included in the deletion cache if we move back to caching at the TX level and not the flush level
// Entries here will currently be ignored as the list is cleared at the start and end of a flush.
deletionsSinceFlush.add(ids[ids.length - 1]);
}
}
}
}
catch (IOException e)
{
// If anything goes wrong we try to roll back
throw new LuceneIndexException("Failed to delete all entries from the index", e);
}
finally
{
if (mainReader != null)
{
try
{
mainReader.close();
}
catch (IOException e)
{
throw new LuceneIndexException("Filed to close main reader", e);
}
}
}
return Collections.unmodifiableSet(containerDeletions);
}
private boolean nonStartwWith(String[] values, String prefix)
{
for (String value : values)
{
if (value.startsWith(prefix))
{
return false;
}
}
return true;
}
}

View File

@ -27,6 +27,7 @@ import org.alfresco.error.AlfrescoRuntimeException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.FilterIndexReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
@ -52,6 +53,7 @@ public class FilterIndexReaderByStringId extends FilterIndexReader
private OpenBitSet deletedDocuments;
private final Set<String> deletions;
private final Set<String> containerDeletions;
private final boolean deleteNodesOnly;
private final ReadWriteLock lock = new ReentrantReadWriteLock();
@ -65,12 +67,13 @@ public class FilterIndexReaderByStringId extends FilterIndexReader
* @param deletions
* @param deleteNodesOnly
*/
public FilterIndexReaderByStringId(String id, IndexReader reader, Set<String> deletions, boolean deleteNodesOnly)
public FilterIndexReaderByStringId(String id, IndexReader reader, Set<String> deletions, Set<String> containerDeletions, boolean deleteNodesOnly)
{
super(reader);
reader.incRef();
this.id = id;
this.deletions = deletions;
this.containerDeletions = containerDeletions;
this.deleteNodesOnly = deleteNodesOnly;
if (s_logger.isDebugEnabled())
@ -103,9 +106,10 @@ public class FilterIndexReaderByStringId extends FilterIndexReader
}
deletedDocuments = new OpenBitSet(in.maxDoc());
if (!deleteNodesOnly)
Searcher searcher = new IndexSearcher(in);
for (String stringRef : deletions)
{
for (String stringRef : deletions)
if (!deleteNodesOnly || containerDeletions.contains(stringRef))
{
TermDocs td = in.termDocs(new Term("ID", stringRef));
while (td.next())
@ -114,12 +118,7 @@ public class FilterIndexReaderByStringId extends FilterIndexReader
}
td.close();
}
}
else
{
Searcher searcher = new IndexSearcher(in);
for (String stringRef : deletions)
else
{
TermQuery query = new TermQuery(new Term("ID", stringRef));
Hits hits = searcher.search(query);
@ -128,7 +127,9 @@ public class FilterIndexReaderByStringId extends FilterIndexReader
for (int i = 0; i < hits.length(); i++)
{
Document doc = hits.doc(i);
if (doc.getField("ISCONTAINER") == null)
// Exclude all containers except the root (which is also a node!)
Field path = doc.getField("PATH");
if (path == null || path.stringValue().length() == 0)
{
deletedDocuments.set(hits.id(i));
// There should only be one thing to delete
@ -137,7 +138,17 @@ public class FilterIndexReaderByStringId extends FilterIndexReader
}
}
}
// searcher does not need to be closed, the reader is live
}
// searcher does not need to be closed, the reader is live
for (String stringRef : containerDeletions)
{
TermDocs td = in.termDocs(new Term("ANCESTOR", stringRef));
while (td.next())
{
deletedDocuments.set(td.doc());
}
td.close();
}
return deletedDocuments;
}
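The ANCESTOR loop above is what makes container deletion cheap: each document indexes the IDs of its ancestor containers, so a single term lookup masks a whole subtree without enumerating its paths. A stand-alone sketch of the same masking step, assuming the ANCESTOR field convention used above (countMasked is a hypothetical helper):
static int countMasked(IndexReader in, String containerId) throws IOException
{
int masked = 0;
TermDocs td = in.termDocs(new Term("ANCESTOR", containerId));
while (td.next())
{
masked++; // each hit is one subtree document hidden from searches
}
td.close();
return masked;
}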

View File

@ -31,6 +31,7 @@ public interface LuceneIndexer extends Indexer, TransactionSynchronisationAwareI
{
public String getDeltaId();
public Set<String> getDeletions();
public Set<String> getContainerDeletions();
public boolean getDeleteOnlyNodes();
public <R> R doReadOnly(IndexInfo.LockWork <R> lockWork);
}

View File

@ -72,6 +72,7 @@ import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.FilterIndexReader;
import org.apache.lucene.index.IndexReader;
@ -280,6 +281,11 @@ public class IndexInfo implements IndexMonitor
*/
private static String INDEX_INFO_DELETIONS = "IndexInfoDeletions";
/**
* The default name for the index container deletions file
*/
private static String INDEX_INFO_CONTAINER_DELETIONS = "IndexInfoContainerDeletions";
/**
* What to look for to detect the previous index implementation.
*/
@ -1188,6 +1194,18 @@ public class IndexInfo implements IndexMonitor
* @throws IOException
*/
public Set<String> getDeletions(String id) throws IOException
{
return getDeletions(id, INDEX_INFO_DELETIONS);
}
/**
* Get the deletions for a given index (there is no check whether they should be applied; that is up to the calling layer)
*
* @param id
* @return
* @throws IOException
*/
private Set<String> getDeletions(String id, String fileName) throws IOException
{
if (id == null)
{
@ -1196,7 +1214,7 @@ public class IndexInfo implements IndexMonitor
// Check state
Set<String> deletions = new HashSet<String>();
File location = new File(indexDirectory, id).getCanonicalFile();
File file = new File(location, INDEX_INFO_DELETIONS).getCanonicalFile();
File file = new File(location, fileName).getCanonicalFile();
if (!file.exists())
{
if (s_logger.isDebugEnabled())
@ -1234,32 +1252,22 @@ public class IndexInfo implements IndexMonitor
* should deletions only apply to nodes (i.e. not to containers)
* @throws IOException
*/
public void setPreparedState(String id, Set<String> toDelete, long documents, boolean deleteNodesOnly) throws IOException
public void setPreparedState(String id, Set<String> toDelete, Set<String> containersToDelete, long documents, boolean deleteNodesOnly) throws IOException
{
if (id == null)
{
throw new IndexerException("\"null\" is not a valid identifier for a transaction");
}
// Check state
if (toDelete.size() > 0)
int toDeleteSize = toDelete.size();
int containersToDeleteSize = containersToDelete.size();
if (toDeleteSize > 0)
{
File location = new File(indexDirectory, id).getCanonicalFile();
if (!location.exists())
{
if (!location.mkdirs())
{
throw new IndexerException("Failed to make index directory " + location);
}
}
// Write deletions
DataOutputStream os = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(new File(location, INDEX_INFO_DELETIONS).getCanonicalFile())));
os.writeInt(toDelete.size());
for (String ref : toDelete)
{
os.writeUTF(ref);
}
os.flush();
os.close();
persistDeletions(id, toDelete, INDEX_INFO_DELETIONS);
}
if (containersToDeleteSize > 0)
{
persistDeletions(id, containersToDelete, INDEX_INFO_CONTAINER_DELETIONS);
}
getWriteLock();
try
@ -1274,7 +1282,7 @@ public class IndexInfo implements IndexMonitor
throw new IndexerException("Deletes and doc count can only be set on a preparing index");
}
entry.setDocumentCount(documents);
entry.setDeletions(toDelete.size());
entry.setDeletions(toDeleteSize + containersToDeleteSize);
entry.setDeletOnlyNodes(deleteNodesOnly);
}
finally
@ -1283,6 +1291,33 @@ public class IndexInfo implements IndexMonitor
}
}
/**
* @param id
* @param toDelete
* @throws IOException
* @throws FileNotFoundException
*/
private void persistDeletions(String id, Set<String> toDelete, String fileName) throws IOException, FileNotFoundException
{
File location = new File(indexDirectory, id).getCanonicalFile();
if (!location.exists())
{
if (!location.mkdirs())
{
throw new IndexerException("Failed to make index directory " + location);
}
}
// Write deletions
DataOutputStream os = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(new File(location, fileName).getCanonicalFile())));
os.writeInt(toDelete.size());
for (String ref : toDelete)
{
os.writeUTF(ref);
}
os.flush();
os.close();
}
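For reference, getDeletions(id, fileName) above reads back exactly this layout: an int count followed by that many UTF entries. A minimal sketch of the read side under that assumption (readDeletionsFile is a hypothetical helper; the matching java.io imports are assumed):
private static Set<String> readDeletionsFile(File file) throws IOException
{
DataInputStream is = new DataInputStream(new BufferedInputStream(new FileInputStream(file)));
try
{
Set<String> refs = new HashSet<String>();
int count = is.readInt();
for (int i = 0; i < count; i++)
{
refs.add(is.readUTF());
}
return refs;
}
finally
{
is.close();
}
}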
private void invalidateMainReadersFromFirst(Set<String> ids) throws IOException
{
boolean found = false;
@ -1413,7 +1448,7 @@ public class IndexInfo implements IndexMonitor
* @return
* @throws IOException
*/
public IndexReader getMainIndexReferenceCountingReadOnlyIndexReader(String id, Set<String> deletions, boolean deleteOnlyNodes) throws IOException
public IndexReader getMainIndexReferenceCountingReadOnlyIndexReader(String id, Set<String> deletions, Set<String> containerDeletions, boolean deleteOnlyNodes) throws IOException
{
if (id == null)
{
@ -1482,13 +1517,13 @@ public class IndexInfo implements IndexMonitor
IndexReader deltaReader = buildAndRegisterDeltaReader(id);
IndexReader reader = null;
if (deletions == null || deletions.size() == 0)
if ((deletions == null || deletions.size() == 0) && (containerDeletions == null || containerDeletions.size() == 0))
{
reader = new MultiReader(new IndexReader[] { mainIndexReader, deltaReader }, false);
}
else
{
IndexReader filterReader = new FilterIndexReaderByStringId("main+id", mainIndexReader, deletions, deleteOnlyNodes);
IndexReader filterReader = new FilterIndexReaderByStringId("main+id", mainIndexReader, deletions, containerDeletions, deleteOnlyNodes);
reader = new MultiReader(new IndexReader[] { filterReader, deltaReader }, false);
// Cancel out extra incRef made by MultiReader
filterReader.decRef();
@ -2254,7 +2289,7 @@ public class IndexInfo implements IndexMonitor
{
try
{
IndexReader filterReader = new FilterIndexReaderByStringId(id, oldReader, getDeletions(entry.getName()), entry.isDeletOnlyNodes());
IndexReader filterReader = new FilterIndexReaderByStringId(id, oldReader, getDeletions(entry.getName(), INDEX_INFO_DELETIONS), getDeletions(entry.getName(), INDEX_INFO_CONTAINER_DELETIONS), entry.isDeletOnlyNodes());
reader = new MultiReader(new IndexReader[] { filterReader, subReader }, false);
// Cancel out the incRef on the filter reader
filterReader.decRef();
@ -3843,7 +3878,8 @@ public class IndexInfo implements IndexMonitor
LinkedHashMap<String, IndexReader> readers = new LinkedHashMap<String, IndexReader>(size);
for (IndexEntry currentDelete : toDelete.values())
{
Set<String> deletions = getDeletions(currentDelete.getName());
Set<String> deletions = getDeletions(currentDelete.getName(), INDEX_INFO_DELETIONS);
Set<String> containerDeletions = getDeletions(currentDelete.getName(), INDEX_INFO_CONTAINER_DELETIONS);
if (!deletions.isEmpty())
{
for (String key : indexes.keySet())
@ -3873,7 +3909,7 @@ public class IndexInfo implements IndexMonitor
readers.put(key, writeableReader);
}
if (currentDelete.isDeletOnlyNodes())
if (currentDelete.isDeletOnlyNodes() && !containerDeletions.contains(stringRef))
{
Searcher writeableSearcher = new IndexSearcher(writeableReader);
hits = writeableSearcher.search(query);
@ -3882,7 +3918,9 @@ public class IndexInfo implements IndexMonitor
for (int i = 0; i < hits.length(); i++)
{
Document doc = hits.doc(i);
if (doc.getField("ISCONTAINER") == null)
// Exclude all containers except the root (which is also a node!)
Field path = doc.getField("PATH");
if (path == null || path.stringValue().length() == 0)
{
writeableReader.deleteDocument(hits.id(i));
invalidIndexes.add(key);
@ -3927,6 +3965,65 @@ public class IndexInfo implements IndexMonitor
}
}
}
if (!containerDeletions.isEmpty())
{
for (String key : indexes.keySet())
{
IndexReader reader = getReferenceCountingIndexReader(key);
Searcher searcher = new IndexSearcher(reader);
try
{
for (String stringRef : containerDeletions)
{
TermQuery query = new TermQuery(new Term("ANCESTOR", stringRef));
Hits hits = searcher.search(query);
if (hits.length() > 0)
{
IndexReader writeableReader = readers.get(key);
if (writeableReader == null)
{
File location = new File(indexDirectory, key).getCanonicalFile();
if (IndexReader.indexExists(location))
{
writeableReader = IndexReader.open(location);
}
else
{
continue;
}
readers.put(key, writeableReader);
}
int deletedCount = 0;
try
{
deletedCount = writeableReader.deleteDocuments(new Term("ANCESTOR", stringRef));
}
catch (IOException ioe)
{
if (s_logger.isDebugEnabled())
{
s_logger.debug("IO Error for " + key);
}
throw ioe;
}
if (deletedCount > 0)
{
if (s_logger.isDebugEnabled())
{
s_logger.debug("Deleted " + deletedCount + " from " + key + " for id " + stringRef + " remaining docs " + writeableReader.numDocs());
}
invalidIndexes.add(key);
}
}
}
}
finally
{
searcher.close();
}
}
}
// The delta we have just processed now must be included when we process the deletions of its successor
indexes.put(currentDelete.getName(), currentDelete);
}

View File

@ -21,6 +21,7 @@ package org.alfresco.repo.search.impl.lucene.index;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashSet;
import junit.framework.TestCase;
@ -109,7 +110,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
ii.closeDeltaIndexWriter(guid);
ii.setStatus(guid, TransactionStatus.PREPARING, null, null);
ii.setPreparedState(guid, deletions, 1, false);
ii.setPreparedState(guid, deletions, Collections.<String>emptySet(), 1, false);
ii.getDeletions(guid);
ii.setStatus(guid, TransactionStatus.PREPARED, null, null);
@ -131,7 +132,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
}
reader.close();
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, deletions, false);
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, deletions, Collections.<String>emptySet(), false);
assertEquals(reader.numDocs(), i + 1);
for (int j = 0; j < WORD_LIST.length; j++)
{
@ -214,7 +215,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
ii.closeDeltaIndexWriter(guid);
ii.setStatus(guid, TransactionStatus.PREPARING, null, null);
ii.setPreparedState(guid, new HashSet<String>(), 1, false);
ii.setPreparedState(guid, Collections.<String>emptySet(), Collections.<String>emptySet(), 1, false);
ii.getDeletions(guid);
ii.setStatus(guid, TransactionStatus.PREPARED, null, null);
@ -236,7 +237,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
}
reader.close();
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, new HashSet<String>(), false);
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, Collections.<String>emptySet(), Collections.<String>emptySet(), false);
assertEquals(reader.numDocs(), i + 1);
for (int j = 0; j < CREATE_LIST.length; j++)
{
@ -290,7 +291,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
ii.setStatus(guid, TransactionStatus.ACTIVE, null, null);
ii.closeDeltaIndexWriter(guid);
ii.setStatus(guid, TransactionStatus.PREPARING, null, null);
ii.setPreparedState(guid, deletions, 1, false);
ii.setPreparedState(guid, deletions, Collections.<String>emptySet(), 1, false);
ii.getDeletions(guid);
ii.setStatus(guid, TransactionStatus.PREPARED, null, null);
@ -314,7 +315,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
}
reader.close();
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, deletions, false);
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, deletions, Collections.<String>emptySet(), false);
assertEquals(reader.numDocs(), UPDATE_LIST.length - i - 1);
lastDoc = -1;
for (int j = 0; j < CREATE_LIST.length; j++)
@ -409,7 +410,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
ii.closeDeltaIndexWriter(guid);
ii.setStatus(guid, TransactionStatus.PREPARING, null, null);
ii.setPreparedState(guid, new HashSet<String>(), 1, false);
ii.setPreparedState(guid, Collections.<String>emptySet(), Collections.<String>emptySet(), 1, false);
ii.getDeletions(guid);
ii.setStatus(guid, TransactionStatus.PREPARED, null, null);
@ -431,7 +432,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
}
reader.close();
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, new HashSet<String>(), false);
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, Collections.<String>emptySet(), Collections.<String>emptySet(), false);
assertEquals(reader.numDocs(), i + 1);
for (int j = 0; j < CREATE_LIST.length; j++)
{
@ -495,7 +496,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
ii.closeDeltaIndexWriter(guid);
ii.setStatus(guid, TransactionStatus.PREPARING, null, null);
ii.setPreparedState(guid, deletions, 1, false);
ii.setPreparedState(guid, deletions, Collections.<String>emptySet(), 1, false);
ii.getDeletions(guid);
ii.setStatus(guid, TransactionStatus.PREPARED, null, null);
@ -534,7 +535,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
}
reader.close();
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, deletions, false);
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, deletions, Collections.<String>emptySet(), false);
assertEquals(reader.numDocs(), UPDATE_LIST.length);
lastDoc = -1;
for (int j = 0; j < CREATE_LIST.length; j++)
@ -684,7 +685,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
ii.closeDeltaIndexWriter(guid);
ii.setStatus(guid, TransactionStatus.PREPARING, null, null);
ii.setPreparedState(guid, new HashSet<String>(), 1, false);
ii.setPreparedState(guid, Collections.<String>emptySet(), Collections.<String>emptySet(), 1, false);
ii.getDeletions(guid);
ii.setStatus(guid, TransactionStatus.PREPARED, null, null);
@ -709,7 +710,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
}
reader.close();
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, new HashSet<String>(), false);
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, Collections.<String>emptySet(), Collections.<String>emptySet(), false);
lastDoc = -1;
for (int j = 0; j < create.length; j++)
{
@ -775,7 +776,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
ii.closeDeltaIndexWriter(guid);
ii.setStatus(guid, TransactionStatus.PREPARING, null, null);
ii.setPreparedState(guid, deletions, 1, false);
ii.setPreparedState(guid, deletions, Collections.<String>emptySet(), 1, false);
ii.getDeletions(guid);
ii.setStatus(guid, TransactionStatus.PREPARED, null, null);
@ -814,7 +815,7 @@ public static final String[] UPDATE_LIST_2 = { "alpha2", "bravo2", "charlie2", "
}
reader.close();
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, deletions, false);
reader = ii.getMainIndexReferenceCountingReadOnlyIndexReader(guid, deletions, Collections.<String>emptySet(), false);
lastDoc = -1;
for (int j = 0; j < create.length; j++)

View File

@ -124,6 +124,7 @@ public abstract class AbstractChainingAuthenticationComponent extends AbstractAu
@Override
public Authentication setCurrentUser(String userName)
{
Exception last = null;
for (AuthenticationComponent authComponent : getUsableAuthenticationComponents())
{
try
@ -132,10 +133,10 @@ public abstract class AbstractChainingAuthenticationComponent extends AbstractAu
}
catch (AuthenticationException e)
{
// Ignore and chain
last = e;
}
}
throw new AuthenticationException("Failed to set current user " + userName);
throw new AuthenticationException("Failed to set current user " + userName, last);
}
/**

View File

@ -25,6 +25,7 @@ import org.alfresco.query.PagingRequest;
import org.alfresco.query.PagingResults;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.security.AuthorityType;
import org.alfresco.service.cmr.security.AuthorityService.AuthorityFilter;
public interface AuthorityDAO
{
@ -62,6 +63,8 @@ public interface AuthorityDAO
*/
Set<String> getContainedAuthorities(AuthorityType type, String parentName, boolean immediate);
public boolean isAuthorityContained(NodeRef authorityNodeRef, String authorityToFind);
/**
* Remove an authority.
*
@ -80,6 +83,20 @@ public interface AuthorityDAO
*/
Set<String> getContainingAuthorities(AuthorityType type, String name, boolean immediate);
/**
* Get a set of authorities with varying filter criteria
*
* @param type authority type or null for all types
* @param authority if non-null, only return those authorities who contain this authority
* @param zoneName if non-null, only include authorities in the named zone
* @param filter optional callback to apply further filter criteria or null
* @param size if greater than zero, the maximum results to return. The search strategy used is varied depending on this number.
* @return a set of authorities
*/
public Set<String> getContainingAuthoritiesInZone(AuthorityType type, String authority, final String zoneName, AuthorityFilter filter, int size);
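A hedged caller-side sketch of this method: authorityDAO is an injected AuthorityDAO reference, the user name, group prefix and result cap are assumptions for illustration, and ZONE_APP_DEFAULT is the standard default zone constant on AuthorityService:
Set<String> containing = authorityDAO.getContainingAuthoritiesInZone(
AuthorityType.GROUP, "jbloggs", AuthorityService.ZONE_APP_DEFAULT,
new AuthorityFilter()
{
public boolean includeAuthority(String authority)
{
return !authority.startsWith("GROUP_site_"); // skip site-internal groups
}
}, 10);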
/**
* Get authorities by type and/or zone
*

View File

@ -44,6 +44,8 @@ import org.alfresco.repo.node.NodeServicePolicies;
import org.alfresco.repo.policy.JavaBehaviour;
import org.alfresco.repo.policy.PolicyComponent;
import org.alfresco.repo.search.impl.lucene.AbstractLuceneQueryParser;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork;
import org.alfresco.repo.security.person.PersonServiceImpl;
import org.alfresco.repo.tenant.TenantService;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport;
@ -61,6 +63,7 @@ import org.alfresco.service.cmr.security.AuthorityType;
import org.alfresco.service.cmr.security.NoSuchPersonException;
import org.alfresco.service.cmr.security.PersonService;
import org.alfresco.service.cmr.security.PersonService.PersonInfo;
import org.alfresco.service.cmr.security.AuthorityService.AuthorityFilter;
import org.alfresco.service.namespace.NamespacePrefixResolver;
import org.alfresco.service.namespace.NamespaceService;
import org.alfresco.service.namespace.QName;
@ -102,8 +105,14 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
private SimpleCache<Pair<String, String>, NodeRef> authorityLookupCache;
private static final NodeRef NULL_NODEREF = new NodeRef("null", "null", "null");
private SimpleCache<String, Set<String>> userAuthorityCache;
private SimpleCache<Pair<String, String>, List<ChildAssociationRef>> zoneAuthorityCache;
private SimpleCache<NodeRef, List<ChildAssociationRef>> childAuthorityCache;
/** System Container ref cache (Tenant aware) */
private Map<String, NodeRef> systemContainerRefs = new ConcurrentHashMap<String, NodeRef>(4);
@ -111,6 +120,9 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
private PolicyComponent policyComponent;
/** The number of authorities in a zone to pre-cache, allowing quick generation of 'first n' results. */
private int zoneAuthoritySampleSize = 10000;
private NamedObjectRegistry<CannedQueryFactory<AuthorityInfo>> cannedQueryRegistry;
public AuthorityDAOImpl()
@ -118,6 +130,19 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
super();
}
/**
* Sets the number of authorities in a zone to pre-cache, allowing quick generation of 'first n' results and adaptation of
* search technique based on hit rate.
*
* @param zoneAuthoritySampleSize
* the zoneAuthoritySampleSize to set
*/
public void setZoneAuthoritySampleSize(int zoneAuthoritySampleSize)
{
this.zoneAuthoritySampleSize = zoneAuthoritySampleSize;
}
public void setStoreUrl(String storeUrl)
{
this.storeRef = new StoreRef(storeUrl);
@ -156,6 +181,16 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
this.userAuthorityCache = userAuthorityCache;
}
public void setZoneAuthorityCache(SimpleCache<Pair<String, String>, List<ChildAssociationRef>> zoneAuthorityCache)
{
this.zoneAuthorityCache = zoneAuthorityCache;
}
public void setChildAuthorityCache(SimpleCache<NodeRef, List<ChildAssociationRef>> childAuthorityCache)
{
this.childAuthorityCache = childAuthorityCache;
}
public void setPersonService(PersonService personService)
{
this.personService = personService;
@ -208,6 +243,7 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
throw new AlfrescoRuntimeException("Authorities of the type " + authorityType
+ " may not be added to other authorities");
}
childAuthorityCache.remove(parentRef);
parentRefs.add(parentRef);
}
NodeRef childRef = getAuthorityOrNull(childName);
@ -247,10 +283,13 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
if (authorityZones != null)
{
Set<NodeRef> zoneRefs = new HashSet<NodeRef>(authorityZones.size() * 2);
String currentUserDomain = tenantService.getCurrentUserDomain();
for (String authorityZone : authorityZones)
{
zoneRefs.add(getOrCreateZone(authorityZone));
zoneAuthorityCache.remove(new Pair<String, String>(currentUserDomain, authorityZone));
}
zoneAuthorityCache.remove(new Pair<String, String>(currentUserDomain, null));
nodeService.addChild(zoneRefs, childRef, ContentModel.ASSOC_IN_ZONE, QName.createQName("cm", name, namespacePrefixResolver));
}
authorityLookupCache.put(cacheKey(name), childRef);
@ -269,9 +308,17 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
{
throw new UnknownAuthorityException("An authority was not found for " + name);
}
nodeService.deleteNode(nodeRef);
String currentUserDomain = tenantService.getCurrentUserDomain();
for (String authorityZone : getAuthorityZones(name))
{
zoneAuthorityCache.remove(new Pair<String, String>(currentUserDomain, authorityZone));
}
zoneAuthorityCache.remove(new Pair<String, String>(currentUserDomain, null));
removeParentsFromChildAuthorityCache(nodeRef);
authorityLookupCache.remove(cacheKey(name));
userAuthorityCache.clear();
nodeService.deleteNode(nodeRef);
}
// Get authorities by type and/or zone (both cannot be null)
@ -626,6 +673,7 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
throw new UnknownAuthorityException("An authority was not found for " + childName);
}
nodeService.removeChild(parentRef, childRef);
childAuthorityCache.remove(parentRef);
if (AuthorityType.getAuthorityType(childName) == AuthorityType.USER)
{
userAuthorityCache.remove(childName);
@ -671,6 +719,94 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
}
}
public Set<String> getContainingAuthoritiesInZone(AuthorityType type, String authority, final String zoneName, AuthorityFilter filter, int size)
{
// Retrieve the cached 'sample' of authorities in the zone
String currentUserDomain = tenantService.getCurrentUserDomain();
Pair<String, String> cacheKey = new Pair<String, String>(currentUserDomain, zoneName);
List<ChildAssociationRef> zoneAuthorities = zoneAuthorityCache.get(cacheKey);
final int maxToProcess = Math.max(size, zoneAuthoritySampleSize);
if (zoneAuthorities == null)
{
zoneAuthorities = AuthenticationUtil.runAs(new RunAsWork<List<ChildAssociationRef>>()
{
@Override
public List<ChildAssociationRef> doWork() throws Exception
{
NodeRef root = zoneName == null ? getAuthorityContainer() : getZone(zoneName);
if (root == null)
{
return Collections.emptyList();
}
return nodeService.getChildAssocs(root, null, null, maxToProcess, false);
}
}, tenantService.getDomainUser(AuthenticationUtil.getSystemUserName(), currentUserDomain));
zoneAuthorityCache.put(cacheKey, zoneAuthorities);
}
// Now search each for the required authority. If the number of results is greater than or close to the size
// limit, then this will be the most efficient route
Set<String> result = new TreeSet<String>();
final int maxResults = size > 0 ? size : Integer.MAX_VALUE;
int hits = 0, processed = 0;
for (ChildAssociationRef groupAssoc : zoneAuthorities)
{
String containing = groupAssoc.getQName().getLocalName();
AuthorityType containingType = AuthorityType.getAuthorityType(containing);
processed++;
// Cache the authority by key, if appropriate
switch (containingType)
{
case USER:
case ADMIN:
case GUEST:
break;
default:
Pair<String, String> containingKey = cacheKey(containing);
if (!authorityLookupCache.contains(containingKey))
{
authorityLookupCache.put(containingKey, groupAssoc.getChildRef());
}
}
if ((type == null || containingType == type)
&& (authority == null || isAuthorityContained(groupAssoc.getChildRef(), authority))
&& (filter == null || filter.includeAuthority(containing)))
{
result.add(containing);
if (++hits == maxResults)
{
break;
}
}
// If this top-down search is not providing an adequate hit count then resort to a naive unlimited search
if (processed >= maxToProcess)
{
if (authority == null)
{
return new HashSet<String>(getAuthorities(type, zoneName, null, false, true, new PagingRequest(0, maxResults, null)).getPage());
}
Set<String> newResult = getContainingAuthorities(type, authority, false);
result.clear();
int i = 0;
for (String container : newResult)
{
if ((filter == null || filter.includeAuthority(container))
&& (zoneName == null || getAuthorityZones(container).contains(zoneName)))
{
result.add(container);
if (++i >= maxResults)
{
break;
}
}
}
break;
}
}
return result;
}
public String getShortName(String name)
{
AuthorityType type = AuthorityType.getAuthorityType(name);
@ -805,6 +941,44 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
}
}
// Take advantage of the fact that the authority name is on the child association
public boolean isAuthorityContained(NodeRef authorityNodeRef, String authorityToFind)
{
List<ChildAssociationRef> cars = childAuthorityCache.get(authorityNodeRef);
if (cars == null)
{
cars = nodeService.getChildAssocs(authorityNodeRef, RegexQNamePattern.MATCH_ALL,
RegexQNamePattern.MATCH_ALL, false);
childAuthorityCache.put(authorityNodeRef, cars);
}
// Loop over children recursively to find authorityToFind
for (ChildAssociationRef car : cars)
{
String authorityName = car.getQName().getLocalName();
if (authorityToFind.equals(authorityName)
|| AuthorityType.getAuthorityType(authorityName) != AuthorityType.USER
&& isAuthorityContained(car.getChildRef(), authorityToFind))
{
return true;
}
}
return false;
}
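Illustrative only: combined with getAuthorityOrNull below, this gives a cached transitive membership test; the group and user names are assumptions:
NodeRef groupRef = getAuthorityOrNull("GROUP_site_example_SiteManager");
boolean isMember = groupRef != null && isAuthorityContained(groupRef, "jbloggs");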
private void removeParentsFromChildAuthorityCache(NodeRef nodeRef)
{
for (ChildAssociationRef car: nodeService.getParentAssocs(nodeRef))
{
NodeRef parentRef = car.getParentRef();
if (dictionaryService.isSubClass(nodeService.getType(parentRef), ContentModel.TYPE_AUTHORITY_CONTAINER))
{
childAuthorityCache.remove(parentRef);
}
}
}
private NodeRef getAuthorityOrNull(String name)
{
try
@ -829,13 +1003,10 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
{
List<ChildAssociationRef> results = nodeService.getChildAssocs(getAuthorityContainer(),
ContentModel.ASSOC_CHILDREN, QName.createQName("cm", name, namespacePrefixResolver), false);
if (!results.isEmpty())
{
result = results.get(0).getChildRef();
authorityLookupCache.put(cacheKey, result);
}
result = results.isEmpty() ? NULL_NODEREF : results.get(0).getChildRef();
authorityLookupCache.put(cacheKey, result);
}
return result;
return result == NULL_NODEREF ? null : result;
}
}
catch (NoSuchPersonException e)
@ -1084,6 +1255,7 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
public void beforeDeleteNode(NodeRef nodeRef)
{
userAuthorityCache.remove(getAuthorityName(nodeRef));
removeParentsFromChildAuthorityCache(nodeRef);
}
public void onUpdateProperties(NodeRef nodeRef, Map<QName, Serializable> before, Map<QName, Serializable> after)
@ -1110,7 +1282,6 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
aclDao.renameAuthority(authBefore, authAfter);
}
// Fix primary association local name
QName newAssocQName = QName.createQName("cm", authAfter, namespacePrefixResolver);
ChildAssociationRef assoc = nodeService.getPrimaryParent(nodeRef);
@ -1137,7 +1308,7 @@ public class AuthorityDAOImpl implements AuthorityDAO, NodeServicePolicies.Befor
{
userAuthorityCache.remove(authBefore);
}
removeParentsFromChildAuthorityCache(nodeRef);
}
else
{

View File

@ -18,12 +18,15 @@
*/
package org.alfresco.repo.security.authority;
import java.util.AbstractSet;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;
import org.alfresco.query.PagingRequest;
import org.alfresco.query.PagingResults;
@ -123,6 +126,7 @@ public class AuthorityServiceImpl implements AuthorityService, InitializingBean
this.guestGroups = guestGroups;
}
@Override
public void afterPropertiesSet() throws Exception
{
// Fully qualify the admin group names
@ -199,6 +203,32 @@ public class AuthorityServiceImpl implements AuthorityService, InitializingBean
return getAuthoritiesForUser(canonicalName).contains(PermissionService.GUEST_AUTHORITY);
}
/**
* Checks if the {@code authority} (normally a username) is the same as or is contained
* within the {@code parentAuthority}.
* @param authority
* @param parentAuthority a normalized, case sensitive authority name
* @return {@code true} if it does, {@code false} otherwise.
*/
private boolean hasAuthority(String authority, String parentAuthority)
{
if (parentAuthority.equals(authority))
{
return true;
}
// Even users are matched case sensitively in ACLs
if (AuthorityType.getAuthorityType(parentAuthority) == AuthorityType.USER)
{
return false;
}
NodeRef nodeRef = authorityDAO.getAuthorityNodeRefOrNull(parentAuthority);
if (nodeRef == null)
{
return false;
}
return authorityDAO.isAuthorityContained(nodeRef, authority);
}
/**
* {@inheritDoc}
*/
@@ -214,16 +244,17 @@ public class AuthorityServiceImpl implements AuthorityService, InitializingBean
*/
public Set<String> getAuthoritiesForUser(String currentUserName)
{
Set<String> authorities = new HashSet<String>(64);
return new UserAuthoritySet(currentUserName);
}
authorities.addAll(getContainingAuthorities(null, currentUserName, false));
// Work out mapped roles
// Return mapped roles
private Set<String> getRoleAuthorities(String currentUserName)
{
Set<String> authorities = new TreeSet<String>();
// Check named guest and admin users
Set<String> adminUsers = this.authenticationService.getDefaultAdministratorUserNames();
Set<String> guestUsers = this.authenticationService.getDefaultGuestUserNames();
Set<String> adminUsers = authenticationService.getDefaultAdministratorUserNames();
Set<String> guestUsers = authenticationService.getDefaultGuestUserNames();
String defaultGuestName = AuthenticationUtil.getGuestUserName();
if (defaultGuestName != null && defaultGuestName.length() > 0)
@@ -236,23 +267,32 @@ public class AuthorityServiceImpl implements AuthorityService, InitializingBean
boolean isGuestUser = containsMatch(guestUsers, currentUserName);
// Check if any of the user's groups are listed as admin groups
if (!isAdminUser && !adminGroups.isEmpty())
if (!isAdminUser)
{
for (String authority : authorities)
for (String authority : adminGroups)
{
if (adminGroups.contains(authority) || adminGroups.contains(tenantService.getBaseNameUser(authority)))
if (hasAuthority(currentUserName, authority) || hasAuthority(currentUserName, tenantService.getBaseNameUser(authority)))
{
isAdminUser = true;
break;
}
}
}
// Check if any of the user's groups are listed as guest groups
if (!isAdminUser && !isGuestUser && !guestGroups.isEmpty())
// Check if the user name matches the guest user name (ignoring case); if so, it's a guest. Code originally in PermissionService.
if (!isAdminUser && !isGuestUser &&
tenantService.getBaseNameUser(currentUserName).equalsIgnoreCase(AuthenticationUtil.getGuestUserName()))
{
for (String authority : authorities)
isGuestUser = true;
}
// Check if any of the user's groups are listed as guest groups
if (!isAdminUser && !isGuestUser)
{
for (String authority : guestGroups)
{
if (guestGroups.contains(authority) || guestGroups.contains(tenantService.getBaseNameUser(authority)))
if (hasAuthority(currentUserName, authority) || hasAuthority(currentUserName, tenantService.getBaseNameUser(authority)))
{
isGuestUser = true;
break;
@@ -274,6 +314,7 @@ public class AuthorityServiceImpl implements AuthorityService, InitializingBean
{
authorities.addAll(guestSet);
}
return authorities;
}
@@ -501,6 +542,12 @@ public class AuthorityServiceImpl implements AuthorityService, InitializingBean
/**
* {@inheritDoc}
*/
public Set<String> getContainingAuthoritiesInZone(AuthorityType type, String authority, final String zoneName, AuthorityFilter filter, int size)
{
return authorityDAO.getContainingAuthoritiesInZone(type, authority, zoneName, filter, size);
}
@Override
public void removeAuthority(String parentName, String childName)
{
authorityDAO.removeAuthority(parentName, childName);
@@ -645,4 +692,118 @@ public class AuthorityServiceImpl implements AuthorityService, InitializingBean
{
return authorityDAO.getShortName(name);
}
/**
* Lazily loaded set of authorities. Avoid iterating over it or asking for its size, since
* either forces the full set to be evaluated. Needed where there are large numbers of sites/groups.
*
* @author David Ward, Alan Davis
*/
public final class UserAuthoritySet extends AbstractSet<String>
{
private final String username;
private Set<String> positiveHits;
private Set<String> negativeHits;
private boolean allAuthoritiesLoaded;
/**
* @param username the user whose authorities this set represents
*/
public UserAuthoritySet(String username)
{
this.username = username;
positiveHits = getRoleAuthorities(username);
negativeHits = new TreeSet<String>();
}
// Try to avoid evaluating the full set unless we have to!
private Set<String> getAllAuthorities()
{
if (!allAuthoritiesLoaded)
{
allAuthoritiesLoaded = true;
Set<String> tmp = positiveHits; // must add role authorities back in.
positiveHits = getContainingAuthorities(null, username, false);
positiveHits.addAll(tmp);
negativeHits = null;
}
return positiveHits;
}
@Override
public boolean removeAll(Collection<?> c)
{
throw new UnsupportedOperationException();
}
@Override
public boolean add(String e)
{
return positiveHits.add(e);
}
@Override
public void clear()
{
throw new UnsupportedOperationException();
}
@Override
public boolean contains(Object o)
{
if (!(o instanceof String))
{
return false;
}
if (positiveHits.contains(o))
{
return true;
}
if (allAuthoritiesLoaded || negativeHits.contains(o))
{
return false;
}
// Remember positive and negative hits for next time
if (hasAuthority(username, (String) o))
{
positiveHits.add((String) o);
return true;
}
else
{
negativeHits.add((String) o);
return false;
}
}
@Override
public boolean remove(Object o)
{
throw new UnsupportedOperationException();
}
@Override
public boolean retainAll(Collection<?> c)
{
throw new UnsupportedOperationException();
}
@Override
public Iterator<String> iterator()
{
return getAllAuthorities().iterator();
}
@Override
public int size()
{
return getAllAuthorities().size();
}
public Object getUsername()
{
return username;
}
}
}
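A usage sketch may help here, since the class comment warns against iteration: for a UserAuthoritySet, contains() is the cheap operation, while iterator() and size() force a full authority walk. A minimal sketch, assuming an injected AuthorityService; the user and group names are illustrative and not part of this change:

import java.util.Set;

import org.alfresco.service.cmr.security.AuthorityService;

public class LazyAuthoritySetExample
{
    private AuthorityService authorityService; // assumed to be injected

    public boolean isSiteManager(String userName, String siteShortName)
    {
        // Returns a UserAuthoritySet: beyond the mapped admin/guest roles,
        // nothing is evaluated up front.
        Set<String> auths = authorityService.getAuthoritiesForUser(userName);

        // Cheap path: consults the positive/negative hit caches and, on a miss,
        // performs a single targeted containment check for this one group.
        // Iterating the set or calling size() would instead trigger
        // getAllAuthorities(), i.e. a full getContainingAuthorities() evaluation.
        return auths.contains("GROUP_site_" + siteShortName + "_SiteManager");
    }
}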


@@ -39,6 +39,7 @@ import org.alfresco.repo.policy.JavaBehaviour;
import org.alfresco.repo.policy.PolicyComponent;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork;
import org.alfresco.repo.security.authority.AuthorityServiceImpl;
import org.alfresco.repo.security.permissions.ACLType;
import org.alfresco.repo.security.permissions.AccessControlEntry;
import org.alfresco.repo.security.permissions.AccessControlList;
@@ -68,11 +69,11 @@ import org.alfresco.service.namespace.NamespaceService;
import org.alfresco.service.namespace.QName;
import org.alfresco.util.EqualsHelper;
import org.alfresco.util.Pair;
import org.alfresco.util.PropertyCheck;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.context.ApplicationEvent;
import org.springframework.extensions.surf.util.AbstractLifecycleBean;
import org.alfresco.util.PropertyCheck;
/**
* The Alfresco implementation of a permissions service against our APIs for the permissions model and permissions
@@ -279,6 +280,26 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
accessCache.clear();
}
/**
* Cache clear on create of a child association from an authority container.
*
* @param childAssocRef
*/
public void onCreateChildAssociation(ChildAssociationRef childAssocRef)
{
accessCache.clear();
}
/**
* Cache clear on delete of a child association from an authority container.
*
* @param childAssocRef
*/
public void beforeDeleteChildAssociation(ChildAssociationRef childAssocRef)
{
accessCache.clear();
}
@Override
protected void onBootstrap(ApplicationEvent event)
{
@@ -307,6 +328,9 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
public void init()
{
policyComponent.bindClassBehaviour(QName.createQName(NamespaceService.ALFRESCO_URI, "onMoveNode"), ContentModel.TYPE_BASE, new JavaBehaviour(this, "onMoveNode"));
policyComponent.bindClassBehaviour(QName.createQName(NamespaceService.ALFRESCO_URI, "onCreateChildAssociation"), ContentModel.TYPE_AUTHORITY_CONTAINER, new JavaBehaviour(this, "onCreateChildAssociation"));
policyComponent.bindClassBehaviour(QName.createQName(NamespaceService.ALFRESCO_URI, "beforeDeleteChildAssociation"), ContentModel.TYPE_AUTHORITY_CONTAINER, new JavaBehaviour(this, "beforeDeleteChildAssociation"));
}
//
@@ -474,10 +498,13 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
PermissionContext context = new PermissionContext(typeQname);
context.getAspects().addAll(aspectQNames);
Authentication auth = AuthenticationUtil.getRunAsAuthentication();
String user = AuthenticationUtil.getRunAsUser();
for (String dynamicAuthority : getDynamicAuthorities(auth, nodeRef, perm))
if (auth != null)
{
context.addDynamicAuthorityAssignment(user, dynamicAuthority);
String user = AuthenticationUtil.getRunAsUser();
for (String dynamicAuthority : getDynamicAuthorities(auth, nodeRef, perm))
{
context.addDynamicAuthorityAssignment(user, dynamicAuthority);
}
}
return hasPermission(properties.getId(), context, perm);
}
@@ -711,12 +738,43 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
{
LinkedHashSet<Serializable> key = new LinkedHashSet<Serializable>();
key.add(perm.toString());
key.addAll(auths);
// We will just have to key our dynamic sets by username. We wrap it so as not to be confused with a static set
if (auths instanceof AuthorityServiceImpl.UserAuthoritySet)
{
key.add((Serializable)Collections.singleton(((AuthorityServiceImpl.UserAuthoritySet)auths).getUsername()));
}
else
{
key.addAll(auths);
}
key.add(nodeRef);
key.add(type);
return key;
}
/**
* Get the core authorisations for this {@code auth}. If {@code null}, this
* will be an empty set. Otherwise it will be a lazily loaded set of authorities
* from the authority node structure PLUS any granted authorities.
*/
private Set<String> getCoreAuthorisations(Authentication auth)
{
if (auth == null)
{
return Collections.<String>emptySet();
}
User user = (User) auth.getPrincipal();
String username = user.getUsername();
Set<String> auths = authorityService.getAuthoritiesForUser(username);
for (GrantedAuthority grantedAuthority : auth.getAuthorities())
{
auths.add(grantedAuthority.getAuthority());
}
return auths;
}
/**
* Get the authorisations for the currently authenticated user
*
@@ -725,41 +783,17 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
*/
private Set<String> getAuthorisations(Authentication auth, NodeRef nodeRef, PermissionReference required)
{
HashSet<String> auths = new HashSet<String>();
// No authenticated user then no permissions
if (auth == null)
Set<String> auths = getCoreAuthorisations(auth);
if (auth != null)
{
return auths;
auths.addAll(getDynamicAuthorities(auth, nodeRef, required));
}
// TODO: Refactor and use the authentication service for this.
User user = (User) auth.getPrincipal();
String username = user.getUsername();
auths.add(username);
if (tenantService.getBaseNameUser(username).equalsIgnoreCase(AuthenticationUtil.getGuestUserName()))
{
auths.add(PermissionService.GUEST_AUTHORITY);
}
for (GrantedAuthority authority : auth.getAuthorities())
{
auths.add(authority.getAuthority());
}
auths.addAll(getDynamicAuthorities(auth, nodeRef, required));
auths.addAll(authorityService.getAuthoritiesForUser(username));
return auths;
}
private Set<String> getDynamicAuthorities(Authentication auth, NodeRef nodeRef, PermissionReference required)
{
HashSet<String> auths = new HashSet<String>(64);
if (auth == null)
{
return auths;
}
Set<String> dynAuths = new HashSet<String>(64);
User user = (User) auth.getPrincipal();
String username = user.getUsername();
@@ -775,49 +809,44 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
{
if (da.hasAuthority(nodeRef, username))
{
auths.add(da.getAuthority());
dynAuths.add(da.getAuthority());
}
}
}
}
}
auths.addAll(authorityService.getAuthoritiesForUser(user.getUsername()));
return auths;
return dynAuths;
}
private Set<String> getAuthorisations(Authentication auth, PermissionContext context)
{
HashSet<String> auths = new HashSet<String>();
// No authenticated user then no permissions
if (auth == null)
Set<String> auths = getCoreAuthorisations(auth);
if (auth != null)
{
return auths;
}
// TODO: Refactor and use the authentication service for this.
User user = (User) auth.getPrincipal();
auths.add(user.getUsername());
for (GrantedAuthority authority : auth.getAuthorities())
{
auths.add(authority.getAuthority());
}
auths.addAll(authorityService.getAuthoritiesForUser(user.getUsername()));
if (context != null)
{
Map<String, Set<String>> dynamicAuthorityAssignments = context.getDynamicAuthorityAssignment();
HashSet<String> dynAuths = new HashSet<String>();
for (String current : auths)
if (context != null)
{
Set<String> dynos = dynamicAuthorityAssignments.get(current);
auths.addAll(getDynamicAuthorities(auth, context, auths));
}
}
return auths;
}
private Set<String> getDynamicAuthorities(Authentication auth, PermissionContext context, Set<String> auths)
{
Set<String> dynAuths = new HashSet<String>();
Map<String, Set<String>> dynamicAuthorityAssignments = context.getDynamicAuthorityAssignment();
for (String dynKey : dynamicAuthorityAssignments.keySet())
{
if (auths.contains(dynKey))
{
Set<String> dynos = dynamicAuthorityAssignments.get(dynKey);
if (dynos != null)
{
dynAuths.addAll(dynos);
}
}
auths.addAll(dynAuths);
}
return auths;
return dynAuths;
}
public NodePermissionEntry explainPermission(NodeRef nodeRef, PermissionReference perm)
@@ -1161,25 +1190,11 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
// test acl readers
Set<String> aclReaders = getReaders(aclId);
// check whether any of the ACL readers appears in the (possibly lazy) authority set
if(authorities.size() < aclReaders.size())
for(String auth : aclReaders)
{
for(String auth : authorities)
if(authorities.contains(auth))
{
if(aclReaders.contains(auth))
{
return AccessStatus.ALLOWED;
}
}
}
else
{
for(String auth : aclReaders)
{
if(authorities.contains(auth))
{
return AccessStatus.ALLOWED;
}
return AccessStatus.ALLOWED;
}
}
@@ -1641,29 +1656,6 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
// any deny denies
// if (false)
// {
// if (denied != null)
// {
// for (String auth : authorisations)
// {
// Pair<String, PermissionReference> specific = new Pair<String, PermissionReference>(auth, required);
// if (denied.contains(specific))
// {
// return false;
// }
// for (PermissionReference perm : granters)
// {
// specific = new Pair<String, PermissionReference>(auth, perm);
// if (denied.contains(specific))
// {
// return false;
// }
// }
// }
// }
// }
// If the permission has a match in both the authorities and
// granters list it is allowed
// It applies to the current user and it is granted
@@ -1918,29 +1910,6 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
// any deny denies
// if (false)
// {
// if (denied != null)
// {
// for (String auth : authorisations)
// {
// Pair<String, PermissionReference> specific = new Pair<String, PermissionReference>(auth, required);
// if (denied.contains(specific))
// {
// return false;
// }
// for (PermissionReference perm : granters)
// {
// specific = new Pair<String, PermissionReference>(auth, perm);
// if (denied.contains(specific))
// {
// return false;
// }
// }
// }
// }
// }
// If the permission has a match in both the authorities and
// granters list it is allowed
// It applies to the current user and it is granted
@@ -2336,34 +2305,19 @@ public class PermissionServiceImpl extends AbstractLifecycleBean implements Perm
public Set<String> getAuthorisations()
{
// Use TX cache
@SuppressWarnings("unchecked")
Set<String> auths = (Set<String>) AlfrescoTransactionSupport.getResource("MyAuthCache");
Authentication auth = AuthenticationUtil.getRunAsAuthentication();
User user = (User) auth.getPrincipal();
if(auths != null)
if (auths != null)
{
if(!auths.contains(user.getUsername()))
if (auth == null || !auths.contains(((User)auth.getPrincipal()).getUsername()))
{
auths = null;
}
}
if (auths == null)
{
auths = new HashSet<String>();
// No authenticated user then no permissions
if (auth != null)
{
auths.add(user.getUsername());
for (GrantedAuthority authority : auth.getAuthorities())
{
auths.add(authority.getAuthority());
}
auths.addAll(authorityService.getAuthoritiesForUser(user.getUsername()));
}
auths = getCoreAuthorisations(auth);
AlfrescoTransactionSupport.bindResource("MyAuthCache", auths);
}
return Collections.unmodifiableSet(auths);


@@ -0,0 +1,94 @@
/*
* Copyright (C) 2005-2011 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.repo.site;
import java.util.List;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork;
import org.alfresco.repo.tenant.Tenant;
import org.alfresco.repo.tenant.TenantAdminService;
import org.alfresco.service.cmr.site.SiteService;
import org.springframework.context.ApplicationEvent;
import org.springframework.extensions.surf.util.AbstractLifecycleBean;
/**
* Warms up site zone / authority caches before the first access to a user dashboard
*
* @author dward
*/
public class SiteServiceBootstrap extends AbstractLifecycleBean
{
private SiteService siteService;
private TenantAdminService tenantAdminService;
public void setSiteService(SiteService siteService)
{
this.siteService = siteService;
}
public void setTenantAdminService(TenantAdminService tenantAdminService)
{
this.tenantAdminService = tenantAdminService;
}
/*
* (non-Javadoc)
* @see org.springframework.extensions.surf.util.AbstractLifecycleBean#onBootstrap(org.springframework.context.
* ApplicationEvent)
*/
@Override
protected void onBootstrap(ApplicationEvent event)
{
AuthenticationUtil.runAs(new RunAsWork<Object>()
{
public Object doWork() throws Exception
{
siteService.listSites("a");
return null;
}
}, AuthenticationUtil.getSystemUserName());
if (tenantAdminService.isEnabled())
{
List<Tenant> tenants = tenantAdminService.getAllTenants();
for (Tenant tenant : tenants)
{
AuthenticationUtil.runAs(new RunAsWork<Object>()
{
public Object doWork() throws Exception
{
siteService.listSites("a");
return null;
}
}, tenantAdminService.getDomainUser(AuthenticationUtil.getSystemUserName(), tenant.getTenantDomain()));
}
}
}
/*
* (non-Javadoc)
* @see org.springframework.extensions.surf.util.AbstractLifecycleBean#onShutdown(org.springframework.context.
* ApplicationEvent)
*/
@Override
protected void onShutdown(ApplicationEvent event)
{
}
}


@@ -69,7 +69,9 @@ import org.alfresco.service.cmr.repository.ChildAssociationRef;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.cmr.repository.StoreRef;
import org.alfresco.service.cmr.search.LimitBy;
import org.alfresco.service.cmr.search.ResultSet;
import org.alfresco.service.cmr.search.SearchParameters;
import org.alfresco.service.cmr.search.SearchService;
import org.alfresco.service.cmr.security.AccessPermission;
import org.alfresco.service.cmr.security.AccessStatus;
@@ -78,6 +80,7 @@ import org.alfresco.service.cmr.security.AuthorityType;
import org.alfresco.service.cmr.security.NoSuchPersonException;
import org.alfresco.service.cmr.security.PermissionService;
import org.alfresco.service.cmr.security.PersonService;
import org.alfresco.service.cmr.security.AuthorityService.AuthorityFilter;
import org.alfresco.service.cmr.site.SiteInfo;
import org.alfresco.service.cmr.site.SiteService;
import org.alfresco.service.cmr.site.SiteVisibility;
@@ -786,11 +789,16 @@ public class SiteServiceImpl extends AbstractLifecycleBean implements SiteServic
query.append(")");
}
ResultSet results = this.searchService.query(
siteRoot.getStoreRef(),
SearchService.LANGUAGE_LUCENE,
query.toString(),
null);
SearchParameters sp = new SearchParameters();
sp.addStore(siteRoot.getStoreRef());
sp.setLanguage(SearchService.LANGUAGE_LUCENE);
sp.setQuery(query.toString());
if (size != 0)
{
sp.setLimit(size);
sp.setLimitBy(LimitBy.FINAL_SIZE);
}
ResultSet results = this.searchService.query(sp);
try
{
result = new ArrayList<SiteInfo>(results.length());
@@ -798,11 +806,9 @@ public class SiteServiceImpl extends AbstractLifecycleBean implements SiteServic
{
// Ignore any node type that is not a "site"
QName siteClassName = this.nodeService.getType(site);
if (this.dictionaryService.isSubClass(siteClassName, SiteModel.TYPE_SITE) == true)
if (this.dictionaryService.isSubClass(siteClassName, SiteModel.TYPE_SITE))
{
result.add(createSiteInfo(site));
// break on max size limit reached
if (result.size() == size) break;
}
}
}
@@ -864,6 +870,14 @@ public class SiteServiceImpl extends AbstractLifecycleBean implements SiteServic
* @see org.alfresco.service.cmr.site.SiteService#listSites(java.lang.String)
*/
public List<SiteInfo> listSites(final String userName)
{
return listSites(userName, 0);
}
/**
* @see org.alfresco.service.cmr.site.SiteService#listSites(java.lang.String, int)
*/
public List<SiteInfo> listSites(final String userName, final int size)
{
// MT share - for activity service system callback
if (tenantService.isEnabled() && (AuthenticationUtil.SYSTEM_USER_NAME.equals(AuthenticationUtil.getRunAsUser())) && tenantService.isTenantUser(userName))
@@ -874,13 +888,13 @@ public class SiteServiceImpl extends AbstractLifecycleBean implements SiteServic
{
public List<SiteInfo> doWork() throws Exception
{
return listSitesImpl(userName);
return listSitesImpl(userName, size);
}
}, tenantService.getDomainUser(AuthenticationUtil.getSystemUserName(), tenantDomain));
}
else
{
return listSitesImpl(userName);
return listSitesImpl(userName, size);
}
}
@@ -961,69 +975,59 @@ public class SiteServiceImpl extends AbstractLifecycleBean implements SiteServic
* @param userName the username
* @return a list of {@link SiteInfo site infos}.
*/
private List<SiteInfo> listSitesImpl(String userName)
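/**
 * Strips a site group name (e.g. GROUP_site_example_SiteManager) down to the site
 * short name, or returns null if the group is not a site group.
 */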
private String resolveSite(String group)
{
List<SiteInfo> result = null;
// get the Groups this user is contained within (at any level)
Set<String> groups = this.authorityService.getContainingAuthorities(null, userName, false);
Set<String> siteNames = new HashSet<String>(groups.size());
// purge non Site related Groups and strip the group name down to the site "shortName" it relates to
for (String group : groups)
// purge non Site related Groups and strip the group name down to the site "shortName" it relates to
if (group.startsWith(GROUP_SITE_PREFIX))
{
if (group.startsWith(GROUP_SITE_PREFIX))
int roleIndex = group.lastIndexOf('_');
if (roleIndex + 1 <= GROUP_SITE_PREFIX_LENGTH)
{
int roleIndex = group.lastIndexOf('_');
String siteName;
if (roleIndex + 1 <= GROUP_SITE_PREFIX_LENGTH)
{
// There is no role associated
siteName = group.substring(GROUP_SITE_PREFIX_LENGTH);
}
else
{
siteName = group.substring(GROUP_SITE_PREFIX_LENGTH, roleIndex);
}
siteNames.add(siteName);
// There is no role associated
return group.substring(GROUP_SITE_PREFIX_LENGTH);
}
else
{
return group.substring(GROUP_SITE_PREFIX_LENGTH, roleIndex);
}
}
return null;
}
// retrieve the site nodes based on the list from the containing site groups
NodeRef siteRoot = getSiteRoot();
if (siteRoot == null)
{
result = Collections.emptyList();
}
else
{
List<String> siteList = new ArrayList<String>(siteNames);
// ensure we do not trip over the getChildrenByName() 1000 item API limit!
//
// Note the implicit assumption here: that the specified user is not a member of > 1000 sites
// If the user IS a member of more than 1000 sites, then a truncated list of sites will be returned.
// Also, given that the siteNames are a Set<String>, there is no guarantee about which sites would be
// included in the truncated results and which would be excluded. HashSets are unordered.
if (siteList.size() > 1000)
private List<SiteInfo> listSitesImpl(final String userName, int size)
{
final int maxResults = size > 0 ? size : 1000;
final Set<String> siteNames = new TreeSet<String>();
authorityService.getContainingAuthoritiesInZone(AuthorityType.GROUP, userName, AuthorityService.ZONE_APP_SHARE, new AuthorityFilter(){
@Override
public boolean includeAuthority(String authority)
{
siteList = siteList.subList(0, 1000);
}
List<ChildAssociationRef> assocs = this.nodeService.getChildrenByName(
siteRoot,
ContentModel.ASSOC_CONTAINS,
siteList);
result = new ArrayList<SiteInfo>(assocs.size());
for (ChildAssociationRef assoc : assocs)
{
// Ignore any node that is not a "site" type
NodeRef site = assoc.getChildRef();
QName siteClassName = this.directNodeService.getType(site);
if (this.dictionaryService.isSubClass(siteClassName, SiteModel.TYPE_SITE))
String siteName = resolveSite(authority);
if (siteName == null)
{
result.add(createSiteInfo(site));
return false;
}
return siteNames.add(siteName);
}}, maxResults);
if (siteNames.isEmpty())
{
return Collections.emptyList();
}
List<ChildAssociationRef> assocs = this.nodeService.getChildrenByName(
getSiteRoot(),
ContentModel.ASSOC_CONTAINS,
siteNames);
List<SiteInfo> result = new ArrayList<SiteInfo>(assocs.size());
for (ChildAssociationRef assoc : assocs)
{
// Ignore any node that is not a "site" type
NodeRef site = assoc.getChildRef();
QName siteClassName = this.directNodeService.getType(site);
if (this.dictionaryService.isSubClass(siteClassName, SiteModel.TYPE_SITE))
{
result.add(createSiteInfo(site));
}
}
return result;
}
@@ -1683,18 +1687,17 @@ public class SiteServiceImpl extends AbstractLifecycleBean implements SiteServic
Set<String> roles = this.permissionService.getSettablePermissions(siteType);
// First use the authority's cached recursive group memberships to answer the question quickly
Set<String> authorityGroups = this.authorityService.getContainingAuthorities(AuthorityType.GROUP,
authorityName, false);
Set<String> authorities = authorityService.getAuthoritiesForUser(authorityName);
for (String role : roles)
{
String roleGroup = getSiteRoleGroup(siteShortName, role, true);
if (authorityGroups.contains(roleGroup))
if (authorities.contains(roleGroup))
{
fullResult.add(roleGroup);
}
}
// Unfortunately, due to direct membership taking precendence, we can't answer the question quickly if more than one role has been inherited
// Unfortunately, due to direct membership taking precedence, we can't answer the question quickly if more than one role has been inherited
if (fullResult.size() <= 1)
{
return fullResult;
@@ -1702,7 +1705,7 @@ public class SiteServiceImpl extends AbstractLifecycleBean implements SiteServic
// Check direct group memberships
List<String> result = new ArrayList<String>(5);
authorityGroups = this.authorityService.getContainingAuthorities(AuthorityType.GROUP,
Set<String> authorityGroups = this.authorityService.getContainingAuthorities(AuthorityType.GROUP,
authorityName, true);
for (String role : roles)
{

File diff suppressed because it is too large.


@@ -273,11 +273,12 @@ public class ScriptSiteService extends BaseScopableProcessorExtension
* List all the sites that the specified user has an explicit membership to.
*
* @param userName user name
* @param size maximum list size
* @return Site[] a list of sites the user has an explicit membership to
*/
public Site[] listUserSites(String userName)
public Site[] listUserSites(String userName, int size)
{
List<SiteInfo> siteInfos = this.siteService.listSites(userName);
List<SiteInfo> siteInfos = this.siteService.listSites(userName, size);
List<Site> sites = new ArrayList<Site>(siteInfos.size());
for (SiteInfo siteInfo : siteInfos)
{
@@ -286,6 +287,17 @@ public class ScriptSiteService extends BaseScopableProcessorExtension
return sites.toArray(new Site[sites.size()]);
}
/**
* List all the sites that the specified user has an explicit membership to.
*
* @param userName user name
* @return Site[] a list of sites the user has an explicit membership to
*/
public Site[] listUserSites(String userName)
{
return listUserSites(userName, 0);
}
/**
* Get a site for a provided site short name.
* <p>


@@ -235,10 +235,10 @@ public class People extends BaseTemplateProcessorExtension implements Initializi
{
ParameterCheck.mandatory("Person", person);
List<TemplateNode> parents;
Set<String> authorities = this.authorityService.getContainingAuthorities(
Set<String> authorities = this.authorityService.getContainingAuthoritiesInZone(
AuthorityType.GROUP,
(String)person.getProperties().get(ContentModel.PROP_USERNAME),
false);
AuthorityService.ZONE_APP_DEFAULT, null, 1000);
parents = new ArrayList<TemplateNode>(authorities.size());
for (String authority : authorities)
{


@@ -189,6 +189,16 @@ public class NodeServiceImpl implements NodeService, VersionModel
return dbNodeService.getRootNode(storeRef);
}
/**
* Delegates to the <code>NodeService</code> used as the version store implementation
*/
@Override
public Set<NodeRef> getAllRootNodes(StoreRef storeRef)
{
return dbNodeService.getAllRootNodes(storeRef);
}
/**
* @throws UnsupportedOperationException always
*/
@@ -557,6 +567,18 @@ public class NodeServiceImpl implements NodeService, VersionModel
return result;
}
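/**
 * Approximates the {@code maxResults} contract by truncating the unrestricted
 * result list; the {@code preload} hint is ignored in the version store.
 */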
@Override
public List<ChildAssociationRef> getChildAssocs(NodeRef nodeRef, QName typeQName, QName qname, int maxResults,
boolean preload) throws InvalidNodeRefException
{
List<ChildAssociationRef> result = getChildAssocs(nodeRef, typeQName, qname);
if (result.size() > maxResults)
{
return result.subList(0, maxResults);
}
return result;
}
/**
* @throws UnsupportedOperationException always
*/


@@ -699,7 +699,8 @@ public class WorkflowServiceImpl implements WorkflowService
// Expand authorities to include associated groups (and parent groups)
List<String> authorities = new ArrayList<String>();
authorities.add(authority);
Set<String> parents = authorityService.getContainingAuthorities(AuthorityType.GROUP, authority, false);
Set<String> parents = authorityService.getContainingAuthoritiesInZone(AuthorityType.GROUP, authority,
AuthorityService.ZONE_APP_DEFAULT, null, -1);
authorities.addAll(parents);
// Retrieve pooled tasks for authorities (from each of the registered


@@ -277,10 +277,14 @@ public interface AuthorityService
public Set<String> getContainedAuthorities(AuthorityType type, String name, boolean immediate);
/**
* Get the authorities that contain the given authority
* Get the authorities that contain the given authority,
* <b>but use {@code getAuthoritiesForUser(userName).contains(authority)}</b> rather than
* {@code getContainingAuthorities(type, userName, false).contains(authority)} or
* use {@link #getContainingAuthoritiesInZone(AuthorityType, String, String, AuthorityFilter, int)}
* <b>as they will be much faster</b>.
*
* For example, this can be used to find out all the authorities that contain a
* user.
* For example, this method can be used to find out all the authorities that contain a
* group.
*
* @param type -
* if not null, limit to the type of authority specified
@@ -294,6 +298,31 @@ public interface AuthorityService
@Auditable(parameters = {"type", "name", "immediate"})
public Set<String> getContainingAuthorities(AuthorityType type, String name, boolean immediate);
/**
* Get a set of authorities with varying filter criteria
*
* @param type
* authority type or null for all types
* @param name
* if non-null, only return those authorities who contain this authority
* @param zoneName
* if non-null, only include authorities in the named zone
* @param filter
* optional callback to apply further filter criteria or null
* @param size
* if greater than zero, the maximum number of results to return. The search strategy
* used varies depending on this number.
* @return a set of authorities
*/
@Auditable(parameters = {"type", "name", "zoneName", "filter", "size"})
public Set<String> getContainingAuthoritiesInZone(AuthorityType type, String name, final String zoneName,
AuthorityFilter filter, int size);
public interface AuthorityFilter
{
boolean includeAuthority(String authority);
}
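A minimal usage sketch for the new method, assuming an injected AuthorityService; the site-group prefix and the cap of 50 are illustrative (SiteServiceImpl above makes the same style of call with ZONE_APP_SHARE):

import java.util.Set;

import org.alfresco.service.cmr.security.AuthorityService;
import org.alfresco.service.cmr.security.AuthorityService.AuthorityFilter;
import org.alfresco.service.cmr.security.AuthorityType;

public class ContainingAuthoritiesExample
{
    public Set<String> siteGroupsFor(AuthorityService authorityService, String userName)
    {
        // Restrict the search to GROUP authorities in the Share zone, keep only
        // site groups, and stop after 50 matches so the implementation can pick
        // a cheap search strategy instead of expanding every membership.
        return authorityService.getContainingAuthoritiesInZone(
                AuthorityType.GROUP, userName, AuthorityService.ZONE_APP_SHARE,
                new AuthorityFilter()
                {
                    public boolean includeAuthority(String authority)
                    {
                        return authority.startsWith("GROUP_site_");
                    }
                }, 50);
    }
}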
/**
* Extract the short name of an authority from its full identifier.
*


@@ -169,6 +169,16 @@ public interface SiteService
*/
PagingResults<SiteInfo> listSites(List<FilterProp> filterProps, List<Pair<QName, Boolean>> sortProps, PagingRequest pagingRequest);
/**
* List all the sites that the specified user has an explicit membership to.
*
* @param userName user name
* @param size list maximum size or zero for all
* @return List<SiteInfo> list of site information
*/
@NotAuditable
List<SiteInfo> listSites(String userName, int size);
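Similarly for the capped listSites overload, a small sketch; the cap of 100 is illustrative, and a size of zero keeps the old list-everything behaviour:

import java.util.List;

import org.alfresco.service.cmr.site.SiteInfo;
import org.alfresco.service.cmr.site.SiteService;

public class ListSitesExample
{
    public void printSites(SiteService siteService, String userName)
    {
        // Ask for at most 100 of the user's sites; pass 0 for no limit.
        List<SiteInfo> sites = siteService.listSites(userName, 100);
        for (SiteInfo site : sites)
        {
            System.out.println(site.getShortName());
        }
    }
}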
/**
* Gets site information based on the short name of a site.
* <p>