Mirror of https://github.com/Alfresco/alfresco-community-repo.git (synced 2025-06-30 18:15:39 +00:00)
41035: Fix for ALF-15225 - qt.length not performing as expected in search.lib.js
41047: RUSSIAN: Translation updates based on EN r40961
41049: GERMAN: Translation updates based on EN r41033. Fixes ALF-15749 and ALF-15720.
41050: SPANISH: Translation updates based on EN r41033. Fixes ALF-15749 and ALF-15720.
41051: FRENCH: Translation updates based on EN r41033. Fixes ALF-15749 and ALF-15720.
41052: ITALIAN: Translation updates based on EN r41033. Fixes ALF-15749 and ALF-15720.
41053: JAPANESE: Translation updates based on EN r41033. Fixes ALF-15749 and ALF-15720.
41054: DUTCH: Translation updates based on EN r41033. Fixes ALF-15749 and ALF-15720.
41055: RUSSIAN: Translation updates based on EN r41033. Fixes ALF-15749 and ALF-15720.
41061: ALF-11214 - IMAP subsystem is not successfully restarted after incorrect modification of IMAP properties via Admin Console
41063: RUSSIAN: Translation updates based on EN r41033 (encoding fixes)
41064: CHINESE: Translation updates based on EN r41033
41073: ALF-15760: Merged V4.0 to V4.1-BUG-FIX (another lost meta-inf revision)
   34416: ALF-12992: Updated weblogic DD for SOLR
41074: ALF-15419 / ALF-14438: Merged V3.4-BUG-FIX to V4.1-BUG-FIX (PARTIAL)
   37373: Merged PATCHES/V3.4.6 to V3.4-BUG-FIX
      36821: ALF-13827 / ALF-14402: Make replicated caches recover from temporary comms failures by flushing when a change in peers is detected - We do not flush caches that replicate via copy (e.g. tickets cache) as these may not be recoverable
      37122: ALF-13919 / ALF-14403: Merged DEV to PATCHES/V3.4.6 - Rework of Dmitry's implementation - Uses dynamic HQL query to retrieve JBPM workflow instances by specified query criteria - WorkflowInstancesGet web script no longer has to iterate over every workflow instance in the database! - DB index added to enable efficient querying by string variable - Hibernate tastic!
      37188: ALF-13919 / ALF-14403: Worked around HQL polymorphism issues by using explicit variable subclass names in from clause
      37204: ALF-13919 / ALF-14403: Fix to date range handling by Dmitry
41077: ALF-10730: Fixed object finder drop-down tree to show parent icon type when icon type not available
41089: ALF-13998: 'No items' error is highlighted in red, even though that is not a severe error.
41109: Part fix for ALF-11297 Disabled test: test-system-build-test has been removed from the test target dependency list
41118: Fixes: ALF-15765 and related issue ALF-15476: - Corrects merge of r34405 and r40159 - Adds people api to cloud proxy config - Cloud/Global Folder picker sites list now universally shows only those sites the user is a member of.
41119: ALF-15419 / ALF-14438: Merged DEV to V4.1-BUG-FIX
   41117: ALF-15419: CLONE Remove JBPM indexes present from upgrades. The schema reference files were updated to contain the JBPM_VARIABLEINSTANCE.IDX_VARINST_STRVAL index.
41130: ALF-15590: FSTR transfer of custom content type with parent cm:content fails - Fix by Dmitry
41131: ALF-15714: Bitrock Uninstaller: Uninstaller doesn't remove all files and folders in *nix - Fix provided by Bitrock
41132: Fix for ALF-14388 - Edit Online option is not supported for '.docm', 'dotm', '.xlsm' files - corrected mimetypes to be lower-case
41150: ALF-13287 Added the handling of UTC represented by "Z" within comparator
41154: Check for authentication errors when validating a ticket; if an error occurs, re-authenticate. ALF-15394
41155: ALF-15569: User with '@' symbol in username cannot invite - note: since Ent 4.0.2(+) by default MT is pre-configured but not enabled
41173: Merged V4.1 to V4.1-BUG-FIX
   41121: Merged BRANCHES/DEV/FEATURES/CLOUD1_CLOUDSYNC to BRANCHES/V4.1:
      41003: CloudSync: ALF-15734 - force unsync (of last SSMN) on target causes repeating pull errors to appear in both logs
      41026: CloudSync: ALF-15734 - force unsync (of last SSMN) on target causes repeating pull errors to appear in both logs
      41039: CloudSync: ALF-15734 - force unsync (of last SSMN) on target causes repeating pull errors to appear in both logs
      41086: CloudSync: ALF-15734 - force unsync (of last SSMN) on target causes repeating pull errors to appear in both logs
   41123: Merged BRANCHES/DEV/FEATURES/CLOUD1_CLOUDSYNC to BRANCHES/V4.1:
      41115: CloudSync: ALF-15734 - force unsync (of last SSMN) on target causes repeating pull errors to appear in both logs
41176: Attempt to debug unit test failure
41181: Store leak in AVMServiceTest.test_ETWOTWO_570() causing unit test failure
41184: ALF-15610: Copy Thai analyzer settings to its many SOLR locations
41194: ALF-11297: re-enable system build tests
41195: ALF-11297 ALF-15807: update activities system build tests after correction of ALF-4832
41201: Fix for ALF-15767 Group query using cm:authorityName
41202: Additional unit tests related to ALF-15731 TYPE:"..." queries no longer work for Lucene on 4.X
41203: Part 1 for ALF-15811 SOLR query increases DocBitSet inefficiently - check it makes any difference
41204: Merged BRANCHES/DEV/BELARUS/V4.1-BUG-FIX-2012_08_15 to BRANCHES/DEV/V4.1-BUG-FIX: (note: merging as-is ... refactor + unit test fix to follow in next commit)
   40926: ALF-12586: Admin Console shows usage as zero - if user is deleted and then re-created (eg. re-synchronized via LDAP or manually)
   40974: ALF-12586: Admin Console shows usage as zero - if user is deleted and then re-created (eg. re-synchronized via LDAP or manually)
41205: ALF-12586: Admin Console shows usage as zero - if user is deleted and then re-created (eg. re-synchronized via LDAP or manually) - review and refactor the proposed/merged fix (see previous commit) - fix unit test so that it accounts for previous content (as per the original issue) - also: add missing test to suite and fix that test to work with the new fix (ie. cleanup previous content, else need to account for it)
41210: Part 2: ALF-14861 SOLR to scale for non-admin users in 100k sites and a subgroup of each of 1000 independent groupings with 1000 subgroups - do not expand authorities for those with the ADMINISTRATOR_ROLE as they can read all anyway
41216: ALF-11297: system build tests need a database cleanup before running
41222: ALF-15740, ALF-14744: Update rule firing broken for content created in Explorer - Old code lurking around that used to use the inline editable aspect to detect events handled by the CreateNodeRuleTrigger was removed from OnContentUpdateRuleTrigger and replaced with a check for ASPECT_NO_CONTENT
41223: Added missing swf.languagedir setting to enterprise alfresco-global.properties
41230: GERMAN: Translation updates based on EN rev41099.
41232: SPANISH: Translation updates based on EN rev41099.
41233: FRENCH: Translation updates based on EN rev41099.
41234: ITALIAN: Translation updates based on EN rev41099.
41235: JAPANESE: Translation updates based on EN rev41099.
41236: DUTCH: Translation updates based on EN rev41099.
41237: RUSSIAN: Translation updates based on EN rev41099.
41239: CHINESE: Translation updates based on EN rev41099.
41254: ALF-15628: Avoid edit online (SPP, WRITE_LOCK) clashing with edit offline (CheckOutCheckInService, READ_ONLY_LOCK) - Rationalization of work by Alex Malinovsky - WebDAVMethod.checkNode() now properly checks whether nodes without WebDAV lock info are writeable - CheckOutCheckInService won't allow checkout of a node with an existing WRITE_LOCK by the same user - they must unlock first - Propagation of correct status codes
41264: ALF-15628: Fix CheckOutCheckInService test failures
41265: ALF-15699: Reverse merged the following, thus downgrading us back to swftools 0.9.1
   40208: ALF-12831: Upgrade to swftools 0.9.2
41266: Rush'n in some translation updates from Gloria
41267: ALF-15628: Fix compilation problem
41269: Merged V3.4-BUG-FIX to V4.1-BUG-FIX (RECORD ONLY)
   41224: ALF-14856: Merged V4.1-BUG-FIX to V3.4-BUG-FIX
   41268: ALF-15459: Merged PATCHES/V4.0.2 to V3.4-BUG-FIX Merged V4.1-BUG-FIX to V3.4-BUG-FIX
41274: ALF-15608: Merged V3.4-BUG-FIX to V4.1-BUG-FIX
   41272: ALF-15567: Allow links to be followed through WebDAV on port 80 using basic auth on XP
41277: ALF-12586: Admin Console shows usage as zero - if user is deleted and then re-created (eg. re-synchronized via LDAP or manually) - fix test fallout after merge/fix (note: failed for PostgreSQL but not MySQL - although fix was not DB-specific)
41278: ALF-15840 Error logged when "No thumbnail present in file" even though this is normal
41284: ALF-14875: Serialize direct permissions after inherited permissions so that they take precedence in any lookups in permission dialogs
41290: JAPANESE: Translation updates based on EN r41099. Fixes: ALF-14565
41296: ALF-15251 CIFS: Checked out document is not marked as locked in CIFS
41299: ALF-15714: Bitrock Uninstaller: Uninstaller doesn't remove all files and folders in *nix - Additional fix provided by Bitrock
41303: Fix for ALF-15799 Under high concurrency load balanced Solr throws an Antlr related NPE - do not skip IO Exceptions
41306: Incremented version revision for 4.1.2
41309: ALF-15827: Added FORMACTION, FORMMETHOD and ACTION HTML attributes to grey list to close security hole (updated Surf libs r1136)
41318: ALF-15857: Lucene FTS indexer opens streams to all documents to be indexed in a transaction simultaneously - Now stream opening is delayed until the point where the document is being written to the index
41322: Fix for ALF-15858 SOLR ACL tracking can stall or miss acls during tracking
41323: Chemistry client java to create test data for ALF-15858, ALF-15782, CLOUD-596, ALF-15753 etc
41326: ALF-15234: IE 8 or IE9 Download .pps as .ppt - Switched the order of the filename headers to better support non-conforming browsers (rfc 5987).
41330: ALF-14875: Reversed r41284 because it didn't solve the problem in the UI. Kev reviewing.
41332: ALF-10688: Can't deactivate an account when alfrescoNtlm follows another authentication subsystem in the authentication chain - Now, if an account is known to be 'mutable' then the enabled flag is read from the mutable authentication service
41337: Merged V3.4-BUG-FIX (3.4.11) to V4.1-BUG-FIX (4.1.2)
   41336: TransformerDebug: Use debug rather than trace when there are transformers but they are all unavailable.
41339: ALF-15840 Error logged when "No thumbnail present in file" even though this is normal - Found some more cases where this is logged as an ERROR
41342: ALF-11087 (Missing icon file: components\images\filetypes\generic-tag-32.png)
41344: ALF-15863 (* search values): Merged HEAD to V4.1-BUG-FIX (4.1.2)
   40849: ALF-12839 "Share - Inconsistency in adding a user or a group into a group" part 2 - Making the users console stop "*" searches, just like the groups console, when the min search length is set to larger than 0.
41346: ALF-15237 - REST API Group children lists username for fullName and displayName
41350: Merge V3.4-BUG-FIX to V4.1-BUG-FIX:
   41065: Disconnect existing CIFS sessions from the same client when a virtual circuit zero session is opened. ALF-13815
   41280: Moved session cleanup config into the base authenticator, added support to passthru/base authentication. ALF-13815
41351: Ported database filesystem changes to fix session disconnect, from V3.4.
41352: Merge V3.4-BUG-FIX to V4.1-BUG-FIX:
   41067: Added session disconnect support to the Alfresco CIFS authenticator. ALF-13815.
   41281: Added session cleanup support to passthru authenticator, session cleanup config moved to base class. ALF-13815
41353: Update svn:mergeinfo
41355: Fix for ALF-15869 - "Site Content" dashlet shows all documents from all the sites in Alfresco Share - mistakenly did a record-only merge of this from 4.1->4.1.1
41363: Fix for ALF-14875 - Manage permissions shows the permission 'No privileges' for All Other Users - reworked the permissions dialog and permissions panels to correctly handle multiple permissions on a special permissions group such as GROUP_EVERYONE - now correctly gets/sets permissions for GROUP_EVERYONE - this also fixes ALF-12014, in that it allows custom SiteXYZ permissions to work correctly again (will need manual backport for 3.4.X though) - removed hacks related to previous attempts to fix the above issue - added lots of comments around relevant sections to aid future refactoring or understanding
41371: Merged BRANCHES/DEV/V3.4-BUG-FIX to BRANCHES/DEV/V4.1-BUG-FIX (RECORD ONLY)
   41370: Fix for ALF-12014 - Share - Custom role causes incorrect display of permissions. Manual merge of changes to 4.1.2
41399: ALF-13438: java.lang.OutOfMemoryError appears in alfresco log when trying to create a few thousand sites consistently. - The v4.0 parent assocs cache used to store every version of every node's parent assocs, so as we added a node to 60,000 sites, we retained all previous versions of the user's parent association map - After creating 10,000 sites, the cache size was about 4GB, containing about 2 million parents - Now we use a specialized class that allows an upper limit to be set on the total number of cached parents as well as children. - Because the cache is keyed by node transaction ID, the cache can be non-clustered and non-transactional - Once the average number of parents is more than 8, the cache will drop its oldest entries - ParentAssocsInfo also now uses a compact TreeMap instead of a HashMap
41401: ALF-13438: Small correction
41406: ALF-13438: Fix failing unit tests - Removed unused node.parentAssocsSharedCache and corrected node.childByNameCache not to reference it! - Because parent assocs are cached by transaction ID, we must always invalidate them on an in-transaction version increment
41409: Logging of unexpected errors on FTP
41411: Fix possible FTP data session leak if client mixes PORT and PASV commands. ALF-15126
41412: ALF-15845: Clone for Hotfix: Word document on Windows via CIFS becomes locked (Read Only) when network drops temporarily
41415: Fixes: ALF-15649: Removes country locale from files with it hard coded.
41419: ALF-14599: Removed ftp.ipv6.enabled from enterprise overlay and bundles
41426: ALF-15845 Clone for Hotfix: Word document on Windows via CIFS becomes locked (Read Only) when network drops temporarily. Roll back changes to DiskDriver interface in favour of hacking NetworkFile.
41440: JAPANESE: Translation update based on EN r41099
41446: ALF-13091: Remove unnecessary bean post processors from subsystem context and remove CXF's Jsr250BeanPostProcessor.
41458: RUSSIAN: Further translation updates following linguistic review.
41459: ALF-15897: Revert revision 41446, an attempted fix for ALF-13091
41487: Fix for ALF-15910 SOLR - Add index warming and filter pointless entries from the filter cache. Fix for ALF-15851 Too many live instances of SolrIndexSearcher at one time resulting in OOM - Alfresco 4.1.1 - build 151
41506: Merged DEV to V4.1-BUG-FIX
   41505: ALF-15879: PostgreSQL: upgrade 2.2.8 (577) -> 3.4.10 (703) -> 4.1.1 (159) failed. - Make dropping the "store_id" index and "alf_node_store_id_key" constraint optional in the 4.1.1 upgrade script, because clean 3.4 has the "store_id" index and doesn't have the "alf_node_store_id_key" constraint, but 3.4 upgraded from 2.2 has the "alf_node_store_id_key" constraint and doesn't have the "store_id" index.
41531: Fixed ALF-15687, so that any user (except for Admin) won’t be able to retrieve any other user’s preferences via REST API. Also, updated the preferences controllers for the Post and Delete.
41539: ALF-15899: Inbound email does not support multiple recipient folders - Fix by Dmitry Vaserin
41540: Merged V3.4-BUG-FIX to V4.1-BUG-FIX
   40794: Merged DEV to V3.4-BUG-FIX
      40793: ALF-13752 Saving Word (mac 2011) documents via CIFS into a folder with Versionable rules on Mac OS X Lion (Fix for 3.4). A check was added in ContentDiskDriver.renameFile() for whether a node is in the archive.
   40806: Fix for ALF-9787 - Hiding sites in Share with permissionsDefinitions.xml [creates a permissions error in the blog portion of the site]
   40922: Merged DEV to V3.4-BUG-FIX (reviewed by Frederik)
      40488: ALF-13357: Empty outcome when a timer is invoked. A check for transitionName was added to AlfrescoTimer to support custom transitions.
   40940: ALF-15696: Remove svnkit.jar - makes the build fail if the version of installed command line svn is 1.7. Using commandline binding for <svn> Ant task to be consistent with other parts of the build.
   41066: Various fixes to the database filesystem for session disconnect cleanup.
   41068: Updates to the Alfresco filesystem for session cleanup. ALF-13815
   41301: Merged PATCHES/V3.4.9 to V3.4-BUG-FIX
      40966: ALF-15846 / ALF-15709: OOM on cascading reindex - Avoid buffering of all the affected PATH documents in memory - used the set of 'visited' paths to delay generation to the final flush.
      41044: ALF-15847 / ALF-15748: Lucene indexer can make sub-optimal cascade reindex decisions during an LDAP sync. - When a user in 6 groups was removed from a massive group, the massive group was getting cascade reindexed rather than the user - Logic adjusted as follows: for nodes with 5 or fewer parents, we always cascade reindex the child node; for nodes with more than 5 parents, we cascade reindex the parent node if it has fewer children than the child has parents.
   41395: ALF-15715: Unable to edit properties whilst transformation in progress - Delayed all changes that would potentially lock the parent node row while the thumbnail is generating. - Reorganized RenditionContext to lazily instantiate its destination node. - Changed render destination to use a temporary ContentWriter until the render is complete.
   41396: ALF-15715: Fix for failing unit tests.
   41413: Fix for issue where the user calendar remote API was generating invalid date searches for the user dashlet calendar.
   41509: Merged DEV to V3.4-BUG-FIX
      41507: ALF-12833: Issues installing Alfresco on WebSphere when the server doesn't have internet access. Context-param which sets the "http://apache.org/xml/features/nonvalidating/load-external-dtd" feature on the SAXParser to false if this parameter is false
   41510: ALF-15171: After addition of a secondary parent association to a container, not all index paths were being regenerated due to a logic error
   41512: ALF-15919: Merged PATCHES/V3.4.10 to V3.4-BUG-FIX
      41091: ALF-15723: Merged DEV to PATCHES/V3.4.10
         26579: Switch the transformer to use Tika

git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@41543 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
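The ALF-13438 entries above describe the replacement parent-assocs cache: it is keyed by node transaction ID, carries an upper limit on the total number of cached parents, and drops its oldest entries once over that limit. As a rough illustration only (hypothetical class and method names; not the actual ParentAssocsInfo or cache code), a minimal sketch of such a bounded cache might look like this:

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    /**
     * Sketch of a parent-assocs cache bounded by the total number of cached
     * parent associations. Entries are keyed by node transaction ID; the
     * oldest entries are evicted once the total exceeds the cap.
     */
    class BoundedParentAssocsCache<K>
    {
        private final int maxParents;
        private int totalParents;
        // Insertion-ordered map: iteration starts with the oldest entry
        private final LinkedHashMap<K, List<Long>> cache = new LinkedHashMap<K, List<Long>>();

        BoundedParentAssocsCache(int maxParents)
        {
            this.maxParents = maxParents;
        }

        synchronized void put(K txnKey, List<Long> parentAssocIds)
        {
            List<Long> previous = cache.put(txnKey, parentAssocIds);
            totalParents += parentAssocIds.size() - (previous == null ? 0 : previous.size());
            // Evict oldest entries until we are back under the cap
            Iterator<Map.Entry<K, List<Long>>> it = cache.entrySet().iterator();
            while (totalParents > maxParents && it.hasNext())
            {
                Map.Entry<K, List<Long>> oldest = it.next();
                if (oldest.getKey().equals(txnKey))
                {
                    continue;               // never evict the entry just added
                }
                totalParents -= oldest.getValue().size();
                it.remove();
            }
        }

        synchronized List<Long> get(K txnKey)
        {
            return cache.get(txnKey);
        }
    }

Keying by transaction ID is what allows the real cache to be non-clustered and non-transactional, as the log notes: entries for stale transactions simply age out.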
1408 lines
69 KiB
Java
/*
 * Copyright (C) 2005-2012 Alfresco Software Limited.
 *
 * This file is part of Alfresco
 *
 * Alfresco is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Alfresco is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
 */
package org.alfresco.repo.node;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Random;
import java.util.Set;

import junit.framework.TestCase;

import org.alfresco.model.ContentModel;
import org.alfresco.repo.cache.SimpleCache;
import org.alfresco.repo.domain.node.Node;
import org.alfresco.repo.domain.node.NodeDAO;
import org.alfresco.repo.domain.node.NodeEntity;
import org.alfresco.repo.domain.node.NodeVersionKey;
import org.alfresco.repo.domain.qname.QNameDAO;
import org.alfresco.repo.domain.query.CannedQueryDAO;
import org.alfresco.repo.node.NodeServicePolicies.BeforeCreateNodePolicy;
import org.alfresco.repo.node.NodeServicePolicies.BeforeSetNodeTypePolicy;
import org.alfresco.repo.node.NodeServicePolicies.BeforeUpdateNodePolicy;
import org.alfresco.repo.node.NodeServicePolicies.OnCreateChildAssociationPolicy;
import org.alfresco.repo.node.NodeServicePolicies.OnCreateNodePolicy;
import org.alfresco.repo.node.NodeServicePolicies.OnSetNodeTypePolicy;
import org.alfresco.repo.node.NodeServicePolicies.OnUpdateNodePolicy;
import org.alfresco.repo.node.NodeServicePolicies.OnUpdatePropertiesPolicy;
import org.alfresco.repo.node.db.NodeHierarchyWalker;
import org.alfresco.repo.node.db.NodeHierarchyWalker.VisitedNode;
import org.alfresco.repo.policy.BehaviourFilter;
import org.alfresco.repo.policy.JavaBehaviour;
import org.alfresco.repo.policy.Policy;
import org.alfresco.repo.policy.PolicyComponent;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.alfresco.repo.transaction.RetryingTransactionHelper.RetryingTransactionCallback;
import org.alfresco.service.ServiceRegistry;
import org.alfresco.service.cmr.repository.AssociationRef;
import org.alfresco.service.cmr.repository.ChildAssociationRef;
import org.alfresco.service.cmr.repository.DuplicateChildNodeNameException;
import org.alfresco.service.cmr.repository.MLText;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeRef.Status;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.cmr.repository.Path;
import org.alfresco.service.cmr.repository.StoreRef;
import org.alfresco.service.namespace.NamespaceService;
import org.alfresco.service.namespace.QName;
import org.alfresco.service.transaction.TransactionService;
import org.alfresco.util.ApplicationContextHelper;
import org.alfresco.util.GUID;
import org.alfresco.util.Pair;
import org.alfresco.util.PropertyMap;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.extensions.surf.util.I18NUtil;

/**
 * Tests basic {@link NodeService} functionality
 *
 * @author Derek Hulley
 * @since 4.0
 */
public class NodeServiceTest extends TestCase
{
    public static final String NAMESPACE = "http://www.alfresco.org/test/BaseNodeServiceTest";
    public static final String TEST_PREFIX = "test";
    public static final QName TYPE_QNAME_TEST = QName.createQName(NAMESPACE, "multiprop");
    public static final QName PROP_QNAME_NAME = QName.createQName(NAMESPACE, "name");
    public static final QName ASSOC_QNAME_CHILDREN = QName.createQName(NAMESPACE, "child");

    private static ApplicationContext ctx = ApplicationContextHelper.getApplicationContext();

    private static Log logger = LogFactory.getLog(NodeServiceTest.class);

    protected ServiceRegistry serviceRegistry;
    protected NodeService nodeService;
    protected NodeDAO nodeDAO;
    private TransactionService txnService;
    private PolicyComponent policyComponent;
    private CannedQueryDAO cannedQueryDAO;
    private SimpleCache<Serializable, Serializable> nodesCache;
    private SimpleCache<Serializable, Serializable> propsCache;
    private SimpleCache<Serializable, Serializable> aspectsCache;

    /** populated during setup */
    protected NodeRef rootNodeRef;

    @SuppressWarnings("unchecked")
    @Override
    protected void setUp() throws Exception
    {
        I18NUtil.setLocale(null);

        serviceRegistry = (ServiceRegistry) ctx.getBean(ServiceRegistry.SERVICE_REGISTRY);
        nodeService = serviceRegistry.getNodeService();
        nodeDAO = (NodeDAO) ctx.getBean("nodeDAO");
        txnService = serviceRegistry.getTransactionService();
        policyComponent = (PolicyComponent) ctx.getBean("policyComponent");
        cannedQueryDAO = (CannedQueryDAO) ctx.getBean("cannedQueryDAO");

        // Get the caches for later testing
        nodesCache = (SimpleCache<Serializable, Serializable>) ctx.getBean("node.nodesSharedCache");
        propsCache = (SimpleCache<Serializable, Serializable>) ctx.getBean("node.propertiesSharedCache");
        aspectsCache = (SimpleCache<Serializable, Serializable>) ctx.getBean("node.aspectsSharedCache");

        // Clear the caches to remove fluff
        nodesCache.clear();
        propsCache.clear();
        aspectsCache.clear();

        AuthenticationUtil.setRunAsUserSystem();

        // create a first store directly
        RetryingTransactionCallback<NodeRef> createStoreWork = new RetryingTransactionCallback<NodeRef>()
        {
            public NodeRef execute()
            {
                StoreRef storeRef = nodeService.createStore(
                        StoreRef.PROTOCOL_WORKSPACE,
                        "Test_" + System.nanoTime());
                return nodeService.getRootNode(storeRef);
            }
        };
        rootNodeRef = txnService.getRetryingTransactionHelper().doInTransaction(createStoreWork);
    }

    /**
     * Clean up the test thread
     */
    @Override
    protected void tearDown()
    {
        AuthenticationUtil.clearCurrentSecurityContext();
        I18NUtil.setLocale(null);
    }
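
    /**
     * Sanity check: the fixture must have created a store with a root node.
     */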
    public void testSetUp() throws Exception
    {
        assertNotNull(rootNodeRef);
    }
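
    /**
     * Checks sys:locale handling: the default locale on the root node, explicit
     * and thread-default locales at create time, MLText storage against the
     * thread locale, and get/set/remove of the node's own locale.
     */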
    public void testLocaleSupport() throws Exception
    {
        // Ensure that the root node has the default locale
        Locale locale = (Locale) nodeService.getProperty(rootNodeRef, ContentModel.PROP_LOCALE);
        assertNotNull("Locale property must occur on every node", locale);
        assertEquals("Expected default locale on the root node", I18NUtil.getLocale(), locale);
        assertTrue("Every node must have sys:localized", nodeService.hasAspect(rootNodeRef, ContentModel.ASPECT_LOCALIZED));

        // Now switch to a specific locale and create a new node
        I18NUtil.setLocale(Locale.CANADA_FRENCH);

        // Create a node using an explicit locale
        NodeRef nodeRef1 = nodeService.createNode(
                rootNodeRef,
                ContentModel.ASSOC_CHILDREN,
                QName.createQName(NamespaceService.CONTENT_MODEL_1_0_URI, getName()),
                ContentModel.TYPE_CONTAINER,
                Collections.singletonMap(ContentModel.PROP_LOCALE, (Serializable)Locale.GERMAN)).getChildRef();
        assertTrue("Every node must have sys:localized", nodeService.hasAspect(nodeRef1, ContentModel.ASPECT_LOCALIZED));
        assertEquals(
                "Didn't set the explicit locale during create. ",
                Locale.GERMAN, nodeService.getProperty(nodeRef1, ContentModel.PROP_LOCALE));

        // Create a node using the thread's locale
        NodeRef nodeRef2 = nodeService.createNode(
                rootNodeRef,
                ContentModel.ASSOC_CHILDREN,
                QName.createQName(NamespaceService.CONTENT_MODEL_1_0_URI, getName()),
                ContentModel.TYPE_CONTAINER).getChildRef();
        assertTrue("Every node must have sys:localized", nodeService.hasAspect(nodeRef2, ContentModel.ASPECT_LOCALIZED));
        assertEquals(
                "Didn't set the locale during create. ",
                Locale.CANADA_FRENCH, nodeService.getProperty(nodeRef2, ContentModel.PROP_LOCALE));

        // Switch Locale and modify ml:text property
        I18NUtil.setLocale(Locale.CHINESE);
        nodeService.setProperty(nodeRef2, ContentModel.PROP_DESCRIPTION, "Chinese description");
        I18NUtil.setLocale(Locale.FRENCH);
        nodeService.setProperty(nodeRef2, ContentModel.PROP_DESCRIPTION, "French description");

        // Expect that we have MLText (if we are ML aware)
        boolean wasMLAware = MLPropertyInterceptor.setMLAware(true);
        try
        {
            MLText checkDescription = (MLText) nodeService.getProperty(nodeRef2, ContentModel.PROP_DESCRIPTION);
            assertEquals("Chinese description", checkDescription.getValue(Locale.CHINESE));
            assertEquals("French description", checkDescription.getValue(Locale.FRENCH));
        }
        finally
        {
            MLPropertyInterceptor.setMLAware(wasMLAware);
        }
        // But the node locale must not have changed
        assertEquals(
                "Node modification should not affect node locale. ",
                Locale.CANADA_FRENCH, nodeService.getProperty(nodeRef2, ContentModel.PROP_LOCALE));

        // Now explicitly set the node's locale
        nodeService.setProperty(nodeRef2, ContentModel.PROP_LOCALE, Locale.ITALY);
        assertEquals(
                "Node locale must be settable. ",
                Locale.ITALY, nodeService.getProperty(nodeRef2, ContentModel.PROP_LOCALE));
        // But mltext must be unchanged
        assertEquals(
                "Canada-French must be closest to French. ",
                "French description", nodeService.getProperty(nodeRef2, ContentModel.PROP_DESCRIPTION));

        // Finally, ensure that setting the Locale to 'null' takes the node back to its original locale
        nodeService.setProperty(nodeRef2, ContentModel.PROP_LOCALE, null);
        assertEquals(
                "Node locale set to 'null' does nothing. ",
                Locale.ITALY, nodeService.getProperty(nodeRef2, ContentModel.PROP_LOCALE));
        nodeService.removeProperty(nodeRef2, ContentModel.PROP_LOCALE);
        assertEquals(
                "Node locale removal does nothing. ",
                Locale.ITALY, nodeService.getProperty(nodeRef2, ContentModel.PROP_LOCALE));

        // Mass-set the properties, changing the locale in the process
        Map<QName, Serializable> props = nodeService.getProperties(nodeRef2);
        props.put(ContentModel.PROP_LOCALE, Locale.GERMAN);
        nodeService.setProperties(nodeRef2, props);
        assertEquals(
                "Node locale not set in setProperties(). ",
                Locale.GERMAN, nodeService.getProperty(nodeRef2, ContentModel.PROP_LOCALE));
    }

    /**
     * Creates a chain of parent-child nodes to fill the given array of nodes
     *
     * @param workspaceRootNodeRef the store to use
     * @param liveNodeRefs the node array to fill
     */
    private void buildNodeHierarchy(final NodeRef workspaceRootNodeRef, final NodeRef[] liveNodeRefs)
    {
        RetryingTransactionCallback<Void> setupCallback = new RetryingTransactionCallback<Void>()
        {
            @Override
            public Void execute() throws Throwable
            {
                Map<QName, Serializable> props = new HashMap<QName, Serializable>(3);
                props.put(ContentModel.PROP_NAME, "depth-" + 0 + "-" + GUID.generate());
                liveNodeRefs[0] = nodeService.createNode(
                        workspaceRootNodeRef,
                        ContentModel.ASSOC_CHILDREN,
                        QName.createQName(NAMESPACE, "depth-" + 0),
                        ContentModel.TYPE_FOLDER,
                        props).getChildRef();
                for (int i = 1; i < liveNodeRefs.length; i++)
                {
                    props.put(ContentModel.PROP_NAME, "depth-" + i);
                    liveNodeRefs[i] = nodeService.createNode(
                            liveNodeRefs[i-1],
                            ContentModel.ASSOC_CONTAINS,
                            QName.createQName(NAMESPACE, "depth-" + i),
                            ContentModel.TYPE_FOLDER,
                            props).getChildRef();
                }
                return null;
            }
        };
        txnService.getRetryingTransactionHelper().doInTransaction(setupCallback);
    }

    public void testRootAspect() throws Exception
    {
        final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        final NodeRef[] nodes = new NodeRef[6];
        buildNodeHierarchy(workspaceRootNodeRef, nodes);

        Set<NodeRef> allRootNodes = nodeService.getAllRootNodes(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        int initialNumRootNodes = allRootNodes.size();

        nodeService.addAspect(nodes[1], ContentModel.ASPECT_ROOT, null);
        nodeService.addAspect(nodes[3], ContentModel.ASPECT_ROOT, null);
        nodeService.addAspect(nodes[4], ContentModel.ASPECT_ROOT, null);

        allRootNodes = nodeService.getAllRootNodes(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        assertEquals("", 3, allRootNodes.size() - initialNumRootNodes);
        List<Path> paths = nodeService.getPaths(nodes[5], false);
        assertEquals("", 4, paths.size());

        nodeService.removeAspect(nodes[3], ContentModel.ASPECT_ROOT);
        allRootNodes = nodeService.getAllRootNodes(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        assertEquals("", 2, allRootNodes.size() - initialNumRootNodes);
        paths = nodeService.getPaths(nodes[5], false);
        for (Path path : paths)
        {
            System.out.println("Path = " + path.toString());
        }
        assertEquals("", 3, paths.size());
    }

    /**
     * Tests that two separate node trees can be deleted concurrently at the database level.
     * This is not a concurrent thread issue; instead we delete a hierarchy and hold the txn
     * open while we delete another in a new txn, thereby testing that DB locks don't prevent
     * concurrent deletes.
     * <p/>
     * See: <a href="https://issues.alfresco.com/jira/browse/ALF-5714">ALF-5714</a>
     *
     * Note: if this test hangs for MySQL then check if 'innodb_locks_unsafe_for_binlog = true' (and restart MySQL + test)
     */
    public void testConcurrentArchive() throws Exception
    {
        final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        final NodeRef[] nodesPrimer = new NodeRef[1];
        buildNodeHierarchy(workspaceRootNodeRef, nodesPrimer);
        final NodeRef[] nodesOne = new NodeRef[10];
        buildNodeHierarchy(workspaceRootNodeRef, nodesOne);
        final NodeRef[] nodesTwo = new NodeRef[10];
        buildNodeHierarchy(workspaceRootNodeRef, nodesTwo);

        // Prime the root of the archive store (first child adds inherited ACL)
        nodeService.deleteNode(nodesPrimer[0]);

        RetryingTransactionCallback<Void> outerCallback = new RetryingTransactionCallback<Void>()
        {
            @Override
            public Void execute() throws Throwable
            {
                // Delete the first hierarchy
                nodeService.deleteNode(nodesOne[0]);
                // Keep the txn hanging around to maintain DB locks
                // and start a second transaction to delete another hierarchy
                RetryingTransactionCallback<Void> innerCallback = new RetryingTransactionCallback<Void>()
                {
                    @Override
                    public Void execute() throws Throwable
                    {
                        nodeService.deleteNode(nodesTwo[0]);
                        return null;
                    }
                };
                txnService.getRetryingTransactionHelper().doInTransaction(innerCallback, false, true);
                return null;
            }
        };
        txnService.getRetryingTransactionHelper().doInTransaction(outerCallback, false, true);
    }

    /**
     * Tests archive and restore of simple hierarchy, checking that references and IDs are
     * used correctly.
     */
    public void testArchiveAndRestore()
    {
        // First create a node structure (a very simple one) and record the references and IDs
        final NodeRef[] liveNodeRefs = new NodeRef[10];
        final NodeRef[] archivedNodeRefs = new NodeRef[10];

        final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        final NodeRef archiveRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_ARCHIVE_SPACESSTORE);

        buildNodeHierarchy(workspaceRootNodeRef, liveNodeRefs);

        // Get the node status details
        Long txnIdCreate = null;
        for (int i = 0; i < liveNodeRefs.length; i++)
        {
            StoreRef archivedStoreRef = archiveRootNodeRef.getStoreRef();
            archivedNodeRefs[i] = new NodeRef(archivedStoreRef, liveNodeRefs[i].getId());

            Status liveStatus = nodeService.getNodeStatus(liveNodeRefs[i]);
            Status archivedStatus = nodeService.getNodeStatus(archivedNodeRefs[i]);

            // Check that live node statuses are correct
            assertNotNull("'Live' node " + i + " status does not exist.", liveStatus);
            assertFalse("'Live' node " + i + " should not be deleted", liveStatus.isDeleted());
            assertNull("'Archived' node " + i + " should not (yet) exist.", archivedStatus);

            // Nodes in the hierarchy must be in the same txn
            if (txnIdCreate == null)
            {
                txnIdCreate = liveStatus.getDbTxnId();
            }
            else
            {
                // Make sure that the DB Txn ID is the same
                assertEquals(
                        "DB TXN ID should have been the same for the hierarchy. ",
                        txnIdCreate, liveStatus.getDbTxnId());
            }
        }

        // Archive the top-level node
        nodeService.deleteNode(liveNodeRefs[0]);

        // Recheck the nodes and make sure that all the 'live' nodes are deleted
        Long txnIdDelete = null;
        for (int i = 0; i < liveNodeRefs.length; i++)
        {
            Status liveStatus = nodeService.getNodeStatus(liveNodeRefs[i]);
            Status archivedStatus = nodeService.getNodeStatus(archivedNodeRefs[i]);

            // Check that the ghosted nodes are marked as deleted and the archived nodes are not
            assertNotNull("'Live' node " + i + " status does not exist.", liveStatus);
            assertTrue("'Live' node " + i + " should be deleted (ghost entries)", liveStatus.isDeleted());
            assertNotNull("'Archived' node " + i + " does not exist.", archivedStatus);
            assertFalse("'Archived' node " + i + " should be undeleted", archivedStatus.isDeleted());

            // Check that both old (ghosted deletes) and new nodes are in the same txn
            if (txnIdDelete == null)
            {
                txnIdDelete = liveStatus.getDbTxnId();
            }
            else
            {
                // Make sure that the DB Txn ID is the same
                assertEquals(
                        "DB TXN ID should have been the same for the deleted (ghost) nodes. ",
                        txnIdDelete, liveStatus.getDbTxnId());
            }
            assertEquals(
                    "DB TXN ID should be the same for deletes across the hierarchy",
                    txnIdDelete, archivedStatus.getDbTxnId());
        }

        // Restore the top-level node
        nodeService.restoreNode(archivedNodeRefs[0], workspaceRootNodeRef, null, null);

        // Recheck the nodes and make sure that all the 'archived' nodes are deleted and the 'live' nodes are back
        Long txnIdRestore = null;
        for (int i = 0; i < liveNodeRefs.length; i++)
        {
            Status liveStatus = nodeService.getNodeStatus(liveNodeRefs[i]);
            StoreRef archivedStoreRef = archiveRootNodeRef.getStoreRef();
            archivedNodeRefs[i] = new NodeRef(archivedStoreRef, liveNodeRefs[i].getId());
            Status archivedStatus = nodeService.getNodeStatus(archivedNodeRefs[i]);

            // Check that the archived nodes are now the ghosts (deleted) and the live nodes are back
            assertNotNull("'Live' node " + i + " status does not exist.", liveStatus);
            assertFalse("'Live' node " + i + " should not be deleted", liveStatus.isDeleted());
            assertNotNull("'Archived' node " + i + " does not exist.", archivedStatus);
            assertTrue("'Archived' node " + i + " should be deleted (ghost entry)", archivedStatus.isDeleted());

            // Check that both old (ghosted deletes) and new nodes are in the same txn
            if (txnIdRestore == null)
            {
                txnIdRestore = liveStatus.getDbTxnId();
            }
            else
            {
                // Make sure that the DB Txn ID is the same
                assertEquals(
                        "DB TXN ID should have been the same for the restored nodes. ",
                        txnIdRestore, liveStatus.getDbTxnId());
            }
            assertEquals(
                    "DB TXN ID should be the same for the ex-archived (now-ghost) nodes. ",
                    txnIdRestore, archivedStatus.getDbTxnId());
        }
    }
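
    /**
     * Checks that looking up an association by an ID that doesn't exist returns null.
     */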
    public void testGetAssocById()
    {
        // Get a node association that doesn't exist
        AssociationRef assocRef = nodeService.getAssoc(Long.MAX_VALUE);
        assertNull("Should get null for missing ID of association. ", assocRef);
    }
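
    /**
     * Checks that creating a sibling with a duplicate cm:name fails with
     * {@link DuplicateChildNodeNameException}.
     */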
    public void testDuplicateChildNodeName()
    {
        final NodeRef[] liveNodeRefs = new NodeRef[3];
        final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        buildNodeHierarchy(workspaceRootNodeRef, liveNodeRefs);

        // Get the name of the last node
        final String lastName = (String) nodeService.getProperty(liveNodeRefs[2], ContentModel.PROP_NAME);
        // Now create a node with the same name
        RetryingTransactionCallback<NodeRef> newNodeCallback = new RetryingTransactionCallback<NodeRef>()
        {
            @Override
            public NodeRef execute() throws Throwable
            {
                Map<QName, Serializable> props = new HashMap<QName, Serializable>(3);
                props.put(ContentModel.PROP_NAME, lastName);
                return nodeService.createNode(
                        liveNodeRefs[1],
                        ContentModel.ASSOC_CONTAINS,
                        QName.createQName(NAMESPACE, "duplicate"),
                        ContentModel.TYPE_FOLDER,
                        props).getChildRef();
            }
        };
        try
        {
            txnService.getRetryingTransactionHelper().doInTransaction(newNodeCallback);
            fail("Duplicate child node name not detected.");
        }
        catch (DuplicateChildNodeNameException e)
        {
            // Expected
        }
    }
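
    /**
     * Checks that getChildAssocs honours the maximum-results limit,
     * both with and without preloading.
     */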
    public void testGetChildren_Limited()
    {
        // Create a node and loads of children
        final NodeRef[] liveNodeRefs = new NodeRef[10];
        final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);

        buildNodeHierarchy(workspaceRootNodeRef, liveNodeRefs);

        // Hook 3rd and subsequent children into 1st child
        for (int i = 2; i < liveNodeRefs.length; i++)
        {
            nodeService.addChild(
                    liveNodeRefs[0],
                    liveNodeRefs[i],
                    ContentModel.ASSOC_CONTAINS,
                    QName.createQName(NAMESPACE, "secondary"));
        }

        // Do limited queries each time
        for (int i = 1; i < liveNodeRefs.length; i++)
        {
            List<ChildAssociationRef> childAssocRefs = nodeService.getChildAssocs(liveNodeRefs[0], null, null, i, true);
            assertEquals("Expected exact number of child assocs", i, childAssocRefs.size());
        }

        // Repeat, but don't preload
        for (int i = 1; i < liveNodeRefs.length; i++)
        {
            List<ChildAssociationRef> childAssocRefs = nodeService.getChildAssocs(liveNodeRefs[0], null, null, i, false);
            assertEquals("Expected exact number of child assocs", i, childAssocRefs.size());
        }
    }
|
|
|
|
/**
|
|
* Checks that the node caches react correctly when a node is deleted
|
|
*/
|
|
public void testCaches_DeleteNode()
|
|
{
|
|
final NodeRef[] liveNodeRefs = new NodeRef[10];
|
|
final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
|
|
|
|
buildNodeHierarchy(workspaceRootNodeRef, liveNodeRefs);
|
|
nodeService.addAspect(liveNodeRefs[3], ContentModel.ASPECT_TEMPORARY, null);
|
|
|
|
// Create a child under node 2
|
|
Map<QName, Serializable> props = new HashMap<QName, Serializable>(3);
|
|
props.put(ContentModel.PROP_NAME, "Secondary");
|
|
NodeRef secondaryNodeRef = nodeService.createNode(
|
|
liveNodeRefs[2],
|
|
ContentModel.ASSOC_CONTAINS,
|
|
QName.createQName(NAMESPACE, "secondary"),
|
|
ContentModel.TYPE_FOLDER,
|
|
props).getChildRef();
|
|
// Make it a child of node 3
|
|
nodeService.addChild(liveNodeRefs[3], secondaryNodeRef, ContentModel.ASSOC_CONTAINS, QName.createQName(NAMESPACE, "secondary"));
|
|
// Make it a child of node 4
|
|
nodeService.addChild(liveNodeRefs[4], secondaryNodeRef, ContentModel.ASSOC_CONTAINS, QName.createQName(NAMESPACE, "secondary"));
|
|
|
|
// Check
|
|
List<ChildAssociationRef> parentAssocsPre = nodeService.getParentAssocs(secondaryNodeRef);
|
|
assertEquals("Incorrect number of parent assocs", 3, parentAssocsPre.size());
|
|
|
|
// Delete node 3 (should affect 2 of the parent associations);
|
|
nodeService.deleteNode(liveNodeRefs[3]);
|
|
|
|
// Check
|
|
List<ChildAssociationRef> parentAssocsPost = nodeService.getParentAssocs(secondaryNodeRef);
|
|
assertEquals("Incorrect number of parent assocs", 1, parentAssocsPost.size());
|
|
}
|
|
|
|
/**
|
|
* Checks that file renames are handled when getting children
|
|
*/
|
|
public void testCaches_RenameNode()
|
|
{
|
|
final NodeRef[] nodeRefs = new NodeRef[2];
|
|
final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
|
|
buildNodeHierarchy(workspaceRootNodeRef, nodeRefs);
|
|
|
|
// What is the name of the first child?
|
|
String name = (String) nodeService.getProperty(nodeRefs[1], ContentModel.PROP_NAME);
|
|
// Now query for it
|
|
NodeRef nodeRefCheck = nodeService.getChildByName(nodeRefs[0], ContentModel.ASSOC_CONTAINS, name);
|
|
assertNotNull("Did not find node by name", nodeRefCheck);
|
|
assertEquals("Node found was not correct", nodeRefs[1], nodeRefCheck);
|
|
|
|
// Rename the node
|
|
nodeService.setProperty(nodeRefs[1], ContentModel.PROP_NAME, "New Name");
|
|
// Should find nothing
|
|
nodeRefCheck = nodeService.getChildByName(nodeRefs[0], ContentModel.ASSOC_CONTAINS, name);
|
|
assertNull("Should not have found anything", nodeRefCheck);
|
|
|
|
// Add another child with the same original name
|
|
NodeRef newChildNodeRef = nodeService.createNode(
|
|
nodeRefs[0],
|
|
ContentModel.ASSOC_CONTAINS,
|
|
QName.createQName(NAMESPACE, name),
|
|
ContentModel.TYPE_FOLDER,
|
|
Collections.singletonMap(ContentModel.PROP_NAME, (Serializable)name)).getChildRef();
|
|
// We should find this new node when looking for the name
|
|
nodeRefCheck = nodeService.getChildByName(nodeRefs[0], ContentModel.ASSOC_CONTAINS, name);
|
|
assertNotNull("Did not find node by name", nodeRefCheck);
|
|
assertEquals("Node found was not correct", newChildNodeRef, nodeRefCheck);
|
|
}
|
|
|
|
/**
|
|
* Looks for a key that contains the toString() of the value
|
|
*/
|
|
private Object findCacheValue(SimpleCache<Serializable, Serializable> cache, Serializable key)
|
|
{
|
|
Collection<Serializable> keys = cache.getKeys();
|
|
for (Serializable keyInCache : keys)
|
|
{
|
|
String keyInCacheStr = keyInCache.toString();
|
|
String keyStr = key.toString();
|
|
if (keyInCacheStr.endsWith(keyStr))
|
|
{
|
|
Object value = cache.get(keyInCache);
|
|
return value;
|
|
}
|
|
}
|
|
return null;
|
|
}
|
|
|
|
private static final QName PROP_RESIDUAL = QName.createQName(NamespaceService.CONTENT_MODEL_1_0_URI, GUID.generate());
|
|
/**
|
|
* Check that simple node property modifications advance the node caches correctly
|
|
*/
|
|
@SuppressWarnings("unchecked")
|
|
public void testCaches_ImmutableNodeCaches() throws Exception
|
|
{
|
|
final NodeRef[] nodeRefs = new NodeRef[2];
|
|
final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
|
|
buildNodeHierarchy(workspaceRootNodeRef, nodeRefs);
|
|
final NodeRef nodeRef = nodeRefs[1];
|
|
|
|
// Get the current node cache key
|
|
Long nodeId = (Long) findCacheValue(nodesCache, nodeRef);
|
|
assertNotNull("Node not found in cache", nodeId);
|
|
Node nodeOne = (Node) findCacheValue(nodesCache, nodeId);
|
|
assertNotNull("Node not found in cache", nodeOne);
|
|
NodeVersionKey nodeKeyOne = nodeOne.getNodeVersionKey();
|
|
|
|
// Get the node cached values
|
|
Map<QName, Serializable> nodePropsOne = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyOne);
|
|
Set<QName> nodeAspectsOne = (Set<QName>) findCacheValue(aspectsCache, nodeKeyOne);
|
|
|
|
// Check the values
|
|
assertEquals("The node version is incorrect", Long.valueOf(1L), nodeKeyOne.getVersion());
|
|
assertNotNull("No cache entry for properties", nodePropsOne);
|
|
assertNotNull("No cache entry for aspects", nodeAspectsOne);
|
|
assertEquals("Property count incorrect", 1, nodePropsOne.size());
|
|
assertNotNull("Expected a cm:name property", nodePropsOne.get(ContentModel.PROP_NAME));
|
|
assertEquals("Aspect count incorrect", 1, nodeAspectsOne.size());
|
|
assertTrue("Expected a cm:auditable aspect", nodeAspectsOne.contains(ContentModel.ASPECT_AUDITABLE));
|
|
|
|
// Add a property
|
|
nodeService.setProperty(nodeRef, PROP_RESIDUAL, GUID.generate());
|
|
|
|
// Get the values for the previous version
|
|
Map<QName, Serializable> nodePropsOneCheck = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyOne);
|
|
Set<QName> nodeAspectsOneCheck = (Set<QName>) findCacheValue(aspectsCache, nodeKeyOne);
|
|
assertTrue("Previous cache entries must be left alone", nodePropsOneCheck == nodePropsOne);
|
|
assertTrue("Previous cache entries must be left alone", nodeAspectsOneCheck == nodeAspectsOne);
|
|
|
|
// Get the current node cache key
|
|
Node nodeTwo = (Node) findCacheValue(nodesCache, nodeId);
|
|
assertNotNull("Node not found in cache", nodeTwo);
|
|
NodeVersionKey nodeKeyTwo = nodeTwo.getNodeVersionKey();
|
|
|
|
// Get the node cached values
|
|
Map<QName, Serializable> nodePropsTwo = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyTwo);
|
|
Set<QName> nodeAspectsTwo = (Set<QName>) findCacheValue(aspectsCache, nodeKeyTwo);
|
|
|
|
// Check the values
|
|
assertEquals("The node version is incorrect", Long.valueOf(2L), nodeKeyTwo.getVersion());
|
|
assertNotNull("No cache entry for properties", nodePropsTwo);
|
|
assertNotNull("No cache entry for aspects", nodeAspectsTwo);
|
|
assertTrue("Properties must have moved on", nodePropsTwo != nodePropsOne);
|
|
assertEquals("Property count incorrect", 2, nodePropsTwo.size());
|
|
assertNotNull("Expected a cm:name property", nodePropsTwo.get(ContentModel.PROP_NAME));
|
|
assertNotNull("Expected a residual property", nodePropsTwo.get(PROP_RESIDUAL));
|
|
assertTrue("Aspects must be carried", nodeAspectsTwo == nodeAspectsOne);
|
|
|
|
// Remove a property
|
|
nodeService.removeProperty(nodeRef, PROP_RESIDUAL);
|
|
|
|
// Get the values for the previous version
|
|
Map<QName, Serializable> nodePropsTwoCheck = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyTwo);
|
|
Set<QName> nodeAspectsTwoCheck = (Set<QName>) findCacheValue(aspectsCache, nodeKeyTwo);
|
|
assertTrue("Previous cache entries must be left alone", nodePropsTwoCheck == nodePropsTwo);
|
|
assertTrue("Previous cache entries must be left alone", nodeAspectsTwoCheck == nodeAspectsTwo);
|
|
|
|
// Get the current node cache key
|
|
Node nodeThree = (Node) findCacheValue(nodesCache, nodeId);
|
|
assertNotNull("Node not found in cache", nodeThree);
|
|
NodeVersionKey nodeKeyThree = nodeThree.getNodeVersionKey();
|
|
|
|
// Get the node cached values
|
|
Map<QName, Serializable> nodePropsThree = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyThree);
|
|
Set<QName> nodeAspectsThree = (Set<QName>) findCacheValue(aspectsCache, nodeKeyThree);
|
|
|
|
// Check the values
|
|
assertEquals("The node version is incorrect", Long.valueOf(3L), nodeKeyThree.getVersion());
|
|
assertNotNull("No cache entry for properties", nodePropsThree);
|
|
assertNotNull("No cache entry for aspects", nodeAspectsThree);
|
|
assertTrue("Properties must have moved on", nodePropsThree != nodePropsTwo);
|
|
assertEquals("Property count incorrect", 1, nodePropsThree.size());
|
|
assertNotNull("Expected a cm:name property", nodePropsThree.get(ContentModel.PROP_NAME));
|
|
assertNull("Expected no residual property", nodePropsThree.get(PROP_RESIDUAL));
|
|
assertTrue("Aspects must be carried", nodeAspectsThree == nodeAspectsTwo);
|
|
|
|
// Add an aspect
|
|
nodeService.addAspect(nodeRef, ContentModel.ASPECT_TITLED, null);
|
|
|
|
// Get the values for the previous version
|
|
Map<QName, Serializable> nodePropsThreeCheck = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyThree);
|
|
Set<QName> nodeAspectsThreeCheck = (Set<QName>) findCacheValue(aspectsCache, nodeKeyThree);
|
|
assertTrue("Previous cache entries must be left alone", nodePropsThreeCheck == nodePropsThree);
|
|
assertTrue("Previous cache entries must be left alone", nodeAspectsThreeCheck == nodeAspectsThree);
|
|
|
|
// Get the current node cache key
|
|
Node nodeFour = (Node) findCacheValue(nodesCache, nodeId);
|
|
assertNotNull("Node not found in cache", nodeFour);
|
|
NodeVersionKey nodeKeyFour = nodeFour.getNodeVersionKey();
|
|
|
|
// Get the node cached values
|
|
Map<QName, Serializable> nodePropsFour = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyFour);
|
|
Set<QName> nodeAspectsFour = (Set<QName>) findCacheValue(aspectsCache, nodeKeyFour);
|
|
|
|
// Check the values
|
|
assertEquals("The node version is incorrect", Long.valueOf(4L), nodeKeyFour.getVersion());
|
|
assertNotNull("No cache entry for properties", nodePropsFour);
|
|
assertNotNull("No cache entry for aspects", nodeAspectsFour);
|
|
assertTrue("Properties must be carried", nodePropsFour == nodePropsThree);
|
|
assertTrue("Aspects must have moved on", nodeAspectsFour != nodeAspectsThree);
|
|
assertTrue("Expected cm:titled aspect", nodeAspectsFour.contains(ContentModel.ASPECT_TITLED));
|
|
|
|
// Remove an aspect
|
|
nodeService.removeAspect(nodeRef, ContentModel.ASPECT_TITLED);
|
|
|
|
// Get the values for the previous version
|
|
Map<QName, Serializable> nodePropsFourCheck = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyFour);
|
|
Set<QName> nodeAspectsFourCheck = (Set<QName>) findCacheValue(aspectsCache, nodeKeyFour);
|
|
assertTrue("Previous cache entries must be left alone", nodePropsFourCheck == nodePropsFour);
|
|
assertTrue("Previous cache entries must be left alone", nodeAspectsFourCheck == nodeAspectsFour);
|
|
|
|
// Get the current node cache key
|
|
Node nodeFive = (Node) findCacheValue(nodesCache, nodeId);
|
|
assertNotNull("Node not found in cache", nodeFive);
|
|
NodeVersionKey nodeKeyFive = nodeFive.getNodeVersionKey();
|
|
|
|
// Get the node cached values
|
|
Map<QName, Serializable> nodePropsFive = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyFive);
|
|
Set<QName> nodeAspectsFive = (Set<QName>) findCacheValue(aspectsCache, nodeKeyFive);
|
|
|
|
// Check the values
|
|
assertEquals("The node version is incorrect", Long.valueOf(5L), nodeKeyFive.getVersion());
|
|
assertNotNull("No cache entry for properties", nodePropsFive);
|
|
assertNotNull("No cache entry for aspects", nodeAspectsFive);
|
|
assertTrue("Properties must be carried", nodePropsFive == nodePropsFour);
|
|
assertTrue("Aspects must have moved on", nodeAspectsFive != nodeAspectsFour);
|
|
assertFalse("Expected no cm:titled aspect ", nodeAspectsFive.contains(ContentModel.ASPECT_TITLED));
|
|
|
|
        // Add an aspect, some properties and a secondary association
        RetryingTransactionCallback<Void> nodeSixWork = new RetryingTransactionCallback<Void>()
        {
            @Override
            public Void execute() throws Throwable
            {
                Map<QName, Serializable> props = new HashMap<QName, Serializable>();
                props.put(ContentModel.PROP_TITLE, "some title");
                nodeService.addAspect(nodeRef, ContentModel.ASPECT_TITLED, props);
                nodeService.setProperty(nodeRef, ContentModel.PROP_DESCRIPTION, "Some description");
                nodeService.addChild(
                        Collections.singletonList(workspaceRootNodeRef),
                        nodeRef,
                        ContentModel.ASSOC_CHILDREN,
                        QName.createQName(TEST_PREFIX, "secondary"));
                return null;
            }
        };
        txnService.getRetryingTransactionHelper().doInTransaction(nodeSixWork);

        // Get the values for the previous version
        Map<QName, Serializable> nodePropsFiveCheck = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyFive);
        Set<QName> nodeAspectsFiveCheck = (Set<QName>) findCacheValue(aspectsCache, nodeKeyFive);
        assertTrue("Previous cache entries must be left alone", nodePropsFiveCheck == nodePropsFive);
        assertTrue("Previous cache entries must be left alone", nodeAspectsFiveCheck == nodeAspectsFive);

        // Get the current node cache key
        Node nodeSix = (Node) findCacheValue(nodesCache, nodeId);
        assertNotNull("Node not found in cache", nodeSix);
        NodeVersionKey nodeKeySix = nodeSix.getNodeVersionKey();

        // Get the node cached values
        Map<QName, Serializable> nodePropsSix = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeySix);
        Set<QName> nodeAspectsSix = (Set<QName>) findCacheValue(aspectsCache, nodeKeySix);

        // Check the values
        assertEquals("The node version is incorrect", Long.valueOf(6L), nodeKeySix.getVersion());
        assertNotNull("No cache entry for properties", nodePropsSix);
        assertNotNull("No cache entry for aspects", nodeAspectsSix);
        assertTrue("Properties must have moved on", nodePropsSix != nodePropsFive);
        assertEquals("Property count incorrect", 3, nodePropsSix.size());
        assertNotNull("Expected a cm:name property", nodePropsSix.get(ContentModel.PROP_NAME));
        assertNotNull("Expected a cm:title property", nodePropsSix.get(ContentModel.PROP_TITLE));
        assertNotNull("Expected a cm:description property", nodePropsSix.get(ContentModel.PROP_DESCRIPTION));
        assertTrue("Aspects must have moved on", nodeAspectsSix != nodeAspectsFive);
        assertTrue("Expected cm:titled aspect", nodeAspectsSix.contains(ContentModel.ASPECT_TITLED));

        // Remove an aspect, some properties and a secondary association
        RetryingTransactionCallback<Void> nodeSevenWork = new RetryingTransactionCallback<Void>()
        {
            @Override
            public Void execute() throws Throwable
            {
                nodeService.removeAspect(nodeRef, ContentModel.ASPECT_TITLED);
                nodeService.removeChild(workspaceRootNodeRef, nodeRef);
                return null;
            }
        };
        txnService.getRetryingTransactionHelper().doInTransaction(nodeSevenWork);

        // Get the values for the previous version
        Map<QName, Serializable> nodePropsSixCheck = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeySix);
        Set<QName> nodeAspectsSixCheck = (Set<QName>) findCacheValue(aspectsCache, nodeKeySix);
        assertTrue("Previous cache entries must be left alone", nodePropsSixCheck == nodePropsSix);
        assertTrue("Previous cache entries must be left alone", nodeAspectsSixCheck == nodeAspectsSix);

        // Get the current node cache key
        Node nodeSeven = (Node) findCacheValue(nodesCache, nodeId);
        assertNotNull("Node not found in cache", nodeSeven);
        NodeVersionKey nodeKeySeven = nodeSeven.getNodeVersionKey();

        // Get the node cached values
        Map<QName, Serializable> nodePropsSeven = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeySeven);
        Set<QName> nodeAspectsSeven = (Set<QName>) findCacheValue(aspectsCache, nodeKeySeven);

        // Check the values
        assertEquals("The node version is incorrect", Long.valueOf(7L), nodeKeySeven.getVersion());
        assertNotNull("No cache entry for properties", nodePropsSeven);
        assertNotNull("No cache entry for aspects", nodeAspectsSeven);
        assertTrue("Properties must have moved on", nodePropsSeven != nodePropsSix);
        assertEquals("Property count incorrect", 1, nodePropsSeven.size());
        assertNotNull("Expected a cm:name property", nodePropsSeven.get(ContentModel.PROP_NAME));
        assertTrue("Aspects must have moved on", nodeAspectsSeven != nodeAspectsSix);
        assertFalse("Expected no cm:titled aspect", nodeAspectsSeven.contains(ContentModel.ASPECT_TITLED));

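        // Disabling the cm:auditable behaviour below lets the test set cm:modifier
        // directly; only the audit fields change, so the existing property and aspect
        // cache entries are expected to be carried forward rather than replaced.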
        // Modify cm:auditable
        RetryingTransactionCallback<Void> nodeEightWork = new RetryingTransactionCallback<Void>()
        {
            @Override
            public Void execute() throws Throwable
            {
                BehaviourFilter behaviourFilter = (BehaviourFilter) ctx.getBean("policyBehaviourFilter");
                // Disable behaviour for txn
                behaviourFilter.disableBehaviour(nodeRef, ContentModel.ASPECT_AUDITABLE);
                nodeService.setProperty(nodeRef, ContentModel.PROP_MODIFIER, "Fred");
                return null;
            }
        };
        txnService.getRetryingTransactionHelper().doInTransaction(nodeEightWork);

        // Get the values for the previous version
        Map<QName, Serializable> nodePropsSevenCheck = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeySeven);
        Set<QName> nodeAspectsSevenCheck = (Set<QName>) findCacheValue(aspectsCache, nodeKeySeven);
        assertTrue("Previous cache entries must be left alone", nodePropsSevenCheck == nodePropsSeven);
        assertTrue("Previous cache entries must be left alone", nodeAspectsSevenCheck == nodeAspectsSeven);

        // Get the current node cache key
        Node nodeEight = (Node) findCacheValue(nodesCache, nodeId);
        assertNotNull("Node not found in cache", nodeEight);
        NodeVersionKey nodeKeyEight = nodeEight.getNodeVersionKey();

        // Get the node cached values
        Map<QName, Serializable> nodePropsEight = (Map<QName, Serializable>) findCacheValue(propsCache, nodeKeyEight);
        Set<QName> nodeAspectsEight = (Set<QName>) findCacheValue(aspectsCache, nodeKeyEight);

        // Check the values
        assertEquals("The node version is incorrect", Long.valueOf(8L), nodeKeyEight.getVersion());
        assertNotNull("No cache entry for properties", nodePropsEight);
        assertNotNull("No cache entry for aspects", nodeAspectsEight);
        assertEquals("Expected change to cm:modifier", "Fred", nodeEight.getAuditableProperties().getAuditModifier());
        assertTrue("Properties must be carried", nodePropsEight == nodePropsSeven);
        assertTrue("Aspects must be carried", nodeAspectsEight == nodeAspectsSeven);
    }
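
    // The policy tests below bind Mockito mocks as behaviours and then use
    // verify(...) to check that each callback fired with the expected arguments;
    // see createClassPolicy and createAssocPolicy for the binding mechanics.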
    public void testCreateNodePolicies()
    {
        // Create and bind the mock behaviours...
        OnCreateNodePolicy onCreateNodePolicy = createClassPolicy(
                OnCreateNodePolicy.class,
                OnCreateNodePolicy.QNAME,
                ContentModel.TYPE_CONTENT);

        BeforeCreateNodePolicy beforeCreateNodePolicy = createClassPolicy(
                BeforeCreateNodePolicy.class,
                BeforeCreateNodePolicy.QNAME,
                ContentModel.TYPE_CONTENT);

        OnCreateChildAssociationPolicy onCreateChildAssociationPolicy = createAssocPolicy(
                OnCreateChildAssociationPolicy.class,
                OnCreateChildAssociationPolicy.QNAME,
                ContentModel.TYPE_STOREROOT);

        OnUpdatePropertiesPolicy onUpdatePropertiesPolicy = createClassPolicy(
                OnUpdatePropertiesPolicy.class,
                OnUpdatePropertiesPolicy.QNAME,
                ContentModel.TYPE_CONTENT);

        // Create a node - this should result in the behaviours firing.
        NodeRef newNodeRef = nodeService.createNode(
                this.rootNodeRef,
                ContentModel.ASSOC_CHILDREN,
                ContentModel.ASSOC_CHILDREN,
                ContentModel.TYPE_CONTENT,
                PropertyMap.EMPTY_MAP).getChildRef();

        Map<QName, Serializable> propsAfter = nodeService.getProperties(newNodeRef);
        ChildAssociationRef childAssocRef = nodeService.getPrimaryParent(newNodeRef);

        // Check the behaviours fired as expected...
        verify(beforeCreateNodePolicy).beforeCreateNode(
                rootNodeRef,
                ContentModel.ASSOC_CHILDREN,
                ContentModel.ASSOC_CHILDREN,
                ContentModel.TYPE_CONTENT);
        verify(onCreateNodePolicy).onCreateNode(childAssocRef);
        verify(onCreateChildAssociationPolicy).onCreateChildAssociation(childAssocRef, true);
        verify(onUpdatePropertiesPolicy).onUpdateProperties(newNodeRef, PropertyMap.EMPTY_MAP, propsAfter);
    }

    public void testSetNodeTypePolicies()
    {
        // Create a node (before behaviours are attached)
        NodeRef nodeRef = nodeService.createNode(
                this.rootNodeRef,
                ContentModel.ASSOC_CHILDREN,
                ContentModel.ASSOC_CHILDREN,
                ContentModel.TYPE_CONTENT,
                new HashMap<QName, Serializable>(0)).getChildRef();

        // Create and bind the mock behaviours...
        BeforeUpdateNodePolicy beforeUpdatePolicy = createClassPolicy(
                BeforeUpdateNodePolicy.class,
                BeforeUpdateNodePolicy.QNAME,
                ContentModel.TYPE_CONTENT);

        OnUpdateNodePolicy onUpdatePolicy = createClassPolicy(
                OnUpdateNodePolicy.class,
                OnUpdateNodePolicy.QNAME,
                ContentModel.TYPE_FOLDER);

        BeforeSetNodeTypePolicy beforeSetNodeTypePolicy = createClassPolicy(
                BeforeSetNodeTypePolicy.class,
                BeforeSetNodeTypePolicy.QNAME,
                ContentModel.TYPE_CONTENT);

        OnSetNodeTypePolicy onSetNodeTypePolicy = createClassPolicy(
                OnSetNodeTypePolicy.class,
                OnSetNodeTypePolicy.QNAME,
                ContentModel.TYPE_FOLDER);

        // Set the type of the new node - this should trigger the correct behaviours.
        nodeService.setType(nodeRef, ContentModel.TYPE_FOLDER);

        // Check the behaviours fired as expected...
        verify(beforeUpdatePolicy).beforeUpdateNode(nodeRef);
        verify(onUpdatePolicy).onUpdateNode(nodeRef);
        verify(beforeSetNodeTypePolicy).beforeSetNodeType(nodeRef, ContentModel.TYPE_CONTENT, ContentModel.TYPE_FOLDER);
        verify(onSetNodeTypePolicy).onSetNodeType(nodeRef, ContentModel.TYPE_CONTENT, ContentModel.TYPE_FOLDER);
    }
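
    // Each helper below creates a Mockito mock of the given policy interface and
    // binds it via the PolicyComponent; the JavaBehaviour dispatches to the mock
    // method named after the policy QName's local name.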
    private <T extends Policy> T createClassPolicy(Class<T> policyInterface, QName policyQName, QName triggerOnClass)
    {
        T policy = mock(policyInterface);
        policyComponent.bindClassBehaviour(
                policyQName,
                triggerOnClass,
                new JavaBehaviour(policy, policyQName.getLocalName()));
        return policy;
    }

    private <T extends Policy> T createAssocPolicy(Class<T> policyInterface, QName policyQName, QName triggerOnClass)
    {
        T policy = mock(policyInterface);
        policyComponent.bindAssociationBehaviour(
                policyQName,
                triggerOnClass,
                new JavaBehaviour(policy, policyQName.getLocalName()));
        return policy;
    }

    /**
     * Ensure that nodes cannot be linked to deleted nodes.
     * <p/>
     * Conditions that <i>might</i> cause this are:<br/>
     * <ul>
     *   <li>Node created within a parent node that is being deleted</li>
     *   <li>The node cache is temporarily incorrect when the association is made</li>
     * </ul>
     * <p/>
     * <a href="https://issues.alfresco.com/jira/browse/ALF-12358">Concurrency: Possible to create association references to deleted nodes</a>
     */
    public void testConcurrentLinkToDeletedNode() throws Throwable
    {
        QNameDAO qnameDAO = (QNameDAO) ctx.getBean("qnameDAO");
        Long deletedTypeQNameId = qnameDAO.getOrCreateQName(ContentModel.TYPE_DELETED).getFirst();
        // First find any broken links to start with
        final NodeEntity params = new NodeEntity();
        params.setId(0L);
        params.setTypeQNameId(deletedTypeQNameId);

        // Find all 'at risk' nodes before the test
        final List<Long> attachedToDeletedIdsBefore = getChildNodesWithDeletedParentNode(params, 0);
        logger.debug("Found child nodes with deleted parent node (before): " + attachedToDeletedIdsBefore);
        final List<Long> orphanedNodeIdsBefore = getChildNodesWithNoParentNode(params, 0);
        logger.debug("Found child nodes without parent (before): " + orphanedNodeIdsBefore);

        final NodeRef[] nodeRefs = new NodeRef[10];
        final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        buildNodeHierarchy(workspaceRootNodeRef, nodeRefs);

        // Fire off a bunch of threads that create random nodes within the hierarchy created above
        final RetryingTransactionCallback<NodeRef> createChildCallback = new RetryingTransactionCallback<NodeRef>()
        {
            @Override
            public NodeRef execute() throws Throwable
            {
                String randomName = getName() + "-" + GUID.generate();
                QName randomQName = QName.createQName(NamespaceService.CONTENT_MODEL_1_0_URI, randomName);
                Map<QName, Serializable> props = new HashMap<QName, Serializable>();
                props.put(ContentModel.PROP_NAME, randomName);
                // Choose a random parent node from the hierarchy
                int random = new Random().nextInt(10);
                return nodeService.createNode(
                        nodeRefs[random],
                        ContentModel.ASSOC_CONTAINS,
                        randomQName,
                        ContentModel.TYPE_CONTAINER,
                        props).getChildRef();
            }
        };
        final Runnable[] runnables = new Runnable[20];
        final List<NodeRef> nodesAtRisk = Collections.synchronizedList(new ArrayList<NodeRef>(100));

        final List<Thread> threads = new ArrayList<Thread>();
        for (int i = 0; i < runnables.length; i++)
        {
            runnables[i] = new Runnable()
            {
                @Override
                public synchronized void run()
                {
                    AuthenticationUtil.setRunAsUserSystem();
                    try
                    {
                        wait(1000L); // A short wait before we kick off (should be notified)
                        for (int i = 0; i < 200; i++)
                        {
                            NodeRef nodeRef = txnService.getRetryingTransactionHelper().doInTransaction(createChildCallback);
                            // Store the node for later checks
                            nodesAtRisk.add(nodeRef);
                            // Wait to give other threads a chance
                            wait(1L);
                        }
                    }
                    catch (Throwable e)
                    {
                        // This is expected i.e. we'll just keep doing it until failure
                        logger.debug("Got exception adding child node: ", e);
                    }
                }
            };
            Thread thread = new Thread(runnables[i]);
            threads.add(thread);
            thread.start();
        }
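
        // The transaction below wakes each runnable and then deletes the top of the
        // hierarchy while the workers are still creating children inside it,
        // maximising the chance of an association being made to a deleted node.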
        final RetryingTransactionCallback<NodeRef> deleteWithNestedCallback = new RetryingTransactionCallback<NodeRef>()
        {
            @Override
            public NodeRef execute() throws Throwable
            {
                // Notify the threads to kick off
                for (int i = 0; i < runnables.length; i++)
                {
                    // Notify the threads to stop waiting
                    synchronized (runnables[i])
                    {
                        runnables[i].notify();
                    }
                    // Short wait to give thread a chance to run
                    synchronized (this) { try { wait(10L); } catch (Throwable e) {} }
                }
                // Delete the parent node
                nodeService.deleteNode(nodeRefs[0]);
                return null;
            }
        };
        txnService.getRetryingTransactionHelper().doInTransaction(deleteWithNestedCallback);

        // Wait for the threads to finish
        for (Thread t : threads)
        {
            t.join();
        }

        logger.info("All threads should have finished");

        // Find all 'at risk' nodes after the test
        final List<Long> attachedToDeletedIdsAfter = getChildNodesWithDeletedParentNode(params, attachedToDeletedIdsBefore.size());
        logger.debug("Found child nodes with deleted parent node (after): " + attachedToDeletedIdsAfter);
        final List<Long> orphanedNodeIdsAfter = getChildNodesWithNoParentNode(params, orphanedNodeIdsBefore.size());
        logger.debug("Found child nodes without parent (after): " + orphanedNodeIdsAfter);
        // Now need to identify the problem nodes

        if (attachedToDeletedIdsAfter.isEmpty() && orphanedNodeIdsAfter.isEmpty())
        {
            // nothing more to test
            return;
        }

        // We are already in a failed state, but check if the orphan cleanup works
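
        // Touching each node with getPath() exercises the lazy repair logic: an
        // orphaned node should be re-homed under a lost_found container rather
        // than left dangling (ALF-12358 / ALF-13066).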
        // workaround recovery: force collection of any orphan nodes (ALF-12358 + ALF-13066)
        for (final NodeRef nodeRef : nodesAtRisk)
        {
            txnService.getRetryingTransactionHelper().doInTransaction(new RetryingTransactionCallback<Void>()
            {
                @Override
                public Void execute() throws Throwable
                {
                    if (nodeService.exists(nodeRef))
                    {
                        nodeService.getPath(nodeRef); // ignore return
                    }
                    return null;
                }
            });
        }

        // Find all 'at risk' nodes after the cleanup
        final List<Long> attachedToDeletedIdsCleaned = getChildNodesWithDeletedParentNode(params, attachedToDeletedIdsBefore.size());
        logger.debug("Found child nodes with deleted parent node (cleaned): " + attachedToDeletedIdsCleaned);
        final List<Long> orphanedNodeIdsCleaned = getChildNodesWithNoParentNode(params, orphanedNodeIdsBefore.size());
        logger.debug("Found child nodes without parent (cleaned): " + orphanedNodeIdsCleaned);

        // Check
        assertTrue(
                "Expected full cleanup of nodes referencing deleted nodes: " + attachedToDeletedIdsCleaned,
                attachedToDeletedIdsCleaned.isEmpty());
        assertTrue(
                "Expected full cleanup of nodes without parents: " + orphanedNodeIdsCleaned,
                orphanedNodeIdsCleaned.isEmpty());

        // check lost_found ...
        List<NodeRef> lostAndFoundNodeRefs = getLostAndFoundNodes();
        assertFalse(lostAndFoundNodeRefs.isEmpty());

        Set<Long> lostAndFoundNodeIds = new HashSet<Long>(lostAndFoundNodeRefs.size());
        for (NodeRef nodeRef : lostAndFoundNodeRefs)
        {
            lostAndFoundNodeIds.add((Long) nodeService.getProperty(nodeRef, ContentModel.PROP_NODE_DBID));
        }

        assertTrue("Nodes linked to deleted parent nodes not handled.", lostAndFoundNodeIds.containsAll(attachedToDeletedIdsAfter));
        assertTrue("Orphaned nodes not all handled.", lostAndFoundNodeIds.containsAll(orphanedNodeIdsAfter));

        // Now fail because we allowed the situation in the first place
        fail("We allowed orphaned nodes or nodes with deleted parents.");
    }

    /**
     * Pending repeatable test - force issue ALF-13066 (non-root node with no parent)
     */
    public void testForceNonRootNodeWithNoParentNode() throws Throwable
    {
        QNameDAO qnameDAO = (QNameDAO) ctx.getBean("qnameDAO");
        Long deletedTypeQNameId = qnameDAO.getOrCreateQName(ContentModel.TYPE_DELETED).getFirst();
        // First find any broken links to start with
        final NodeEntity params = new NodeEntity();
        params.setId(0L);
        params.setTypeQNameId(deletedTypeQNameId);

        List<Long> ids = getChildNodesWithNoParentNode(params, 0);
        logger.debug("Found child nodes without parent (before): " + ids);

        final int idsToSkip = ids.size();

        final NodeRef[] nodeRefs = new NodeRef[10];
        final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        buildNodeHierarchy(workspaceRootNodeRef, nodeRefs);

        int cnt = 5;
        List<NodeRef> childNodeRefs = new ArrayList<NodeRef>(cnt);

        final NodeDAO nodeDAO = (NodeDAO) ctx.getBean("nodeDAO");

        for (int i = 0; i < cnt; i++)
        {
            // create some pseudo-thumbnails
            String randomName = getName() + "-" + System.nanoTime();
            QName randomQName = QName.createQName(NamespaceService.CONTENT_MODEL_1_0_URI, randomName);
            Map<QName, Serializable> props = new HashMap<QName, Serializable>();
            props.put(ContentModel.PROP_NAME, randomName);

            // Choose a random parent node from the hierarchy
            int random = new Random().nextInt(10);
            NodeRef parentNodeRef = nodeRefs[random];

            NodeRef childNodeRef = nodeService.createNode(
                    parentNodeRef,
                    ContentModel.ASSOC_CONTAINS,
                    randomQName,
                    ContentModel.TYPE_THUMBNAIL,
                    props).getChildRef();

            childNodeRefs.add(childNodeRef);

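            // Going straight to the NodeDAO below bypasses the NodeService integrity
            // checks, which is what lets the test manufacture a non-root node with
            // no parent association (the broken state described in ALF-13066).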
            // forcefully remove the primary parent assoc
            final Long childNodeId = (Long) nodeService.getProperty(childNodeRef, ContentModel.PROP_NODE_DBID);
            txnService.getRetryingTransactionHelper().doInTransaction(new RetryingTransactionCallback<Void>()
            {
                @Override
                public Void execute() throws Throwable
                {
                    Pair<Long, ChildAssociationRef> assocPair = nodeDAO.getPrimaryParentAssoc(childNodeId);
                    nodeDAO.deleteChildAssoc(assocPair.getFirst());
                    return null;
                }
            });
        }

        // Now need to identify the problem nodes
        final List<Long> childNodeIds = getChildNodesWithNoParentNode(params, idsToSkip);
        assertFalse(childNodeIds.isEmpty());
        logger.debug("Found child nodes without parent (after): " + childNodeIds);

        // workaround recovery: force collection of any orphan nodes (ALF-12358 + ALF-13066)
        for (final NodeRef nodeRef : childNodeRefs)
        {
            txnService.getRetryingTransactionHelper().doInTransaction(new RetryingTransactionCallback<Void>()
            {
                @Override
                public Void execute() throws Throwable
                {
                    if (nodeService.exists(nodeRef))
                    {
                        nodeService.getPath(nodeRef); // ignore return
                    }
                    return null;
                }
            });
        }

        // check again ...
        ids = getChildNodesWithNoParentNode(params, idsToSkip);
        assertTrue("The following child nodes have no parent node: " + ids, ids.isEmpty());

        // check lost_found ...
        List<NodeRef> lostAndFoundNodeRefs = getLostAndFoundNodes();
        assertFalse(lostAndFoundNodeRefs.isEmpty());

        List<Long> lostAndFoundNodeIds = new ArrayList<Long>(lostAndFoundNodeRefs.size());
        for (NodeRef nodeRef : lostAndFoundNodeRefs)
        {
            lostAndFoundNodeIds.add((Long) nodeService.getProperty(nodeRef, ContentModel.PROP_NODE_DBID));
        }

        for (Long childNodeId : childNodeIds)
        {
            assertTrue("Not found: " + childNodeId, lostAndFoundNodeIds.contains(childNodeId));
        }
    }
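
    // The two helpers below run test-only canned queries (registered under
    // "alfresco.query.test") that find child nodes attached to deleted parents
    // and child nodes whose parent association is missing altogether.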
    private List<Long> getChildNodesWithDeletedParentNode(NodeEntity params, int idsToSkip)
    {
        return cannedQueryDAO.executeQuery(
                "alfresco.query.test",
                "select_NodeServiceTest_testConcurrentLinkToDeletedNode_GetChildNodesWithDeletedParentNodeCannedQuery",
                params,
                idsToSkip,
                Integer.MAX_VALUE);
    }

    private List<Long> getChildNodesWithNoParentNode(NodeEntity params, int idsToSkip)
    {
        return cannedQueryDAO.executeQuery(
                "alfresco.query.test",
                "select_NodeServiceTest_testForceNonRootNodeWithNoParentNode_GetChildNodesWithNoParentNodeCannedQuery",
                params,
                idsToSkip,
                Integer.MAX_VALUE);
    }
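
    // Returns the children of the first lost-and-found container
    // (ContentModel.TYPE_LOST_AND_FOUND) under the workspace root; repaired
    // orphans are expected to surface there.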
    private List<NodeRef> getLostAndFoundNodes()
    {
        Set<QName> childNodeTypeQNames = new HashSet<QName>(1);
        childNodeTypeQNames.add(ContentModel.TYPE_LOST_AND_FOUND);

        List<ChildAssociationRef> childAssocRefs = nodeService.getChildAssocs(
                nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE),
                childNodeTypeQNames);

        List<NodeRef> lostNodeRefs = null;

        if (childAssocRefs.size() > 0)
        {
            List<ChildAssociationRef> lostNodeChildAssocRefs = nodeService.getChildAssocs(childAssocRefs.get(0).getChildRef());
            lostNodeRefs = new ArrayList<NodeRef>(lostNodeChildAssocRefs.size());
            for (ChildAssociationRef lostNodeChildAssocRef : lostNodeChildAssocRefs)
            {
                lostNodeRefs.add(lostNodeChildAssocRef.getChildRef());
            }
        }
        else
        {
            lostNodeRefs = Collections.emptyList();
        }

        return lostNodeRefs;
    }

    /**
     * @see NodeHierarchyWalker
     */
    public void testNodeHierarchyWalker() throws Exception
    {
        final NodeRef workspaceRootNodeRef = nodeService.getRootNode(StoreRef.STORE_REF_WORKSPACE_SPACESSTORE);
        final NodeRef[] nodes = new NodeRef[6];
        buildNodeHierarchy(workspaceRootNodeRef, nodes);
        Pair<Long, NodeRef> parentNodePair = nodeDAO.getNodePair(nodes[0]);
        Pair<Long, ChildAssociationRef> parentAssocPair = nodeDAO.getPrimaryParentAssoc(parentNodePair.getFirst());
        // Hook up some associations
        nodeService.addAspect(nodes[1], ContentModel.ASPECT_COPIEDFROM, null);
        nodeService.createAssociation(nodes[1], nodes[0], ContentModel.ASSOC_ORIGINAL); // Peer n1-n0
        nodeService.addChild( // Secondary child n0-n2
                nodes[0],
                nodes[2],
                ContentModel.ASSOC_CONTAINS,
                QName.createQName(NamespaceService.ALFRESCO_URI, "testNodeHierarchyWalker"));

        // Walk the hierarchy
        NodeHierarchyWalker walker = new NodeHierarchyWalker(nodeDAO);
        walker.walkHierarchy(parentNodePair, parentAssocPair);
        List<VisitedNode> nodesLeafFirst = walker.getNodes(true);
        assertEquals("Unexpected number of nodes visited", 6, nodesLeafFirst.size());
        assertEquals("Incorrect order", nodesLeafFirst.get(0).nodeRef, nodes[5]);
        assertEquals("Incorrect order", nodesLeafFirst.get(5).nodeRef, nodes[0]);
        List<VisitedNode> nodesParentFirst = walker.getNodes(false);
        assertEquals("Unexpected number of nodes visited", 6, nodesParentFirst.size());
        assertEquals("Incorrect order", nodesParentFirst.get(0).nodeRef, nodes[0]);
        assertEquals("Incorrect order", nodesParentFirst.get(5).nodeRef, nodes[5]);

        // Check primary parent links
        assertEquals(workspaceRootNodeRef, nodesParentFirst.get(0).primaryParentAssocPair.getSecond().getParentRef());
        assertEquals(nodes[0], nodesParentFirst.get(1).primaryParentAssocPair.getSecond().getParentRef());
        assertEquals(nodes[4], nodesParentFirst.get(5).primaryParentAssocPair.getSecond().getParentRef());

        // Check secondary parent links
        assertEquals(0, nodesParentFirst.get(0).secondaryParentAssocs.size());
        assertEquals(nodes[0], nodesParentFirst.get(2).secondaryParentAssocs.get(0).getSecond().getParentRef());
        assertEquals(0, nodesParentFirst.get(1).secondaryParentAssocs.size());
        assertEquals(1, nodesParentFirst.get(2).secondaryParentAssocs.size());
        assertEquals(0, nodesParentFirst.get(3).secondaryParentAssocs.size());

        // Check secondary child links
        assertEquals(1, nodesParentFirst.get(0).secondaryChildAssocs.size());
        assertEquals(nodes[2], nodesParentFirst.get(0).secondaryChildAssocs.get(0).getSecond().getChildRef());
        assertEquals(0, nodesParentFirst.get(1).secondaryChildAssocs.size());

        // Check target assocs
        assertEquals(0, nodesParentFirst.get(0).targetAssocs.size());
        assertEquals(1, nodesParentFirst.get(1).targetAssocs.size());
        assertEquals(nodes[0], nodesParentFirst.get(1).targetAssocs.get(0).getSecond().getTargetRef());

        // Check source assocs
        assertEquals(1, nodesParentFirst.get(0).sourceAssocs.size());
        assertEquals(nodes[1], nodesParentFirst.get(0).sourceAssocs.get(0).getSecond().getSourceRef());
        assertEquals(0, nodesParentFirst.get(1).sourceAssocs.size());
    }
}