Merged V4.0-BUG-FIX to HEAD

36311: BDE-69: filter long tests if minimal.testing property is defined
   36314: Merged V4.0 to V4.0-BUG-FIX (RECORD ONLY)
      36247: ALF-11027: temporarily remove import of maven.xml, since it makes ant calls fail from enterpriseprojects
   36331: ALF-12447: Further changes required to fix lower case meta-inf folder name
   36333: Revert ALF-12447.
   36334: ALF-14115: Merged V3.4-BUG-FIX to V4.0-BUG-FIX
      36318: ALF-12447: Fix case on META-INF folder for SDK
      36332: ALF-12447: Further changes required to fix lower case meta-inf folder name
   36337: ALF-14115: Merged V3.4-BUG-FIX to V4.0-BUG-FIX
      36332: ALF-12447: Yet more meta-inf case changes needed.
   36342: ALF-14120: fix issue where only completed tasks were returned
   36343: ALF-13898: starting a workflow from IMAP now uses workflowDefs with the engine name included, falling back to appending $jbpm when not present, to preserve backwards compatibility.
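   A hedged sketch of that fallback: Alfresco workflow definition names conventionally carry an engine-id prefix (e.g. "jbpm$wf:adhoc"), so the exact string handling below is an assumption rather than the committed code.

      // Illustrative fallback for IMAP-triggered workflows (names and format assumed):
      String configured = "wf:adhoc";                    // legacy value without an engine name
      String definitionName = configured.indexOf('$') != -1
              ? configured                               // engine name included: use as-is
              : "jbpm$" + configured;                    // fall back to the jBPM engine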
   36345: Fix for ALF-12730 - Email Space Users fails if template is used
   36346: Fix for ALF-9466 - We can search contents sorted by categories in Advanced search in Share, but saved search will not be shown in UI.
   36364: Switch version to 4.0.3
   36375: Merged BRANCHES/DEV/CLOUDSYNCLOCAL2 to BRANCHES/DEV/V4.0-BUG-FIX:
      36366: Tweak implementation to ensure that, on authentication failure, the status is updated within a read/write transaction.
      36374: Provide more specific exceptions from the Remote Connector Service for client and server errors
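      A hedged sketch of consuming the new exceptions (the types and the executeRequest signature match the RemoteConnectorServiceImpl diff further down; the caller setup is illustrative):

         // 'service' and 'request' are assumed to be obtained elsewhere.
         try
         {
             RemoteConnectorResponse response = service.executeRequest(request);
             // 2xx: response carries status, headers and body
         }
         catch (AuthenticationException e)
         {
             // 401/403 are still reported as authentication failures
         }
         catch (RemoteConnectorClientException e)
         {
             // other 4xx: carries the status code, status text and any small response body
         }
         catch (RemoteConnectorServerException e)
         {
             // 5xx: carries the status code and status text
         }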
   36376: Fix ALF-14121 - Alfresco fails to start if using "replicating-content-services-context.xml"
   36393: Final part of ALF-13723 SOLR does not include the same query unit tests as lucene
   - CMIS typed query and ordering tests
   36432: ALF-14133: Merged V3.4-BUG-FIX (3.4.10) to V4.0-BUG-FIX (4.0.3)
      << 4.0.x specific change: Changed transformer.complex.OOXML.Image into transformer.complex.Any.Image >>
      << allowing any transformer to be selected for the conversion to JPEG >>
      36427: ALF-14131 Complex transformers fail if a lower level transformer fails even though there is another transformer that could do the transformation
         - Added a base spring bean for all complex transformers
      36362: ALF-14131 Complex transformers fail if a lower level transformer fails even though there is another transformer that could do the transformation
   36434: Test fix for ALF-13723 SOLR does not include the same query unit tests as lucene
   - CMIS test data change broke AFTS ID ordering
   36503: Removed thousands of compiler warnings (CMIS query test code)
   36518: Fix for ALF-13778 - Links on Share Repository search page show incorrect link name; do not work when root-node is defined.
   The fix means that Share search now correctly handles an overridden Repository root node setting. Original work by Vasily Olhin.
   36520: BDE-69: filter all repo tests if minimal.testing property is defined
   36534: ALF-14116: Latest Surf libs (r1075) - ensure that i18n extensions can process browser sent short locales
   36563: Merged V3.4-BUG-FIX to V4.0-BUG-FIX
      36336: ALF-12447: Yet more meta-inf case changes needed.
      36347: Fix for ALF-13920 - Error occurred when try to edit/delete category
      36352: Fix for ALF-13123 - Invalid JSON format from Get Node Tags Webscript - strings not double-quoted. Also fixed POST webscript with same issue.
      36399: ALL LANG: translation updates based on EN r36392
      36421: Fix for Mac Lion versioning issue. ALF-12792 (Part 1 of 2)
      Enable the InfoPassthru and Level2Oplocks server capability flags; InfoPassthru is the flag that fixes the Mac Lion versioning error.
      Added support for filesystems that do not implement the NTFS streams interface in the CIFS transact rename processing, for the Alfresco repo filesystem.
      36422: Fix for Mac Lion versioning issue. ALF-12792 (Part 2 of 2)
      Enable the InfoPassthru and Level2Oplocks server capability flags; InfoPassthru is the flag that fixes the Mac Lion versioning error.
      36423: Add support for file size tracking in the file state. ALF-13616 (Part 1 of 2)
      36424: Fix for Mac MS Word file save issue. ALF-13616 (Part 2 of 2)
      Added live file size tracking to file writing/folder searches so the correct file size is returned before the file is closed.
      36444: Merged DEV to V3.4-BUG-FIX
         36419: ALF-12666 Search against simple-search-additional-attributes doesn't work properly
            SearchContext.buildQuery(int) method was changed.
      36446: Fix for ALF-13404 - Performance: 'Content I'm Editing' dashlet is slow to render when there is lots of data/sites
       - Effectively removed all PATH based queries using the pattern /companyhome/sites/*/container//* as they are a non-optimized case
        - Replaced the "all sites" doclist query using the above pattern with /companyhome/sites//* plus post-query resultset processing based on documentLibrary container matching regex (query shapes sketched after this list)
       - Optimized favorite document query to remove need for a PATH
       - Optimized Content I'm Editing discussion PATH query to use /*/* instead of /*//*
       - Fixed issue where Content I'm Editing discussion results would not always show the root topics that a user has edited
        - Added some additional doclist.get.js query ScriptLogger debugging output
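        For reference, a sketch of the query shapes involved (the PATH expressions are copied from the notes above; writing them as Lucene query strings is illustrative):

           // Non-optimized pattern removed by this change:
           String allSitesOld = "+PATH:\"/companyhome/sites/*/container//*\"";
           // Replacement: broader PATH plus post-query filtering on documentLibrary containers:
           String allSitesNew = "+PATH:\"/companyhome/sites//*\"";
           // Discussion query narrowed from arbitrary depth to exactly two levels:
           String discussionOld = "/*//*", discussionNew = "/*/*";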
      36449: ALF-13404 - Fix for issue where favorites for all sites would be shown in each site document library in the My Favorites filter.
      36475: ALF-14131 Complex transformers fail if a lower level transformer fails even though there is another transformer that could do the transformation
         - Change base spring bean on example config file
      36480: 36453: ALF-3881: LDAP sync deletion behaviour not flexible enough
         - synchronization.allowDeletions parameter introduced (configuration sketched below)
         - default value is true (existing behaviour)
         - when false, no missing users or groups are deleted from the repository
         - instead they are cleared of their zones and missing groups are cleared of all their members
         - colliding users and groups from different zones are also 'moved' rather than recreated
         - unit test added
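      The corresponding setting, as added to the synchronization property file in the diff below; set it to false for the non-destructive behaviour described above:

         # Synchronization with deletions (true preserves the existing behaviour)
         synchronization.allowDeletions=true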
      36491: Added CIFS transact2 NT passthru levels for set end of file/set allocation size. ALF-13616.
      Also updated FileInfoLevel with the latest list of NT passthru information levels.
      36497: Fixed ALF-14163: JavaScript Behaviour broken: Node properties cannot be cast to java.io.Serializable
       - Fallout from ALF-12855
       - Made class Serializable (like HashMap would have been)
       - Fixed line endings, too
      36531: ALF-13769: Merged BELARUS/V3.4-BUG-FIX-2012_04_05 to V3.4-BUG-FIX (3.4.10)
         35150: ALF-2645: 3.2+ LDAP sync debug information is too scarce
            - Improved LDAP logging.
      36532: ALF-13769: Merged BRANCHES/DEV/BELARUS/V3.4-BUG-FIX-2012_01_26 to V3.4-BUG-FIX (3.4.10)
         36461: ALF-237: WCM: File conflicts cause file order not to be consistent
            - It is reasonable to set the checkbox values using indexes from the list, since these do not change. So when the window is submitted,
              the getSelectedNodes method is invoked and takes the selected nodes from the "paths" list by checkbox value.
      36535: Merged DEV to V3.4-BUG-FIX
         36479: ALF-8918 : Cannot "edit offline" a web quick start publication
            A check in TaggableAspect.onUpdatePropertiesOnCommit() was extended to skip the update if no tags were changed.
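            A minimal sketch of that guard (Map, QName and EqualsHelper are standard java.util and org.alfresco types; the method shape and the tag property key are placeholders):

               // Returns true only if the tag property actually changed; the caller
               // (onUpdatePropertiesOnCommit) skips the update otherwise.
               static boolean tagsChanged(Map<QName, Serializable> before,
                                          Map<QName, Serializable> after, QName tagProp)
               {
                   return !EqualsHelper.nullSafeEquals(before.get(tagProp), after.get(tagProp));
               }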
      36555: Merged V3.4 to V3.4-BUG-FIX
         36294: ALF-14039: Merged HEAD to V3.4
            31732: ALF-10934: Prevent potential start/stop ping-pong of subsystems across a cluster
               - When a cluster boots up or receives a reinit message it shouldn't be sending out any start messages
   36566: Merged V3.4-BUG-FIX to V4.0-BUG-FIX (RECORD ONLY)
      36172: Merged BRANCHES/DEV/V4.0-BUG-FIX to BRANCHES/DEV/V3.4-BUG-FIX:
         36169: ALF-8755: After renaming content / space by Contributor via WebDAV new items are created
   36572: Merged V4.0 to V4.0-BUG-FIX
      36388: ALF-14025: Updated Surf libs (1071). Fixes to checksum-disabled dependency handling
      36392: ALF-14129 Failed to do upgrade from 3.4.8 to 4.0.2
         << Committed change for Frederik Heremans >>
         - Moved actual activiti-tables creation to before the upgrade
      36409: Fix for ALF-14124 Solr is not working - Errors occur during the startup
      36466: Fix for ALF-12770 - Infinite loop popup alert in TinyMCE after XSS injection in Alfresco Explorer online edit.
      36501: Merged DEV to V4.0
         36496: ALF-14063 : CLONE - Internet Explorer hangs when using the object picker with a larger number of documents
            YUI 2.9.0 library was modified to use chunked unloading of listeners via a series of setTimeout() functions in event.js for IE 6,7,8.
      36502: ALF-14105: Share Advanced search issue with the form values
      - Fix by David We
      36538: ALF-13986: Updated web.xml and index.jsp redirect to ensure that SSO works with proper surf site-configuration customization
       36539: Fix for ALF-14167 Filtering by Tags/Categories doesn't find any content in Repository/DocumentLibrary
       - fix the default namespace mapping back to "" -> "" and fix the specific SOLR tests that require otherwise.
      36541: ALF-14082: Input stream leaks in thumbnail rendering webscripts
       36560: Correctly size the Content-Length header after the HTML stripping process (ALF-9365)
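       In sketch form (all names assumed): the header must be computed from the stripped output rather than the original bytes.

          byte[] stripped = stripHtml(rawBytes);              // hypothetical stripping step
          response.setHeader("Content-Length",
                  Integer.toString(stripped.length));         // was sized from rawBytes before the fix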
   36574: Merged V4.0 to V4.0-BUG-FIX (RECORD ONLY)
      36316: Merged V4.0-BUG-FIX to V4.0 (4.0.2)
      36391: Merged V4.0-BUG-FIX to V4.0
         36376: Fix ALF-14121 - Alfresco fails to start if using "replicating-content-services-context.xml"


git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@36576 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
Dave Ward
2012-05-18 17:00:53 +00:00
parent e585e44874
commit d437d5105d
39 changed files with 1855 additions and 1001 deletions


@@ -299,6 +299,17 @@
</property>
</bean>
<!-- Abstract bean definition defining base definition for all complex transformers -->
<bean id="baseComplexContentTransformer"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
abstract="true"
init-method="register"
parent="baseContentTransformer">
<property name="contentService">
<ref bean="contentService" />
</property>
</bean>
<!-- These transformers are not used alone but only as part of a failover sequence -->
<!-- For this reason they do not extend the baseContentTransformer bean and so will not be registered. -->
<bean id="failover.transformer.PdfRenderer.PdfToImage"
@@ -383,7 +394,7 @@
<bean id="transformer.complex.PDF.Image"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer" >
parent="baseComplexContentTransformer" >
<property name="explicitTransformations">
<list>
<bean class="org.alfresco.repo.content.transform.ExplictTransformationDetails" >
@@ -477,12 +488,14 @@
</property>
</bean>
<bean id="transformer.complex.OOXML.Image"
<!-- This was called transformer.complex.OOXML.Image, but now the first stage
is any transformer to allow fail over when there is no embedded thumbnail. -->
<bean id="transformer.complex.Any.Image"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer">
parent="baseComplexContentTransformer">
<property name="transformers">
<list>
<ref bean="transformer.OOXMLThumbnail" />
<null />
<ref bean="transformer.ImageMagick" />
</list>
</property>
@@ -495,7 +508,7 @@
<bean id="transformer.complex.OpenOffice.Image"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer" >
parent="baseComplexContentTransformer" >
<property name="transformers">
<list>
<ref bean="transformer.OpenOffice" />
@@ -535,7 +548,7 @@
<bean id="transformer.complex.Text.Image"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer" >
parent="baseComplexContentTransformer" >
<property name="transformers">
<list>
<ref bean="transformer.PdfBox.TextToPdf" />
@@ -565,7 +578,7 @@
class="org.alfresco.repo.content.transform.PoiContentTransformer"
parent="baseContentTransformer" />
<!-- This one handles the newer ooxml office formats, such as .xlsx and .docx -->
<!-- This one handles the ooxml office formats, such as .xlsx and .docx -->
<bean id="transformer.OOXML"
class="org.alfresco.repo.content.transform.PoiOOXMLContentTransformer"
parent="baseContentTransformer" />
@@ -692,7 +705,7 @@
<bean id="transformer.complex.OpenOffice.PdfBox"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer" >
parent="baseComplexContentTransformer" >
<property name="transformers">
<list>
<ref bean="transformer.OpenOffice" />


@@ -8,7 +8,7 @@
<!-- -->
<bean id="fileContentStore" class="org.alfresco.repo.tenant.TenantRoutingFileContentStore" parent="baseTenantRoutingContentStore">
<property name="defaultRootDir" value="${dir.contentstore}" />
<property name="rootLocation" value="${dir.contentstore}" />
</bean>
</beans>


@@ -74,6 +74,9 @@
<property name="workerThreads">
<value>${synchronization.workerThreads}</value>
</property>
<property name="allowDeletions">
<value>${synchronization.allowDeletions}</value>
</property>
</bean>


@@ -27,4 +27,7 @@ synchronization.autoCreatePeopleOnLogin=true
synchronization.loggingInterval=100
# The number of threads to use when doing a batch (scheduled or startup) sync
synchronization.workerThreads=2
synchronization.workerThreads=2
# Synchronization with deletions
synchronization.allowDeletions=true


@@ -28,7 +28,7 @@
<bean id="transformer.complex.OpenOffice.Pdf2swf"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer" >
parent="baseComplexContentTransformer" >
<property name="transformers">
<list>
<ref bean="transformer.OpenOffice" />
@@ -84,7 +84,7 @@
<bean id="transformer.complex.iWorks.Pdf2swf"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer" >
parent="baseComplexContentTransformer" >
<property name="transformers">
<list>
<ref bean="transformer.iWorksQuicklooks" />
@@ -100,7 +100,7 @@
<bean id="transformer.complex.Text.Pdf2swf"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer" >
parent="baseComplexContentTransformer" >
<property name="transformers">
<list>
<ref bean="transformer.PdfBox.TextToPdf" />
@@ -123,7 +123,7 @@
<!-- This transformer allows for the webpreviewing of zip archive files. -->
<bean id="transformer.complex.Archive.Pdf2swf"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer" >
parent="baseComplexContentTransformer" >
<property name="transformers">
<list>
<ref bean="transformer.Archive" />
@@ -147,7 +147,7 @@
<!-- This transformer allows for the webpreviewing of outlook msg files. -->
<bean id="transformer.complex.Msg2swf"
class="org.alfresco.repo.content.transform.ComplexContentTransformer"
parent="baseContentTransformer" >
parent="baseComplexContentTransformer" >
<property name="transformers">
<list>
<ref bean="transformer.OutlookMsg" />


@@ -609,7 +609,7 @@ public class EnterpriseCifsAuthenticator extends CifsAuthenticatorBase implement
{
return Capability.Unicode + Capability.RemoteAPIs + Capability.NTSMBs + Capability.NTFind +
Capability.NTStatus + Capability.LargeFiles + Capability.LargeRead + Capability.LargeWrite +
Capability.ExtendedSecurity;
Capability.ExtendedSecurity + Capability.InfoPassthru + Capability.Level2Oplocks;
}
/**


@@ -104,8 +104,11 @@ public class CacheLookupSearchContext extends DotDotContentSearchContext {
if ( fstate.hasModifyDateTime())
info.setModifyDateTime( fstate.getModifyDateTime());
// File allocation size
// File used/allocation size
if ( fstate.hasFileSize())
info.setFileSize( fstate.getFileSize());
if ( fstate.hasAllocationSize() && fstate.getAllocationSize() > info.getSize())
info.setAllocationSize( fstate.getAllocationSize());


@@ -1353,8 +1353,12 @@ public class ContentDiskDriver extends AlfrescoTxDiskDriver implements DiskInter
}
}
}
else
searchCtx = new ContentSearchContext(cifsHelper, results, searchFileSpec, pseudoList, paths[0]);
else {
if ( ctx.hasStateCache())
searchCtx = new CacheLookupSearchContext(cifsHelper, results, searchFileSpec, pseudoList, paths[0], ctx.getStateCache());
else
searchCtx = new ContentSearchContext(cifsHelper, results, searchFileSpec, pseudoList, paths[0]);
}
// Debug


@@ -661,10 +661,12 @@ public class ContentNetworkFile extends NodeRefNetworkFile
setFileSize(channel.size());
// Update the modification date/time
// Update the modification date/time and live file size
if ( getFileState() != null)
if ( getFileState() != null) {
getFileState().updateModifyDateTime();
getFileState().setFileSize( getFileSize());
}
// DEBUG


@@ -738,6 +738,15 @@ public class ContentServiceImpl implements ContentService, ApplicationContextAwa
* @see org.alfresco.service.cmr.repository.ContentService#getTransformer(String, java.lang.String, long, java.lang.String, org.alfresco.service.cmr.repository.TransformationOptions)
*/
public ContentTransformer getTransformer(String sourceUrl, String sourceMimetype, long sourceSize, String targetMimetype, TransformationOptions options)
{
List<ContentTransformer> transformers = getTransformers(sourceUrl, sourceMimetype, sourceSize, targetMimetype, options);
return (transformers == null) ? null : transformers.get(0);
}
/**
* @see org.alfresco.service.cmr.repository.ContentService#getTransformers(String, java.lang.String, long, java.lang.String, org.alfresco.service.cmr.repository.TransformationOptions)
*/
public List<ContentTransformer> getTransformers(String sourceUrl, String sourceMimetype, long sourceSize, String targetMimetype, TransformationOptions options)
{
try
{
@@ -745,7 +754,7 @@ public class ContentServiceImpl implements ContentService, ApplicationContextAwa
transformerDebug.pushAvailable(sourceUrl, sourceMimetype, targetMimetype, options);
List<ContentTransformer> transformers = getActiveTransformers(sourceMimetype, sourceSize, targetMimetype, options);
transformerDebug.availableTransformers(transformers, sourceSize, "ContentService.getTransformer(...)");
return (transformers.isEmpty()) ? null : transformers.get(0);
return transformers.isEmpty() ? null : transformers;
}
finally
{


@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2010 Alfresco Software Limited.
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
@@ -213,6 +213,15 @@ public class ReplicatingContentStore extends AbstractContentStore
{
return primaryStore.isContentUrlSupported(contentUrl);
}
/**
* @return Return the primary store root location
*/
@Override
public String getRootLocation()
{
return primaryStore.getRootLocation();
}
/**
* Forwards the call directly to the first store in the list of stores.


@@ -62,7 +62,7 @@ public abstract class AbstractContentTransformerLimits extends ContentTransforme
* Indicates if 'page' limits are supported.
* @return false by default.
*/
protected boolean isPageLimitSupported()
protected boolean isPageLimitSupported(String sourceMimetype, String targetMimetype, TransformationOptions options)
{
return pageLimitsSupported;
}
@@ -98,6 +98,10 @@ public abstract class AbstractContentTransformerLimits extends ContentTransforme
@Override
public boolean isTransformable(String sourceMimetype, long sourceSize, String targetMimetype, TransformationOptions options)
{
// To make TransformerDebug output clearer, check the mimetypes and then the sizes.
// If not done, 'unavailable' transformers due to size might be reported even
// though they cannot transform the source to the target mimetype.
return
isTransformableMimetype(sourceMimetype, targetMimetype, options) &&
isTransformableSize(sourceMimetype, sourceSize, targetMimetype, options);
@@ -152,7 +156,7 @@ public abstract class AbstractContentTransformerLimits extends ContentTransforme
// of icons. Note the readLimitKBytes value is not checked as the combined limits
// only have the max or limit KBytes value set (the smaller value is returned).
TransformationOptionLimits limits = getLimits(sourceMimetype, targetMimetype, options);
if (!isPageLimitSupported() || limits.getPageLimit() <= 0)
if (!isPageLimitSupported(sourceMimetype, targetMimetype, options) || limits.getPageLimit() <= 0)
{
maxSourceSizeKBytes = limits.getMaxSourceSizeKBytes();
}


@@ -22,7 +22,10 @@ import java.beans.PropertyDescriptor;
import java.io.File;
import java.io.Serializable;
import java.lang.reflect.InvocationTargetException;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
@@ -32,6 +35,7 @@ import javax.faces.el.MethodNotFoundException;
import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.repo.content.filestore.FileContentWriter;
import org.alfresco.service.cmr.repository.ContentReader;
import org.alfresco.service.cmr.repository.ContentService;
import org.alfresco.service.cmr.repository.ContentWriter;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.TransformationOptionLimits;
@@ -57,20 +61,37 @@ public class ComplexContentTransformer extends AbstractContentTransformer2 imple
*/
private static Log logger = LogFactory.getLog(ComplexContentTransformer.class);
/**
* Complex transformers contain lower level transformers. In order to find dynamic
* (defined as null) child transformers to use, they recursively check available
* transformers. It makes no sense to have a transformer that is its own child.
*/
static final ThreadLocal<Deque<ContentTransformer>> parentTransformers = new ThreadLocal<Deque<ContentTransformer>>() {
@Override
protected Deque<ContentTransformer> initialValue() {
return new ArrayDeque<ContentTransformer>();
}
};
private List<ContentTransformer> transformers;
private List<String> intermediateMimetypes;
private Map<String,Serializable> transformationOptionOverrides;
private ContentService contentService;
public ComplexContentTransformer()
{
}
/**
* The list of transformers to use.
* The list of transformers to use. If any element is null
* all possible transformers will be considered. If any element
* is null, the contentService property must be set.
* <p>
* If a single transformer is supplied, then it will still be used.
*
* @param transformers list of <b>at least one</b> transformer
*
* @see #setContentService(ContentService)
*/
public void setTransformers(List<ContentTransformer> transformers)
{
@@ -107,6 +128,16 @@ public class ComplexContentTransformer extends AbstractContentTransformer2 imple
this.transformationOptionOverrides = transformationOptionOverrides;
}
/**
* Sets the ContentService. Only required if {@code null} transformers
* are provided to {@link #setTransformers(List)}.
* @param contentService
*/
public void setContentService(ContentService contentService)
{
this.contentService = contentService;
}
/**
* Ensures that required properties have been set
*/
@@ -125,25 +156,35 @@ public class ComplexContentTransformer extends AbstractContentTransformer2 imple
{
throw new AlfrescoRuntimeException("'mimetypeService' is a required property");
}
for (ContentTransformer transformer: transformers)
{
if (transformer == null)
{
if (contentService == null)
{
throw new AlfrescoRuntimeException("'contentService' is a required property if " +
"there are any null (dynamic) transformers");
}
break;
}
}
}
/**
* Overrides this method to avoid calling
* {@link #isTransformableMimetype(String, String, TransformationOptions)}
* twice on each transformer in the list, as
* {@link #isTransformableSize(String, long, String, TransformationOptions)}
* in this class must check the mimetype too.
*/
@Override
public boolean isTransformable(String sourceMimetype, long sourceSize, String targetMimetype,
TransformationOptions options)
{
// Don't allow transformer to be its own child.
if (parentTransformers.get().contains(this))
{
return false;
}
overrideTransformationOptions(options);
// To make TransformerDebug output clearer, check the mimetypes and then the sizes.
// If not done, 'unavailable' transformers due to size might be reported even
// though they cannot transform the source to the target mimetype.
// Can use super isTransformableSize as it indirectly calls getLimits in this class
// which combines the limits from the first transformer. Other transformers in the chain
// are not checked as sizes are unknown.
return
isTransformableMimetype(sourceMimetype, targetMimetype, options) &&
isTransformableSize(sourceMimetype, sourceSize, targetMimetype, options);
@@ -200,73 +241,42 @@ public class ComplexContentTransformer extends AbstractContentTransformer2 imple
@Override
public boolean isTransformableMimetype(String sourceMimetype, String targetMimetype, TransformationOptions options)
{
return isTransformableMimetypeAndSize(sourceMimetype, -1, targetMimetype, options);
}
@Override
public boolean isTransformableSize(String sourceMimetype, long sourceSize, String targetMimetype, TransformationOptions options)
{
return (sourceSize < 0) ||
super.isTransformableSize(sourceMimetype, sourceSize, targetMimetype, options) &&
isTransformableMimetypeAndSize(sourceMimetype, sourceSize, targetMimetype, options);
}
private boolean isTransformableMimetypeAndSize(String sourceMimetype, long sourceSize,
String targetMimetype, TransformationOptions options)
{
boolean result = true;
String currentSourceMimetype = sourceMimetype;
Iterator<ContentTransformer> transformerIterator = transformers.iterator();
Iterator<String> intermediateMimetypeIterator = intermediateMimetypes.iterator();
while (transformerIterator.hasNext())
{
ContentTransformer transformer = transformerIterator.next();
// determine the target mimetype. This is the final target if we are on the last transformation
String currentTargetMimetype = null;
if (!transformerIterator.hasNext())
// determine the target mimetype. This is the final target if we are on the last transformation
String currentTargetMimetype = transformerIterator.hasNext() ? intermediateMimetypeIterator.next() : targetMimetype;
if (transformer == null)
{
currentTargetMimetype = targetMimetype;
}
else
{
// use an intermediate transformation mimetype
currentTargetMimetype = intermediateMimetypeIterator.next();
}
if (sourceSize < 0)
{
// check we can transform the current stage's mimetypes
if (transformer.isTransformableMimetype(currentSourceMimetype, currentTargetMimetype, options) == false)
{
result = false;
break;
}
}
else
{
// check we can transform the current stage's sizes
try
{
transformerDebug.pushIsTransformableSize(this);
// (using -1 if not the first stage as we can't know the size)
if (transformer.isTransformableSize(currentSourceMimetype, sourceSize, currentTargetMimetype, options) == false)
parentTransformers.get().push(this);
@SuppressWarnings("deprecation")
List<ContentTransformer> firstTansformers = contentService.getActiveTransformers(
currentSourceMimetype, -1, currentTargetMimetype, options);
if (firstTansformers.isEmpty())
{
result = false;
break;
}
// As the size is unknown for the next stages, stop.
// In future we might guess sizes such as excel to pdf
// is about 110% of the original size, in which case
// we would continue.
break;
// sourceSize += sourceSize * 10 / 100;
}
finally
{
transformerDebug.popIsTransformableSize();
parentTransformers.get().pop();
}
}
else
{
if (transformer.isTransformableMimetype(currentSourceMimetype, currentTargetMimetype, options) == false)
{
result = false;
break;
}
}
@@ -279,30 +289,111 @@ public class ComplexContentTransformer extends AbstractContentTransformer2 imple
/**
* Indicates if 'page' limits are supported by the first transformer in the chain.
* If the first transformer is dynamic, all possible first transformers must support it.
* @return true if the first transformer supports them.
*/
protected boolean isPageLimitSupported()
@Override
protected boolean isPageLimitSupported(String sourceMimetype, String targetMimetype,
TransformationOptions options)
{
ContentTransformer firstTransformer = transformers.iterator().next();
return (firstTransformer instanceof AbstractContentTransformerLimits)
? ((AbstractContentTransformerLimits)firstTransformer).isPageLimitSupported()
boolean pageLimitSupported;
ContentTransformer firstTransformer = transformers.get(0);
String firstTargetMimetype = intermediateMimetypes.get(0);
if (firstTransformer == null)
{
try
{
parentTransformers.get().push(this);
@SuppressWarnings("deprecation")
List<ContentTransformer> firstTansformers = contentService.getActiveTransformers(
sourceMimetype, -1, firstTargetMimetype, options);
pageLimitSupported = !firstTansformers.isEmpty();
if (pageLimitSupported)
{
for (ContentTransformer transformer: firstTansformers)
{
if (!isPageLimitSupported(transformer, sourceMimetype, targetMimetype, options))
{
pageLimitSupported = false;
break;
}
}
}
}
finally
{
parentTransformers.get().pop();
}
}
else
{
pageLimitSupported = isPageLimitSupported(firstTransformer, sourceMimetype, targetMimetype, options);
}
return pageLimitSupported;
}
private boolean isPageLimitSupported(ContentTransformer transformer, String sourceMimetype,
String targetMimetype, TransformationOptions options)
{
return (transformer instanceof AbstractContentTransformerLimits)
? ((AbstractContentTransformerLimits)transformer).isPageLimitSupported(sourceMimetype, targetMimetype, options)
: false;
}
/**
* Returns the limits from this transformer combined with those of the first transformer in the chain.
* If the first transformer is dynamic, the lowest common denominator between all possible first transformers
* is combined.
*/
protected TransformationOptionLimits getLimits(String sourceMimetype, String targetMimetype,
TransformationOptions options)
{
TransformationOptionLimits firstTransformerLimits = null;
TransformationOptionLimits limits = super.getLimits(sourceMimetype, targetMimetype, options);
ContentTransformer firstTransformer = transformers.get(0);
if (firstTransformer instanceof AbstractContentTransformerLimits)
String firstTargetMimetype = intermediateMimetypes.get(0);
if (firstTransformer == null)
{
String firstTargetMimetype = intermediateMimetypes.get(0);
limits = limits.combine(((AbstractContentTransformerLimits) firstTransformer).
getLimits(sourceMimetype, firstTargetMimetype, options));
try
{
parentTransformers.get().push(this);
@SuppressWarnings("deprecation")
List<ContentTransformer> firstTansformers = contentService.getActiveTransformers(
sourceMimetype, -1, firstTargetMimetype, options);
if (!firstTansformers.isEmpty())
{
for (ContentTransformer transformer: firstTansformers)
{
if (transformer instanceof AbstractContentTransformerLimits)
{
TransformationOptionLimits transformerLimits = ((AbstractContentTransformerLimits)transformer).
getLimits(sourceMimetype, firstTargetMimetype, options);
firstTransformerLimits = (firstTransformerLimits == null)
? transformerLimits
: firstTransformerLimits.combineUpper(transformerLimits);
}
}
}
}
finally
{
parentTransformers.get().pop();
}
}
else
{
if (firstTransformer instanceof AbstractContentTransformerLimits)
{
firstTransformerLimits = ((AbstractContentTransformerLimits)firstTransformer).
getLimits(sourceMimetype, firstTargetMimetype, options);
}
}
if (firstTransformerLimits != null)
{
limits = limits.combine(firstTransformerLimits);
}
return limits;
}
@@ -345,7 +436,22 @@ public class ComplexContentTransformer extends AbstractContentTransformer2 imple
}
// transform
transformer.transform(currentReader, currentWriter, options);
if (transformer == null)
{
try
{
parentTransformers.get().push(this);
contentService.transform(currentReader, currentWriter, options);
}
finally
{
parentTransformers.get().pop();
}
}
else
{
transformer.transform(currentReader, currentWriter, options);
}
// Must clear the sourceNodeRef after the first transformation to avoid later
// transformers thinking the intermediate file is the original node. However as


@@ -109,20 +109,11 @@ public class ContentTransformerRegistry
{
// Get the list of transformers
List<ContentTransformer> transformers = findTransformers(sourceMimetype, sourceSize, targetMimetype, options);
final Map<ContentTransformer,Long> activeTransformers = new HashMap<ContentTransformer, Long>();
// identify the performance of all the transformers
for (ContentTransformer transformer : transformers)
{
// Transformability can be dynamic, i.e. it may have become unusable
// Don't know why we do this test as it has already been done by findTransformers(...)
if (transformer.isTransformable(sourceMimetype, sourceSize, targetMimetype, options) == false)
{
// It is unreliable now.
continue;
}
long transformationTime = transformer.getTransformationTime();
activeTransformers.put(transformer, transformationTime);
}
@@ -151,34 +142,6 @@ public class ContentTransformerRegistry
*/
private List<ContentTransformer> findTransformers(String sourceMimetype, long sourceSize, String targetMimetype, TransformationOptions options)
{
// search for a simple transformer that can do the job
List<ContentTransformer> transformers = findDirectTransformers(sourceMimetype, sourceSize, targetMimetype, options);
// get the complex transformers that can do the job
List<ContentTransformer> complexTransformers = findComplexTransformer(sourceMimetype, targetMimetype, options);
transformers.addAll(complexTransformers);
// done
if (logger.isDebugEnabled())
{
logger.debug("Searched for transformer: \n" +
" source mimetype: " + sourceMimetype + "\n" +
" target mimetype: " + targetMimetype + "\n" +
" transformers: " + transformers);
}
return transformers;
}
/**
* Loops through the content transformers and picks the ones with the highest reliabilities.
* <p>
* Where there are several transformers that are equally reliable, they are all returned.
*
* @return Returns the most reliable transformers for the translation - empty list if there
* are none.
*/
private List<ContentTransformer> findDirectTransformers(String sourceMimetype, long sourceSize, String targetMimetype, TransformationOptions options)
{
//double maxReliability = 0.0;
List<ContentTransformer> transformers = new ArrayList<ContentTransformer>(2);
boolean foundExplicit = false;
@@ -206,19 +169,16 @@ public class ContentTransformerRegistry
}
}
// done
if (logger.isDebugEnabled())
{
logger.debug("Searched for transformer: \n" +
" source mimetype: " + sourceMimetype + "\n" +
" target mimetype: " + targetMimetype + "\n" +
" transformers: " + transformers);
}
return transformers;
}
/**
* Uses a list of known mimetypes to build transformations from several direct transformations.
*/
private List<ContentTransformer> findComplexTransformer(String sourceMimetype, String targetMimetype, TransformationOptions options)
{
// get a complete list of mimetypes
// TODO: Build complex transformers by searching for transformations by mimetype
return Collections.emptyList();
}
/**
* Recursive method to build up a list of content transformers
*/


@@ -783,6 +783,9 @@ public class SchemaBootstrap extends AbstractLifecycleBean
final Dialect dialect = Dialect.getDialect(cfg.getProperties());
String dialectStr = dialect.getClass().getSimpleName();
// Initialise Activiti DB, using an unclosable connection.
initialiseActivitiDBSchema(new UnclosableConnection(connection));
if (create)
{
// execute pre-create scripts (not patches)
@@ -865,9 +868,6 @@ public class SchemaBootstrap extends AbstractLifecycleBean
checkSchemaPatchScripts(cfg, connection, postUpdateScriptPatches, true);
}
// Initialise Activiti DB, using an unclosable connection
initialiseActivitiDBSchema(new UnclosableConnection(connection));
return create;
}


@@ -42,6 +42,7 @@ public class RemoteConnectorResponseImpl implements RemoteConnectorResponse
private String contentType;
private String charset;
private int status;
private Header[] headers;
private InputStream bodyStream;
@@ -53,21 +54,28 @@ public class RemoteConnectorResponseImpl implements RemoteConnectorResponse
* InputStream shouldn't be used as cleanup is needed
*/
public RemoteConnectorResponseImpl(RemoteConnectorRequest request, String contentType,
String charset, Header[] headers, InputStream response)
String charset, int status, Header[] headers, InputStream response)
{
this.request = request;
this.contentType = contentType;
this.charset = charset;
this.headers = headers;
this.status = status;
this.bodyStream = response;
this.bodyBytes = null;
}
public RemoteConnectorResponseImpl(RemoteConnectorRequest request, String contentType,
String charset, Header[] headers, byte[] response)
String charset, int status, Header[] headers, byte[] response)
{
this(request, contentType, charset, headers, new ByteArrayInputStream(response));
this(request, contentType, charset, status, headers, new ByteArrayInputStream(response));
this.bodyBytes = response;
}
@Override
public int getStatus()
{
return status;
}
@Override
public String getCharset()


@@ -24,8 +24,10 @@ import java.io.InputStream;
import org.alfresco.repo.content.MimetypeMap;
import org.alfresco.repo.security.authentication.AuthenticationException;
import org.alfresco.service.cmr.remoteconnector.RemoteConnectorClientException;
import org.alfresco.service.cmr.remoteconnector.RemoteConnectorRequest;
import org.alfresco.service.cmr.remoteconnector.RemoteConnectorResponse;
import org.alfresco.service.cmr.remoteconnector.RemoteConnectorServerException;
import org.alfresco.service.cmr.remoteconnector.RemoteConnectorService;
import org.alfresco.util.HttpClientHelper;
import org.apache.commons.httpclient.Header;
@@ -79,7 +81,8 @@ public class RemoteConnectorServiceImpl implements RemoteConnectorService
/**
* Executes the specified request, and return the response
*/
public RemoteConnectorResponse executeRequest(RemoteConnectorRequest request) throws IOException, AuthenticationException
public RemoteConnectorResponse executeRequest(RemoteConnectorRequest request) throws IOException, AuthenticationException,
RemoteConnectorClientException, RemoteConnectorServerException
{
RemoteConnectorRequestImpl reqImpl = (RemoteConnectorRequestImpl)request;
HttpMethodBase httpRequest = reqImpl.getMethodInstance();
@@ -134,13 +137,13 @@ public class RemoteConnectorServiceImpl implements RemoteConnectorService
// Now build the response
response = new RemoteConnectorResponseImpl(request, responseContentType, responseCharSet,
responseHdrs, wrappedStream);
status, responseHdrs, wrappedStream);
}
else
{
// Fairly small response, just keep the bytes and make life simple
response = new RemoteConnectorResponseImpl(request, responseContentType, responseCharSet,
responseHdrs, httpRequest.getResponseBody());
status, responseHdrs, httpRequest.getResponseBody());
// Now we have the bytes, we can close the HttpClient resources
httpRequest.releaseConnection();
@@ -164,26 +167,42 @@ public class RemoteConnectorServiceImpl implements RemoteConnectorService
logger.debug("Response was " + status + " " + statusText);
// Decide if we should throw an exception
if (status == Status.STATUS_FORBIDDEN)
if (status >= 300)
{
// Tidy if needed
if (httpRequest != null)
httpRequest.releaseConnection();
// Then report the error
throw new AuthenticationException(statusText);
// Specific exceptions
if (status == Status.STATUS_FORBIDDEN ||
status == Status.STATUS_UNAUTHORIZED)
{
throw new AuthenticationException(statusText);
}
// Server side exceptions
if (status >= 500 && status <= 599)
{
throw new RemoteConnectorServerException(status, statusText);
}
else
{
// Client request exceptions
if (httpRequest != null)
{
// Response wasn't too big and is available, supply it
throw new RemoteConnectorClientException(status, statusText, response);
}
else
{
// Response was too large, report without it
throw new RemoteConnectorClientException(status, statusText, null);
}
}
}
if (status == Status.STATUS_INTERNAL_SERVER_ERROR)
{
// Tidy if needed
if (httpRequest != null)
httpRequest.releaseConnection();
// Then report the error
throw new IOException(statusText);
}
// TODO Handle the rest of the different status codes
// Return our created response
// If we get here, then the request/response was all fine
// So, return our created response
return response;
}


@@ -404,7 +404,16 @@ public class RemoteAlfrescoTicketServiceImpl implements RemoteAlfrescoTicketServ
// If the credentials indicate the previous attempt failed, record as now working
if (! credentials.getLastAuthenticationSucceeded())
{
remoteCredentialsService.updateCredentialsAuthenticationSucceeded(true, credentials);
retryingTransactionHelper.doInTransaction(
new RetryingTransactionCallback<Void>()
{
public Void execute()
{
remoteCredentialsService.updateCredentialsAuthenticationSucceeded(true, credentials);
return null;
}
}, false, true
);
}
// Wrap and return


@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2011 Alfresco Software Limited.
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
@@ -18,7 +18,10 @@
*/
package org.alfresco.repo.security.sync;
import java.io.IOException;
import java.io.Serializable;
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.text.DateFormat;
import java.util.Collection;
import java.util.Collections;
@@ -38,6 +41,17 @@ import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import javax.management.AttributeNotFoundException;
import javax.management.InstanceNotFoundException;
import javax.management.IntrospectionException;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanException;
import javax.management.MBeanInfo;
import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.ReflectionException;
import org.alfresco.model.ContentModel;
import org.alfresco.repo.batch.BatchProcessor;
import org.alfresco.repo.batch.BatchProcessor.BatchProcessWorker;
@@ -161,6 +175,11 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
/** The number of worker threads. */
private int workerThreads = 2;
private MBeanServerConnection mbeanServer;
/** Allow a full sync to perform deletions? */
private boolean allowDeletions = true;
/**
* Sets the application context manager.
@@ -315,13 +334,51 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
{
this.workerThreads = workerThreads;
}
/**
* Full sync is run with deletions. By default this is set to true.
*
* @param allowDeletions
*/
public void setAllowDeletions(boolean allowDeletions)
{
this.allowDeletions = allowDeletions;
}
/*
* (non-Javadoc)
* @see org.alfresco.repo.security.sync.UserRegistrySynchronizer#synchronize(boolean, boolean, boolean)
*/
public void synchronize(boolean forceUpdate, boolean allowDeletions, final boolean splitTxns)
public void synchronize(boolean forceUpdate, boolean isFullSync, final boolean splitTxns)
{
if (ChainingUserRegistrySynchronizer.logger.isDebugEnabled())
{
if (forceUpdate)
{
ChainingUserRegistrySynchronizer.logger.debug("Running a full sync.");
}
else
{
ChainingUserRegistrySynchronizer.logger.debug("Running a differential sync.");
}
if (allowDeletions)
{
ChainingUserRegistrySynchronizer.logger.debug("deletions are allowed");
}
else
{
ChainingUserRegistrySynchronizer.logger.debug("deletions are not allowed");
}
// Don't proceed with the sync if the repository is read only
if (this.transactionService.isReadOnly())
{
ChainingUserRegistrySynchronizer.logger
.warn("Unable to proceed with user registry synchronization. Repository is read only.");
return;
}
}
// Don't proceed with the sync if the repository is read only
if (this.transactionService.isReadOnly())
{
@@ -414,17 +471,112 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
UserRegistry plugin = (UserRegistry) context.getBean(this.sourceBeanName);
if (!(plugin instanceof ActivateableBean) || ((ActivateableBean) plugin).isActive())
{
if (ChainingUserRegistrySynchronizer.logger.isDebugEnabled())
{
mbeanServer = (MBeanServerConnection) getApplicationContext().getBean("alfrescoMBeanServer");
try
{
StringBuilder nameBuff = new StringBuilder(200).append("Alfresco:Type=Configuration,Category=Authentication,id1=managed,id2=").append(
URLDecoder.decode(id, "UTF-8"));
ObjectName name = new ObjectName(nameBuff.toString());
if (mbeanServer != null && mbeanServer.isRegistered(name))
{
MBeanInfo info = mbeanServer.getMBeanInfo(name);
MBeanAttributeInfo[] attributes = info.getAttributes();
ChainingUserRegistrySynchronizer.logger.debug(id + " attributes:");
for (MBeanAttributeInfo attribute : attributes)
{
Object value = mbeanServer.getAttribute(name, attribute.getName());
ChainingUserRegistrySynchronizer.logger.debug(attribute.getName() + " = " + value);
}
}
}
catch(UnsupportedEncodingException e)
{
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger
.warn("Exception during logging", e);
}
}
catch (MalformedObjectNameException e)
{
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger
.warn("Exception during logging", e);
}
}
catch (InstanceNotFoundException e)
{
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger
.warn("Exception during logging", e);
}
}
catch (IntrospectionException e)
{
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger
.warn("Exception during logging", e);
}
}
catch (AttributeNotFoundException e)
{
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger
.warn("Exception during logging", e);
}
}
catch (ReflectionException e)
{
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger
.warn("Exception during logging", e);
}
}
catch (MBeanException e)
{
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger
.warn("Exception during logging", e);
}
}
catch (IOException e)
{
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger
.warn("Exception during logging", e);
}
}
}
if (ChainingUserRegistrySynchronizer.logger.isInfoEnabled())
{
ChainingUserRegistrySynchronizer.logger
.info("Synchronizing users and groups with user registry '" + id + "'");
}
if (allowDeletions && ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
if (isFullSync && ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger
.warn("Full synchronization with user registry '"
+ id
+ "'; some users and groups previously created by synchronization with this user registry may be removed.");
+ id + "'");
if (allowDeletions)
{
ChainingUserRegistrySynchronizer.logger
.warn("Some users and groups previously created by synchronization with this user registry may be removed.");
}
else
{
ChainingUserRegistrySynchronizer.logger
.warn("Deletions are disabled. Users and groups removed from this registry will be logged only and will remain in the repository. Users previously found in a different registry will be moved in the repository rather than recreated.");
}
}
// Work out whether we should do the work in a separate transaction (it's most performant if we
// bunch it into small transactions, but if we are doing a sync on login, it has to be the same
@@ -432,13 +584,14 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
boolean requiresNew = splitTxns
|| AlfrescoTransactionSupport.getTransactionReadState() == TxnReadState.TXN_READ_ONLY;
syncWithPlugin(id, plugin, forceUpdate, allowDeletions, requiresNew, visitedZoneIds, allZoneIds);
syncWithPlugin(id, plugin, forceUpdate, isFullSync, requiresNew, visitedZoneIds, allZoneIds);
}
}
catch (NoSuchBeanDefinitionException e)
{
// Ignore and continue
}
}
}
catch (RuntimeException e)
@@ -583,7 +736,7 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
* the user registry and updated locally. When <code>false</code> then each source is only queried for
* those users and groups modified since the most recent modification date of all the objects last
* queried from that same source.
* @param allowDeletions
* @param isFullSync
* Should a complete set of user and group IDs be queried from the user registries in order to determine
* deletions? This parameter is independent of <code>force</code> as a separate query is run to process
* updates.
@@ -602,7 +755,7 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
* or group needs to be 're-zoned'.
*/
private void syncWithPlugin(final String zone, UserRegistry userRegistry, boolean forceUpdate,
boolean allowDeletions, boolean splitTxns, final Set<String> visitedZoneIds, final Set<String> allZoneIds)
boolean isFullSync, boolean splitTxns, final Set<String> visitedZoneIds, final Set<String> allZoneIds)
{
// Create a prefixed zone ID for use with the authority service
final String zoneId = AuthorityService.ZONE_AUTH_EXT_PREFIX + zone;
@@ -685,10 +838,24 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
// Check whether the group is in any of the authentication chain zones
Set<String> intersection = new TreeSet<String>(groupZones);
intersection.retainAll(allZoneIds);
if (intersection.isEmpty())
// Check whether the group is in any of the higher priority authentication chain zones
Set<String> visited = new TreeSet<String>(intersection);
visited.retainAll(visitedZoneIds);
if (groupZones.contains(zoneId))
{
// The group exists, but not in a zone that's in the authentication chain. May be due to
// upgrade or zone changes. Let's re-zone them
// The group already existed in this zone: update the group
updateGroup(group, true);
}
else if (!visited.isEmpty())
{
// A group that exists in a different zone with higher precedence
return;
}
else if (!allowDeletions || intersection.isEmpty())
{
// Deletions are disallowed or the group exists, but not in a zone that's in the authentication
// chain. May be due to upgrade or zone changes. Let's re-zone them
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger.warn("Updating group '" + groupShortName
@@ -698,21 +865,12 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
ChainingUserRegistrySynchronizer.this.authorityService.removeAuthorityFromZones(groupName,
groupZones);
ChainingUserRegistrySynchronizer.this.authorityService.addAuthorityToZones(groupName, zoneSet);
}
if (groupZones.contains(zoneId) || intersection.isEmpty())
{
// The group already existed in this zone or no valid zone: update the group
// The group now exists in this zone: update the group
updateGroup(group, true);
}
else
{
// Check whether the group is in any of the higher priority authentication chain zones
intersection.retainAll(visitedZoneIds);
if (!intersection.isEmpty())
{
// A group that exists in a different zone with higher precedence
return;
}
// The group existed, but in a zone with lower precedence
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
@@ -824,8 +982,6 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
{
if (!newChildPersons.remove(child))
{
// Make sure each person with association changes features as a key in the creation map
recordParentAssociationCreation(child, null);
recordParentAssociationDeletion(child, groupName);
}
}
@@ -849,10 +1005,14 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
// Create new associations
for (String child : newChildPersons)
{
// Make sure each person with association changes features as a key in the deletion map
recordParentAssociationDeletion(child, null);
recordParentAssociationCreation(child, groupName);
}
for (String child : newChildGroups)
{
// Make sure each group with association changes features as a key in the deletion map
recordParentAssociationDeletion(child, null);
recordParentAssociationCreation(child, groupName);
}
}
@@ -1094,11 +1254,11 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
}
}
public void processGroups(UserRegistry userRegistry, boolean allowDeletions, boolean splitTxns)
public void processGroups(UserRegistry userRegistry, boolean isFullSync, boolean splitTxns)
{
// If we got back some groups, we have to cross reference them with the set of known authorities
if (allowDeletions || !this.groupParentAssocsToCreate.isEmpty()
|| !this.personParentAssocsToCreate.isEmpty())
if (isFullSync || !this.groupParentAssocsToDelete.isEmpty()
|| !this.groupParentAssocsToDelete.isEmpty())
{
final Set<String> allZonePersons = newPersonSet();
final Set<String> allZoneGroups = new TreeSet<String>();
@@ -1117,17 +1277,19 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
}
}, true, splitTxns);
final Set<String> personDeletionCandidates = newPersonSet();
personDeletionCandidates.addAll(allZonePersons);
final Set<String> groupDeletionCandidates = new TreeSet<String>();
groupDeletionCandidates.addAll(allZoneGroups);
allZoneGroups.addAll(this.groupsToCreate.keySet());
// Prune our set of authorities according to deletions
if (allowDeletions)
if (isFullSync)
{
final Set<String> personDeletionCandidates = newPersonSet();
personDeletionCandidates.addAll(allZonePersons);
final Set<String> groupDeletionCandidates = new TreeSet<String>();
groupDeletionCandidates.addAll(allZoneGroups);
this.deletionCandidates = new TreeSet<String>();
for (String person : userRegistry.getPersonNames())
{
personDeletionCandidates.remove(person);
@@ -1141,14 +1303,80 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
this.deletionCandidates = new TreeSet<String>();
this.deletionCandidates.addAll(personDeletionCandidates);
this.deletionCandidates.addAll(groupDeletionCandidates);
if (allowDeletions)
{
allZonePersons.removeAll(personDeletionCandidates);
allZoneGroups.removeAll(groupDeletionCandidates);
}
else
{
if (!personDeletionCandidates.isEmpty())
{
ChainingUserRegistrySynchronizer.logger.warn("The following missing users are not being deleted as allowDeletions == false");
for (String person : personDeletionCandidates)
{
ChainingUserRegistrySynchronizer.logger.warn(" " + person);
}
}
if (!groupDeletionCandidates.isEmpty())
{
ChainingUserRegistrySynchronizer.logger.warn("The following missing groups are not being deleted as allowDeletions == false");
for (String group : groupDeletionCandidates)
{
ChainingUserRegistrySynchronizer.logger.warn(" " + group);
}
}
// Complete association deletion information by scanning deleted groups
BatchProcessor<String> groupScanner = new BatchProcessor<String>(zone
+ " Missing Authority Scanning",
ChainingUserRegistrySynchronizer.this.transactionService
.getRetryingTransactionHelper(), this.deletionCandidates,
ChainingUserRegistrySynchronizer.this.workerThreads, 20,
ChainingUserRegistrySynchronizer.this.applicationEventPublisher,
ChainingUserRegistrySynchronizer.logger,
ChainingUserRegistrySynchronizer.this.loggingInterval);
groupScanner.process(new BaseBatchProcessWorker<String>()
{
allZonePersons.removeAll(personDeletionCandidates);
allZoneGroups.removeAll(groupDeletionCandidates);
@Override
public String getIdentifier(String entry)
{
return entry;
}
@Override
public void process(String authority) throws Throwable
{
// Disassociate it from this zone, allowing it to be reclaimed by something further down the chain
ChainingUserRegistrySynchronizer.this.authorityService.removeAuthorityFromZones(authority,
Collections.singleton(zoneId));
// For groups, remove all members
if (AuthorityType.getAuthorityType(authority) != AuthorityType.USER)
{
String groupShortName = ChainingUserRegistrySynchronizer.this.authorityService
.getShortName(authority);
String groupDisplayName = ChainingUserRegistrySynchronizer.this.authorityService
.getAuthorityDisplayName(authority);
NodeDescription dummy = new NodeDescription(groupShortName + " (Deleted)");
PropertyMap dummyProperties = dummy.getProperties();
dummyProperties.put(ContentModel.PROP_AUTHORITY_NAME, authority);
if (groupDisplayName != null)
{
dummyProperties.put(ContentModel.PROP_AUTHORITY_DISPLAY_NAME, groupDisplayName);
}
updateGroup(dummy, true);
}
}
}, splitTxns);
}
}
// Prune the group associations now that we have complete information
this.groupParentAssocsToCreate.keySet().retainAll(allZoneGroups);
logRetainParentAssociations(this.groupParentAssocsToDelete, allZoneGroups);
logRetainParentAssociations(this.groupParentAssocsToCreate, allZoneGroups);
this.finalGroupChildAssocs.keySet().retainAll(allZoneGroups);
// Pruning person associations will have to wait until we have passed over all persons and built up
@@ -1234,17 +1462,17 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
}
// Remove all the associations we have already dealt with
this.personParentAssocsToCreate.keySet().removeAll(this.personsProcessed);
this.personParentAssocsToDelete.keySet().removeAll(this.personsProcessed);
// Filter out associations to authorities that simply can't exist (and log if debugging is enabled)
logRetainParentAssociations(this.personParentAssocsToCreate, this.allZonePersons);
// Update associations to persons not updated themselves
if (!this.personParentAssocsToCreate.isEmpty())
if (!this.personParentAssocsToDelete.isEmpty())
{
BatchProcessor<Map.Entry<String, Set<String>>> groupCreator = new BatchProcessor<Map.Entry<String, Set<String>>>(
zone + " Person Association", ChainingUserRegistrySynchronizer.this.transactionService
.getRetryingTransactionHelper(), this.personParentAssocsToCreate.entrySet(),
.getRetryingTransactionHelper(), this.personParentAssocsToDelete.entrySet(),
ChainingUserRegistrySynchronizer.this.workerThreads, 20,
ChainingUserRegistrySynchronizer.this.applicationEventPublisher,
ChainingUserRegistrySynchronizer.logger,
@@ -1340,7 +1568,7 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
final Analyzer groupAnalyzer = new Analyzer(lastModifiedMillis);
int groupProcessedCount = groupProcessor.process(groupAnalyzer, splitTxns);
groupAnalyzer.processGroups(userRegistry, allowDeletions, splitTxns);
groupAnalyzer.processGroups(userRegistry, isFullSync, splitTxns);
// Process persons and their parent associations
@@ -1413,10 +1641,19 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
// Check whether the user is in any of the authentication chain zones
Set<String> intersection = new TreeSet<String>(zones);
intersection.retainAll(allZoneIds);
if (intersection.size() == 0)
// Check whether the user is in any of the higher priority authentication chain zones
Set<String> visited = new TreeSet<String>(intersection);
visited.retainAll(visitedZoneIds);
if (visited.size() > 0)
{
// The person exists, but not in a zone that's in the authentication chain. May be due
// to upgrade or zone changes. Let's re-zone them
// A person that exists in a different zone with higher precedence - ignore
return;
}
else if (!allowDeletions || intersection.isEmpty())
{
// The person exists, but in a different zone. Either deletions are disallowed or the zone is
// not in the authentication chain. May be due to upgrade or zone changes. Let's re-zone them
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
ChainingUserRegistrySynchronizer.logger.warn("Updating user '" + personName
@@ -1431,14 +1668,6 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
}
else
{
// Check whether the user is in any of the higher priority authentication chain zones
intersection.retainAll(visitedZoneIds);
if (intersection.size() > 0)
{
// A person that exists in a different zone with higher precedence - ignore
return;
}
// The person existed, but in a zone with lower precedence
if (ChainingUserRegistrySynchronizer.logger.isWarnEnabled())
{
@@ -1491,7 +1720,7 @@ public class ChainingUserRegistrySynchronizer extends AbstractLifecycleBean impl
// Delete authorities if we have complete information for the zone
Set<String> deletionCandidates = groupAnalyzer.getDeletionCandidates();
if (allowDeletions && !deletionCandidates.isEmpty())
if (isFullSync && allowDeletions && !deletionCandidates.isEmpty())
{
BatchProcessor<String> authorityDeletionProcessor = new BatchProcessor<String>(
zone + " Authority Deletion", this.transactionService.getRetryingTransactionHelper(),

View File

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2010 Alfresco Software Limited.
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
@@ -208,7 +208,19 @@ public class ChainingUserRegistrySynchronizerTest extends TestCase
*/
public void tearDownTestUsersAndGroups() throws Exception
{
// Wipe out everything that was in Z1 and Z2
// Re-zone everything that may have gone astray
this.applicationContextManager.setUserRegistries(new MockUserRegistry("Z0", new NodeDescription[]
{
newPerson("U1"), newPerson("U2"), newPerson("U3"), newPerson("U4"), newPerson("U5"), newPerson("U6"),
newPerson("U7")
}, new NodeDescription[]
{
newGroup("G1"), newGroup("G2"), newGroup("G3"), newGroup("G4"), newGroup("G5"), newGroup("G6"),
newGroup("G7")
}), new MockUserRegistry("Z1", new NodeDescription[] {}, new NodeDescription[] {}), new MockUserRegistry("Z2",
new NodeDescription[] {}, new NodeDescription[] {}));
this.synchronizer.synchronize(true, true, true);
// Wipe out everything that was in Z0 - Z2
this.applicationContextManager.setUserRegistries(new MockUserRegistry("Z0", new NodeDescription[] {},
new NodeDescription[] {}), new MockUserRegistry("Z1", new NodeDescription[] {},
new NodeDescription[] {}), new MockUserRegistry("Z2", new NodeDescription[] {},
@@ -382,6 +394,53 @@ public class ChainingUserRegistrySynchronizerTest extends TestCase
tearDownTestUsersAndGroups();
}
/**
* Tests a forced update of the test users and groups with deletions disabled. No users or groups should be deleted,
* whether or not they move between registries. Groups that would have been deleted should have no members and should only be
* in the default zone.
*
* @throws Exception
* the exception
*/
public void testForcedUpdateWithoutDeletions() throws Exception
{
UserRegistrySynchronizer synchronizer = (UserRegistrySynchronizer) ChainingUserRegistrySynchronizerTest.context
.getBean("testUserRegistrySynchronizerPreventDeletions");
setUpTestUsersAndGroups();
this.applicationContextManager.setUserRegistries(new MockUserRegistry("Z0", new NodeDescription[]
{
newPerson("U2"), newPerson("U3"), newPerson("U4"),
}, new NodeDescription[]
{
newGroup("G1"), newGroup("G2"),
}), new MockUserRegistry("Z1", new NodeDescription[]
{
newPerson("U5"), newPerson("u6"),
}, new NodeDescription[] {}), new MockUserRegistry("Z2", new NodeDescription[]
{
newPerson("U6"),
}, new NodeDescription[] {}));
synchronizer.synchronize(true, true, true);
this.retryingTransactionHelper.doInTransaction(new RetryingTransactionCallback<Object>()
{
public Object execute() throws Throwable
{
assertExists("Z0", "U2");
assertExists("Z0", "U3");
assertExists("Z0", "U4");
assertExists("Z1", "U5");
assertExists("Z1", "u6");
assertExists(null, "U1");
assertExists(null, "U7");
assertExists(null, "G5");
assertExists(null, "G6");
return null;
}
}, false, true);
tearDownTestUsersAndGroups();
}
/**
* Tests a forced update of the test users and groups where some of the users change their case and some groups
* appear with different case.
@@ -604,8 +663,17 @@ public class ChainingUserRegistrySynchronizerTest extends TestCase
assertTrue(this.authorityService.authorityExists(longName));
// Check in correct zone
assertTrue(this.authorityService.getAuthorityZones(longName).contains(
AuthorityService.ZONE_AUTH_EXT_PREFIX + zone));
if (zone == null)
{
assertEquals(Collections.singleton(AuthorityService.ZONE_APP_DEFAULT), this.authorityService
.getAuthorityZones(longName));
}
else
{
assertTrue(this.authorityService.getAuthorityZones(longName).contains(
AuthorityService.ZONE_AUTH_EXT_PREFIX + zone));
}
if (AuthorityType.getAuthorityType(longName).equals(AuthorityType.GROUP))
{
// Check groups have expected members

View File

@@ -52,7 +52,7 @@ public interface UserRegistrySynchronizer
* the user registry and updated locally. When <code>false</code> then each source is only queried for
* those users and groups modified since the most recent modification date of all the objects last
* queried from that same source.
* @param allowDeletions
* @param isFullSync
* Should a complete set of user and group IDs be queried from the user registries in order to determine
* deletions? This parameter is independent of <code>force</code> as a separate query is run to process
* updates.
@@ -62,7 +62,7 @@ public interface UserRegistrySynchronizer
* <code>false</code>, all users and groups are processed in the current transaction. This is required if
* calling synchronously (e.g. in response to an authentication event in the same transaction).
*/
public void synchronize(boolean forceUpdate, boolean allowDeletions, boolean splitTxns);
public void synchronize(boolean forceUpdate, boolean isFullSync, boolean splitTxns);
/**
* Gets the set of property names that are auto-mapped for the user with the given user name. These should remain
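A minimal caller sketch for the renamed parameter (the bean name and context lookup are illustrative assumptions, not part of this change):

    // Hypothetical wiring: obtain the synchronizer from the application context
    UserRegistrySynchronizer synchronizer =
            (UserRegistrySynchronizer) applicationContext.getBean("userRegistrySynchronizer");
    // forceUpdate = true: re-query and update everything, not just recent changes
    // isFullSync  = true: query the complete ID set so deletions/re-zoning can be determined
    // splitTxns   = true: process in multiple transactions (asynchronous usage)
    synchronizer.synchronize(true, true, true);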

View File

@@ -571,6 +571,10 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
}
else
{
if (LDAPUserRegistry.logger.isDebugEnabled())
{
LDAPUserRegistry.logger.debug("Person DN recognized: " + nameAttribute.get());
}
personNames.add((String) nameAttribute.get());
}
}
@@ -614,6 +618,10 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
else
{
String authority = "GROUP_" + (String) nameAttribute.get();
if (LDAPUserRegistry.logger.isDebugEnabled())
{
LDAPUserRegistry.logger.debug("Group DN recognized: " + authority);
}
groupNames.add(authority);
}
}
@@ -716,7 +724,11 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
Attribute memAttribute = getRangeRestrictedAttribute(attributes,
LDAPUserRegistry.this.memberAttributeName);
int nextStart = LDAPUserRegistry.this.attributeBatchSize;
if (LDAPUserRegistry.logger.isDebugEnabled())
{
LDAPUserRegistry.logger.debug("Processing group: " + gid +
", from source: " + group.getSourceId());
}
// Loop until we get to the end of the range
while (memAttribute != null)
{
@@ -745,6 +757,10 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
&& (nameAttribute = nameAttributes
.get(LDAPUserRegistry.this.userIdAttributeName)) != null)
{
if (LDAPUserRegistry.logger.isDebugEnabled())
{
LDAPUserRegistry.logger.debug("User DN recognized: " + nameAttribute.get());
}
childAssocs.add((String) nameAttribute.get());
continue;
}
@@ -754,6 +770,10 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
&& (nameAttribute = nameAttributes
.get(LDAPUserRegistry.this.groupIdAttributeName)) != null)
{
if (LDAPUserRegistry.logger.isDebugEnabled())
{
LDAPUserRegistry.logger.debug("Group DN recognized: " + "GROUP_" + nameAttribute.get());
}
childAssocs.add("GROUP_" + nameAttribute.get());
continue;
}
@@ -793,7 +813,10 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
continue;
}
}
if (LDAPUserRegistry.logger.isDebugEnabled())
{
LDAPUserRegistry.logger.debug("User DN recognized by directory lookup: " + nameAttribute.get());
}
childAssocs.add((String) nameAttribute.get());
continue;
}
@@ -815,6 +838,10 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
continue;
}
}
if (LDAPUserRegistry.logger.isDebugEnabled())
{
LDAPUserRegistry.logger.debug("Group DN recognized by directory lookup: " + "GROUP_" + nameAttribute.get());
}
childAssocs.add("GROUP_" + nameAttribute.get());
continue;
}
@@ -844,6 +871,10 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
{
// The member attribute didn't parse as a DN. So assume we have a group class like
// posixGroup (FDS) that directly lists user names
if (LDAPUserRegistry.logger.isDebugEnabled())
{
LDAPUserRegistry.logger.debug("Member DN recognized as posixGroup: " + attribute);
}
childAssocs.add(attribute);
}
}
@@ -1121,7 +1152,20 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
SearchControls searchControls = new SearchControls();
searchControls.setSearchScope(SearchControls.SUBTREE_SCOPE);
searchControls.setReturningAttributes(returningAttributes);
if (LDAPUserRegistry.logger.isDebugEnabled())
{
LDAPUserRegistry.logger.debug("Processing query");
LDAPUserRegistry.logger.debug("Search base: " + searchBase);
LDAPUserRegistry.logger.debug(" Return result limit: " + searchControls.getCountLimit());
LDAPUserRegistry.logger.debug(" DerefLink: " + searchControls.getDerefLinkFlag());
LDAPUserRegistry.logger.debug(" Return named object: " + searchControls.getReturningObjFlag());
LDAPUserRegistry.logger.debug(" Time limit for search: " + searchControls.getTimeLimit());
LDAPUserRegistry.logger.debug(" Attributes to return: " + returningAttributes.length + " items.");
for (String ra : returningAttributes)
{
LDAPUserRegistry.logger.debug(" Attribute: " + ra);
}
}
InitialDirContext ctx = null;
try
{
@@ -1285,6 +1329,11 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
public void process(SearchResult result) throws NamingException, ParseException
{
this.count++;
if (LDAPUserRegistry.logger.isDebugEnabled())
{
String personName = result.getNameInNamespace();
LDAPUserRegistry.logger.debug("Processing person: " + personName);
}
}
/*

View File

@@ -50,7 +50,7 @@ public abstract class AbstractTenantRoutingContentStore extends AbstractRoutingC
private SimpleCache<String, ContentStore> singletonCache; // eg. for contentStore
private final String KEY_CONTENT_STORE = "key.tenant.routing.content.store";
public void setDefaultRootDir(String defaultRootDirectory)
public void setRootLocation(String defaultRootDirectory)
{
this.defaultRootDirectory = defaultRootDirectory;
}
@@ -70,8 +70,7 @@ public abstract class AbstractTenantRoutingContentStore extends AbstractRoutingC
this.singletonCache = singletonCache;
}
/*
* (non-Javadoc)
/* (non-Javadoc)
* @see org.springframework.context.ApplicationContextAware#setApplicationContext(org.springframework.context.
* ApplicationContext)
*/
@@ -80,6 +79,12 @@ public abstract class AbstractTenantRoutingContentStore extends AbstractRoutingC
this.applicationContext = applicationContext;
}
@Override
public String getRootLocation()
{
return defaultRootDirectory;
}
@Override
protected ContentStore selectWriteStore(ContentContext ctx)
{
@@ -149,7 +154,7 @@ public abstract class AbstractTenantRoutingContentStore extends AbstractRoutingC
public void init()
{
String rootDir = defaultRootDirectory;
String rootDir = getRootLocation();
Tenant tenant = tenantService.getTenant(tenantService.getCurrentUserDomain());
if (tenant != null)
{
@@ -177,10 +182,5 @@ public abstract class AbstractTenantRoutingContentStore extends AbstractRoutingC
destroy();
}
public String getDefaultRootDir()
{
return this.defaultRootDirectory;
}
protected abstract ContentStore initContentStore(ApplicationContext ctx, String contentRoot);
}

View File

@@ -32,6 +32,7 @@ import net.sf.acegisecurity.providers.encoding.PasswordEncoder;
import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.repo.admin.RepoModelDefinition;
import org.alfresco.repo.content.ContentStore;
import org.alfresco.repo.dictionary.DictionaryComponent;
import org.alfresco.repo.domain.tenant.TenantAdminDAO;
import org.alfresco.repo.domain.tenant.TenantEntity;
@@ -85,7 +86,7 @@ public class MultiTAdminServiceImpl implements TenantAdminService, ApplicationCo
protected DictionaryComponent dictionaryComponent;
protected TenantAdminDAO tenantAdminDAO;
protected PasswordEncoder passwordEncoder;
protected TenantRoutingFileContentStore tenantFileContentStore;
protected ContentStore tenantFileContentStore;
private ThumbnailRegistry thumbnailRegistry;
private WorkflowService workflowService;
@@ -166,7 +167,7 @@ public class MultiTAdminServiceImpl implements TenantAdminService, ApplicationCo
this.passwordEncoder = passwordEncoder;
}
public void setTenantFileContentStore(TenantRoutingFileContentStore tenantFileContentStore)
public void setTenantFileContentStore(ContentStore tenantFileContentStore)
{
this.tenantFileContentStore = tenantFileContentStore;
}
@@ -259,9 +260,12 @@ public class MultiTAdminServiceImpl implements TenantAdminService, ApplicationCo
// register dictionary - to allow enable/disable tenant callbacks
register(dictionaryComponent);
// register file store - to allow enable/disable tenant callbacks
// note: tenantFileContentStore must be registered before dictionaryRepositoryBootstrap
register(tenantFileContentStore, 0);
if (tenantFileContentStore instanceof TenantDeployer)
{
// register file store - to allow enable/disable tenant callbacks
// note: tenantFileContentStore must be registered before dictionaryRepositoryBootstrap
register((TenantDeployer)tenantFileContentStore, 0);
}
UserTransaction userTransaction = transactionService.getUserTransaction();
@@ -272,12 +276,18 @@ public class MultiTAdminServiceImpl implements TenantAdminService, ApplicationCo
// bootstrap Tenant Service internal cache
List<Tenant> tenants = getAllTenants();
int enabledCount = 0;
int disabledCount = 0;
for (Tenant tenant : tenants)
{
if ((! (tenantFileContentStore instanceof AbstractTenantRoutingContentStore)) && (! tenantFileContentStore.getRootLocation().equals(tenant.getRootContentStoreDir())))
{
// eg. MT will not work with replicating-content-services-context.sample if tenants are not co-mingled
throw new AlfrescoRuntimeException("MT: cannot start tenants - TenantRoutingContentStore is not configured AND not all tenants use co-mingled content store");
}
if (tenant.isEnabled())
{
// this will also call tenant deployers registered so far ...
@@ -359,7 +369,11 @@ public class MultiTAdminServiceImpl implements TenantAdminService, ApplicationCo
AuthenticationUtil.setFullyAuthenticatedUser(getSystemUser(tenantDomain));
dictionaryComponent.init();
tenantFileContentStore.init();
if (tenantFileContentStore instanceof TenantDeployer)
{
((TenantDeployer)tenantFileContentStore).init();
}
// create tenant-specific stores
ImporterBootstrap userImporterBootstrap = (ImporterBootstrap)ctx.getBean("userBootstrap-mt");
@@ -367,14 +381,14 @@ public class MultiTAdminServiceImpl implements TenantAdminService, ApplicationCo
ImporterBootstrap systemImporterBootstrap = (ImporterBootstrap)ctx.getBean("systemBootstrap-mt");
bootstrapSystemTenantStore(systemImporterBootstrap, tenantDomain);
// deprecated
ImporterBootstrap versionImporterBootstrap = (ImporterBootstrap)ctx.getBean("versionBootstrap-mt");
bootstrapVersionTenantStore(versionImporterBootstrap, tenantDomain);
ImporterBootstrap version2ImporterBootstrap = (ImporterBootstrap)ctx.getBean("version2Bootstrap-mt");
bootstrapVersionTenantStore(version2ImporterBootstrap, tenantDomain);
ImporterBootstrap spacesArchiveImporterBootstrap = (ImporterBootstrap)ctx.getBean("spacesArchiveBootstrap-mt");
bootstrapSpacesArchiveTenantStore(spacesArchiveImporterBootstrap, tenantDomain);
@@ -444,7 +458,11 @@ public class MultiTAdminServiceImpl implements TenantAdminService, ApplicationCo
AuthenticationUtil.setFullyAuthenticatedUser(getSystemUser(tenantDomain));
dictionaryComponent.init();
tenantFileContentStore.init();
if (tenantFileContentStore instanceof TenantDeployer)
{
((TenantDeployer)tenantFileContentStore).init();
}
// import tenant-specific stores
importBootstrapUserTenantStore(tenantDomain, directorySource);
@@ -1141,19 +1159,21 @@ public class MultiTAdminServiceImpl implements TenantAdminService, ApplicationCo
private void initTenant(String tenantDomain, String rootContentStoreDir)
{
validateTenantName(tenantDomain);
if (existsTenant(tenantDomain))
{
throw new AlfrescoRuntimeException("Tenant already exists: " + tenantDomain);
}
if (rootContentStoreDir == null)
{
rootContentStoreDir = tenantFileContentStore.getDefaultRootDir();
}
else
if (rootContentStoreDir != null)
{
if (! (tenantFileContentStore instanceof AbstractTenantRoutingContentStore))
{
// eg. MT will not work with replicating-content-services-context.sample
throw new AlfrescoRuntimeException("MT: cannot initialse tenant - TenantRoutingContentStore is not configured AND tenant is not using co-mingled content store (ie. default root location)");
}
File tenantRootDir = new File(rootContentStoreDir);
if ((tenantRootDir.exists()) && (tenantRootDir.list().length != 0))
{
@@ -1161,6 +1181,11 @@ public class MultiTAdminServiceImpl implements TenantAdminService, ApplicationCo
}
}
if (rootContentStoreDir == null)
{
rootContentStoreDir = tenantFileContentStore.getRootLocation();
}
// init - need to enable tenant (including tenant service) before stores bootstrap
TenantEntity tenantEntity = new TenantEntity(tenantDomain);
tenantEntity.setEnabled(true);

View File

@@ -24,5 +24,4 @@ package org.alfresco.repo.tenant;
*/
public interface TenantRoutingContentStore extends TenantDeployer
{
public String getDefaultRootDir();
}

View File

@@ -0,0 +1,65 @@
/*
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.service.cmr.remoteconnector;
import java.io.IOException;
/**
* An exception thrown when the remote server indicates that the
* client has made a mistake with the request.
* This exception is normally thrown for responses in the 4xx range,
* e.g. if a 404 (Not Found) is returned by the remote server.
*
* Provided that the response was not too large, the response from
* the server will also be available.
*
* @author Nick Burch
* @since 4.0.3
*/
public class RemoteConnectorClientException extends IOException
{
private static final long serialVersionUID = -639209368873463536L;
private final int statusCode;
private final String statusText;
private final RemoteConnectorResponse response;
public RemoteConnectorClientException(int statusCode, String statusText,
RemoteConnectorResponse response)
{
super(statusText);
this.statusCode = statusCode;
this.statusText = statusText;
this.response = response;
}
public int getStatusCode()
{
return statusCode;
}
public String getStatusText()
{
return statusText;
}
public RemoteConnectorResponse getResponse()
{
return response;
}
}
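A usage sketch for the new exception types (the service wiring and URL are assumptions; AuthenticationException is unchecked, so it is not caught here):

    public void callRemote(RemoteConnectorService service) throws IOException
    {
        RemoteConnectorRequest request =
                service.buildRequest("http://localhost:8080/alfresco/ping", "GET"); // illustrative URL
        try
        {
            RemoteConnectorResponse response = service.executeRequest(request);
            // 2xx: consume the response as normal
        }
        catch (RemoteConnectorClientException e)
        {
            // 4xx: the client request was at fault; the server's response,
            // if it was small enough to retain, is available for inspection
            RemoteConnectorResponse errorResponse = e.getResponse();
        }
        catch (RemoteConnectorServerException e)
        {
            // 5xx: the remote server could not process the request
        }
    }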

View File

@@ -22,6 +22,7 @@ import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.httpclient.Header;
import org.springframework.extensions.webscripts.Status;
/**
* Helper wrapper around a Remote Response, for a request that
@@ -37,6 +38,11 @@ public interface RemoteConnectorResponse
*/
RemoteConnectorRequest getRequest();
/**
* @return The HTTP {@link Status} Code for the response
*/
int getStatus();
/**
* @return The raw response content type, if available
*/

View File

@@ -0,0 +1,53 @@
/*
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.service.cmr.remoteconnector;
import java.io.IOException;
/**
* An exception thrown when the remote server indicates that it
* has encountered a problem with the request, and cannot process
* it. This typically means a 5xx response.
*
* @author Nick Burch
* @since 4.0.3
*/
public class RemoteConnectorServerException extends IOException
{
private static final long serialVersionUID = -639209368873463536L;
private final int statusCode;
private final String statusText;
public RemoteConnectorServerException(int statusCode, String statusText)
{
super(statusText);
this.statusCode = statusCode;
this.statusText = statusText;
}
public int getStatusCode()
{
return statusCode;
}
public String getStatusText()
{
return statusText;
}
}

View File

@@ -44,9 +44,15 @@ public interface RemoteConnectorService
RemoteConnectorRequest buildRequest(String url, String method);
/**
* Executes the specified request, and return the response
* Executes the specified request, and returns the response.
*
* @throws IOException If there was a problem with the communication to the server
* @throws AuthenticationException If the authentication details supplied were not accepted
* @throws RemoteConnectorClientException If the server indicates the client request was invalid
* @throws RemoteConnectorServerException If the server was itself unable to perform the request
*/
RemoteConnectorResponse executeRequest(RemoteConnectorRequest request) throws IOException, AuthenticationException;
RemoteConnectorResponse executeRequest(RemoteConnectorRequest request) throws IOException, AuthenticationException,
RemoteConnectorClientException, RemoteConnectorServerException;
/**
* Executes the given request, requesting a JSON response, and

View File

@@ -223,6 +223,25 @@ public interface ContentService
@Auditable(parameters = {"sourceMimetype", "targetMimetype"})
public ContentTransformer getTransformer(String sourceMimetype, String targetMimetype);
/**
* Fetch the transformers that are capable of transforming the content in the
* given source mimetype to the given target mimetype with the provided transformation
* options.
* <p/>
* The transformation options provide a finer-grained way of discovering the correct transformer,
* since the values and type of the options provided are considered by the transformer when
* deciding whether it can satisfy the transformation request.
* @param sourceUrl TODO
* @param sourceMimetype the source mimetype
* @param sourceSize the source size (bytes). Ignored if negative.
* @param targetMimetype the target mimetype
* @param options the transformation options
*
* @return ContentTransformer the transformers that can be used, or null if none are available
*/
@Auditable(parameters = {"sourceMimetype", "sourceSize", "targetMimetype", "options"})
public List<ContentTransformer> getTransformers(String sourceUrl, String sourceMimetype, long sourceSize, String targetMimetype, TransformationOptions options);
/**
* Fetch the transformer that is capable of transforming the content in the
* given source mimetype to the given target mimetype with the provided transformation
@@ -261,32 +280,14 @@ public interface ContentService
public long getMaxSourceSizeBytes(String sourceMimetype, String targetMimetype, TransformationOptions options);
/**
* Fetch all the transformers that are capable of transforming the content in the
* given source mimetype to the given target mimetype with the provided transformation
* options.
* <p/>
* The transformation options provide a finer-grained way of discovering the correct transformer,
* since the values and type of the options provided are considered by the transformer when
* deciding whether it can satisfy the transformation request.
* <p/>
* The list will contain all currently active, applicable transformers sorted in repository preference order.
* The contents of this list may change depending on such factors as the availability of particular transformers
* as well as their current behaviour. For these reasons, this list should not be cached.
*
* @param sourceMimetype the source mimetype
* @param sourceSize the source size (bytes). Ignored if negative.
* @param targetMimetype the target mimetype
* @param options the transformation options
* @return ContentTransformers a List of the transformers that can be used, or the empty list if none were available
*
* @deprecated use {@link #getTransformers(String, String, long, String, TransformationOptions)}.
* @since 3.5
* @see ContentAccessor#getMimetype()
*/
@Auditable(parameters = {"sourceMimetype", "sourceSize", "targetMimetype", "options"})
public List<ContentTransformer> getActiveTransformers(String sourceMimetype, long sourceSize, String targetMimetype, TransformationOptions options);
/**
* @deprecated use overloaded method with sourceSize parameter.
* @deprecated use {@link #getTransformers(String, String, long, String, TransformationOptions)}.
*/
public List<ContentTransformer> getActiveTransformers(String sourceMimetype, String targetMimetype, TransformationOptions options);
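A brief migration sketch for the deprecations above (variable names are illustrative; null is passed for the TODO'd sourceUrl parameter):

    // Before (deprecated):
    // List<ContentTransformer> transformers = contentService.getActiveTransformers(
    //         sourceMimetype, sourceSize, targetMimetype, options);

    // After:
    List<ContentTransformer> transformers = contentService.getTransformers(
            null, sourceMimetype, sourceSize, targetMimetype, options);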

View File

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2011 Alfresco Software Limited.
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
@@ -56,11 +56,20 @@ public class TransformationOptionLimits
pages = new TransformationOptionPair();
}
private TransformationOptionLimits(TransformationOptionLimits a, TransformationOptionLimits b)
private TransformationOptionLimits(TransformationOptionLimits a, TransformationOptionLimits b, boolean lower)
{
time = a.time.combine(b.time);
kbytes = a.kbytes.combine(b.kbytes);
pages = a.pages.combine(b.pages);
if (lower)
{
time = a.time.combine(b.time);
kbytes = a.kbytes.combine(b.kbytes);
pages = a.pages.combine(b.pages);
}
else
{
time = a.time.combineUpper(b.time);
kbytes = a.kbytes.combineUpper(b.kbytes);
pages = a.pages.combineUpper(b.pages);
}
}
// --------------- Time ---------------
@@ -179,7 +188,22 @@ public class TransformationOptionLimits
*/
public TransformationOptionLimits combine(final TransformationOptionLimits that)
{
return new TransformationOptionLimits(this, that)
return combine(that, true);
}
/**
* Returns a TransformationOptionLimits that has getter methods that combine the
* values from the getter methods of this and the supplied TransformationOptionLimits
* so that they return the lowest common denominator of the limits.
*/
public TransformationOptionLimits combineUpper(final TransformationOptionLimits that)
{
return combine(that, false);
}
private TransformationOptionLimits combine(final TransformationOptionLimits that, boolean lower)
{
return new TransformationOptionLimits(this, that, lower)
{
@Override
public void setTimeoutMs(long timeoutMs)
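The effect of the two combinations, using the values exercised by the tests in the next file (a hedged reading of the code above):

    TransformationOptionLimits a = new TransformationOptionLimits();
    a.setReadLimitTimeMs(123);          // 'limit' member of the time pair

    TransformationOptionLimits b = new TransformationOptionLimits();
    b.setTimeoutMs(12);                 // 'max' member of the time pair

    // combine(): the most restrictive value wins and the other member
    // of the pair is reported as unset (-1)
    a.combine(b).getTimeoutMs();        // 12
    a.combine(b).getReadLimitTimeMs();  // -1 (the max is lower)

    // combineUpper(): lowest common denominator - a member survives only
    // when set on both sides, in which case the higher value is returned
    a.combineUpper(b).getTimeoutMs();        // -1 (only one side set a max)
    a.combineUpper(b).getReadLimitTimeMs();  // -1 (only one side set a limit)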

View File

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2011 Alfresco Software Limited.
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
@@ -229,20 +229,85 @@ public class TransformationOptionLimitsTest
@Test
public void testCombine() throws Exception
{
limits.setReadLimitTimeMs(123); // limit >
limits.setReadLimitKBytes(45); // limit <
limits.setMaxPages(789); // max =
limits.setReadLimitTimeMs(123);
limits.setReadLimitKBytes(45);
limits.setPageLimit(789);
TransformationOptionLimits second = new TransformationOptionLimits();
second.setTimeoutMs(12); // max <
second.setMaxSourceSizeKBytes(456); // max >
second.setMaxPages(789); // max =
second.setTimeoutMs(12);
second.setMaxSourceSizeKBytes(456);
second.setMaxPages(789);
TransformationOptionLimits combined = limits.combine(second);
assertEquals("Expected the lower value", 12, combined.getTimeoutMs()); // max <
assertEquals("Expected the lower value", 45, combined.getReadLimitKBytes()); // limit <
assertEquals("Expected the lower value", 789, combined.getMaxPages()); // max =
assertEquals("Expected -1 as max is set", -1, combined.getReadLimitTimeMs()); // max <
assertEquals("Expected -1 as limit is set", -1, combined.getMaxSourceSizeKBytes()); // limit <
assertEquals("Expected -1 as limit is the same", -1, combined.getPageLimit()); // max =
}
@Test
public void testCombineLimits() throws Exception
{
limits.setReadLimitTimeMs(123);
limits.setReadLimitKBytes(45);
limits.setPageLimit(789);
TransformationOptionLimits second = new TransformationOptionLimits();
second.setReadLimitTimeMs(12);
second.setReadLimitKBytes(-1);
second.setPageLimit(789);
TransformationOptionLimits combined = limits.combine(second);
assertEquals("Expected the lower value", 12, combined.getReadLimitTimeMs());
assertEquals("Expected the lower value", 45, combined.getReadLimitKBytes());
assertEquals("Expected the lower value", 789, combined.getPageLimit());
}
@Test
public void testCombineUpper() throws Exception
{
limits.setReadLimitTimeMs(123);
limits.setReadLimitKBytes(45);
limits.setPageLimit(789);
TransformationOptionLimits second = new TransformationOptionLimits();
second.setTimeoutMs(12);
second.setMaxSourceSizeKBytes(456);
second.setMaxPages(789);
TransformationOptionLimits combined = limits.combineUpper(second);
assertEquals("Expected -1 as only one max value was set", -1, combined.getTimeoutMs());
assertEquals("Expected -1 as only one max value was set", -1, combined.getMaxSourceSizeKBytes());
assertEquals("Expected -1 as only one max value was set", -1, combined.getMaxPages());
assertEquals("Expected -1 as only one limit value was set", -1, combined.getReadLimitTimeMs());
assertEquals("Expected -1 as only one limit value was set", -1, combined.getReadLimitKBytes());
assertEquals("Expected -1 as only one limit value was set", -1, combined.getPageLimit());
}
@Test
public void testCombineUpperLimits() throws Exception
{
limits.setReadLimitTimeMs(123);
limits.setReadLimitKBytes(45);
limits.setPageLimit(789);
TransformationOptionLimits second = new TransformationOptionLimits();
second.setReadLimitTimeMs(12);
second.setReadLimitKBytes(-1);
second.setPageLimit(789);
TransformationOptionLimits combined = limits.combineUpper(second);
assertEquals("Expected the higher value", 123, combined.getReadLimitTimeMs());
assertEquals("Expected -1 as only one limit value was set", -1, combined.getReadLimitKBytes());
assertEquals("Expected the higher value", 789, combined.getPageLimit());
}
@Test

View File

@@ -1,5 +1,5 @@
/*
* Copyright (C) 2005-2011 Alfresco Software Limited.
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
@@ -128,6 +128,19 @@ public class TransformationOptionPair
}
return Math.min(value1, value2);
}
/**
* Returns the higher (common denominator) of the two values supplied.
* If either value is less than 0, -1 is returned.
*/
private long maxSet(long value1, long value2)
{
if (value1 < 0 || value2 < 0)
{
return -1;
}
return Math.max(value1, value2);
}
public Map<String, Object> toMap(Map<String, Object> optionsMap, String optMaxKey, String optLimitKey)
{
@@ -161,24 +174,81 @@ public class TransformationOptionPair
* Returns a TransformationOptionPair that has getter methods that combine the
* values from the getter methods of this and the supplied TransformationOptionPair.
*/
public TransformationOptionPair combine(final TransformationOptionPair that)
public TransformationOptionPair combine(TransformationOptionPair that)
{
return combine(that, true);
}
/**
* Returns a TransformationOptionPair that has getter methods that combine the
* values from the getter methods of this and the supplied TransformationOptionPair
* so that they return the lowest common denominator of the two limits.
*/
public TransformationOptionPair combineUpper(final TransformationOptionPair that)
{
return combine(that, false);
}
private TransformationOptionPair combine(final TransformationOptionPair that, final boolean lower)
{
return new TransformationOptionPair()
{
/**
* Combines max values of this TransformationOptionPair and the supplied
* one to return the max to be used in a transformation. The limit
* one to return the max to be used in a transformation. When 'lower' the max
* value is discarded (-1 is returned) if the combined limit value is lower.
* When 'not lower' (lowest common denominator) the max is only returned if the
* limit value is -1.
*/
@Override
public long getMax()
{
long max = minSet(TransformationOptionPair.this.getMax(), that.getMax());
long limit = minSet(TransformationOptionPair.this.getLimit(), that.getLimit());
long max = getMaxValue();
long limit = getLimitValue();
return (max >= 0 && (limit < 0 || limit >= max))
? max
: -1;
return lower
? (max >= 0 && (limit < 0 || limit >= max))
? max
: -1
: (limit < 0)
? max
: -1;
}
/**
* Combines limit values of this TransformationOptionPair and the supplied
* one to return the limit to be used in a transformation. When 'lower' the limit
* value is discarded (-1 is returned) if the combined max value is lower.
* When 'not lower' (lowest common denominator) the limit is only returned if the
* max value is -1.
*/
@Override
public long getLimit()
{
long max = getMaxValue();
long limit = getLimitValue();
return lower
? (limit >= 0 && (max < 0 || max > limit))
? limit
: -1
: (max < 0)
? limit
: -1;
}
private long getLimitValue()
{
return lower
? minSet(TransformationOptionPair.this.getLimit(), that.getLimit())
: maxSet(TransformationOptionPair.this.getLimit(), that.getLimit());
}
private long getMaxValue()
{
return lower
? minSet(TransformationOptionPair.this.getMax(), that.getMax())
: maxSet(TransformationOptionPair.this.getMax(), that.getMax());
}
@Override
@@ -186,22 +256,6 @@ public class TransformationOptionPair
{
throw new UnsupportedOperationException();
}
/**
* Combines limit values of this TransformationOptionPair and the supplied
* one to return the limit to be used in a transformation. The limit
* value is discarded (-1 is returned) if the combined max value is lower.
*/
@Override
public long getLimit()
{
long max = minSet(TransformationOptionPair.this.getMax(), that.getMax());
long limit = minSet(TransformationOptionPair.this.getLimit(), that.getLimit());
return (limit >= 0 && (max < 0 || max >= limit))
? limit
: -1;
}
@Override
public void setLimit(long limit, String exceptionMessage)

View File

@@ -1,441 +1,443 @@
/*
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.util;
import java.io.Serializable;
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Collection;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Locale;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;
/**
* A map that protects keys and values from accidental modification.
* <p/>
* Use this map when keys or values need to be protected against client modification.
* For example, when a component pulls a map from a common resource it can wrap
* the map with this class to prevent any accidental modification of the shared
* resource.
* <p/>
* Upon first write to this map, the underlying map will be copied (selectively cloned),
* the original map handle will be discarded and the copied map will be used. Note that
* the map copy process will also occur if any mutable value is in danger of being
* exposed to client modification. Therefore, methods that iterate and retrieve values
* will also trigger the copy if any values are mutable.
*
* @param <K> the map key type (must extend {@link Serializable})
* @param <V> the map value type (must extend {@link Serializable})
*
* @author Derek Hulley
* @since 3.4.9
* @since 4.0.1
*/
public class ValueProtectingMap<K extends Serializable, V extends Serializable> implements Map<K, V>
public class ValueProtectingMap<K extends Serializable, V extends Serializable> implements Map<K, V>, Serializable
{
private static final long serialVersionUID = -9073485393875357605L;
/**
* Default immutable classes:
* <li>String</li>
* <li>BigDecimal</li>
* <li>BigInteger</li>
* <li>Byte</li>
* <li>Double</li>
* <li>Float</li>
* <li>Integer</li>
* <li>Long</li>
* <li>Short</li>
* <li>Boolean</li>
* <li>Date</li>
* <li>Locale</li>
*/
public static final Set<Class<?>> DEFAULT_IMMUTABLE_CLASSES;
static
{
DEFAULT_IMMUTABLE_CLASSES = new HashSet<Class<?>>(13);
DEFAULT_IMMUTABLE_CLASSES.add(String.class);
DEFAULT_IMMUTABLE_CLASSES.add(BigDecimal.class);
DEFAULT_IMMUTABLE_CLASSES.add(BigInteger.class);
DEFAULT_IMMUTABLE_CLASSES.add(Byte.class);
DEFAULT_IMMUTABLE_CLASSES.add(Double.class);
DEFAULT_IMMUTABLE_CLASSES.add(Float.class);
DEFAULT_IMMUTABLE_CLASSES.add(Integer.class);
DEFAULT_IMMUTABLE_CLASSES.add(Long.class);
DEFAULT_IMMUTABLE_CLASSES.add(Short.class);
DEFAULT_IMMUTABLE_CLASSES.add(Boolean.class);
DEFAULT_IMMUTABLE_CLASSES.add(Date.class);
DEFAULT_IMMUTABLE_CLASSES.add(Locale.class);
}
/**
* Protect a specific value if it is considered mutable
*
* @param <S> the type of the value, which must be {@link Serializable}
* @param value the value to protect if it is mutable (may be <tt>null</tt>)
* @param immutableClasses a set of classes that can be considered immutable
* over and above the {@link #DEFAULT_IMMUTABLE_CLASSES default set}
* @return a cloned instance (via serialization) or the instance itself, if immutable
*/
@SuppressWarnings("unchecked")
public static <S extends Serializable> S protectValue(S value, Set<Class<?>> immutableClasses)
{
if (!mustProtectValue(value, immutableClasses))
{
return value;
}
// We have to clone it
// No worries about the return type; it has to be the same as we put into the serializer
return (S) SerializationUtils.deserialize(SerializationUtils.serialize(value));
}
/**
* Utility method to check if values need to be cloned or not
*
* @param <S> the type of the value, which must be {@link Serializable}
* @param value the value to check
* @param immutableClasses a set of classes that can be considered immutable
* over and above the {@link #DEFAULT_IMMUTABLE_CLASSES default set}
* @return <tt>true</tt> if the value must <b>NOT</b> be given
* to the calling clients
*/
public static <S extends Serializable> boolean mustProtectValue(S value, Set<Class<?>> immutableClasses)
{
if (value == null)
{
return false;
}
Class<?> clazz = value.getClass();
return (
DEFAULT_IMMUTABLE_CLASSES.contains(clazz) == false &&
immutableClasses.contains(clazz) == false);
}
/**
* Utility method to clone a map, preserving immutable instances
*
* @param <K> the map key type, which must be {@link Serializable}
* @param <V> the map value type, which must be {@link Serializable}
* @param map the map to copy
* @param immutableClasses a set of classes that can be considered immutable
* over and above the {@link #DEFAULT_IMMUTABLE_CLASSES default set}
*/
public static <K extends Serializable, V extends Serializable> Map<K, V> cloneMap(Map<K, V> map, Set<Class<?>> immutableClasses)
{
Map<K, V> copy = new HashMap<K, V>((int)(map.size() * 1.3));
for (Map.Entry<K, V> element : map.entrySet())
{
K key = element.getKey();
V value = element.getValue();
// Clone as necessary
key = ValueProtectingMap.protectValue(key, immutableClasses);
value = ValueProtectingMap.protectValue(value, immutableClasses);
copy.put(key, value);
}
return copy;
}
private ReentrantReadWriteLock.ReadLock readLock;
private ReentrantReadWriteLock.WriteLock writeLock;
private boolean cloned = false;
private Map<K, V> map;
private Set<Class<?>> immutableClasses;
/**
* Construct providing a protected map and using only the
* {@link #DEFAULT_IMMUTABLE_CLASSES default immutable classes}
*
* @param protectedMap the map to safeguard
*/
public ValueProtectingMap(Map<K, V> protectedMap)
{
this (protectedMap, null);
}
/**
* Construct providing a protected map, complementing the set of
* {@link #DEFAULT_IMMUTABLE_CLASSES default immutable classes}
*
* @param protectedMap the map to safeguard
* @param immutableClasses additional immutable classes
* over and above the {@link #DEFAULT_IMMUTABLE_CLASSES default set}
* (may be <tt>null</tt>)
*/
public ValueProtectingMap(Map<K, V> protectedMap, Set<Class<?>> immutableClasses)
{
// Unwrap any internal maps if given a value protecting map
if (protectedMap instanceof ValueProtectingMap)
{
ValueProtectingMap<K, V> mapTemp = (ValueProtectingMap<K, V>) protectedMap;
this.map = mapTemp.map;
}
else
{
this.map = protectedMap;
}
this.cloned = false;
if (immutableClasses == null)
{
this.immutableClasses = Collections.emptySet();
}
else
{
this.immutableClasses = new HashSet<Class<?>>(immutableClasses);
}
// Construct locks
ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
this.readLock = lock.readLock();
this.writeLock = lock.writeLock();
}
/**
* An unsafe method to use for anything except tests.
*
* @return the map that this instance is protecting
*/
/* protected */ Map<K, V> getProtectedMap()
{
return map;
}
/**
* Called by methods that need to force the map into a safe state.
* <p/>
* This method can be called without any locks being active.
*/
private void cloneMap()
{
readLock.lock();
try
{
// Check that it hasn't been copied already
if (cloned)
{
return;
}
}
finally
{
readLock.unlock();
}
/*
* Note: This space here is a window during which some code could have made
* a copy. Therefore we will do a cautious double-check.
*/
// Put in a write lock before cloning the map
writeLock.lock();
try
{
// Check that it hasn't been copied already
if (cloned)
{
return;
}
Map<K, V> copy = ValueProtectingMap.cloneMap(map, immutableClasses);
// Discard the original
this.map = copy;
this.cloned = true;
}
finally
{
writeLock.unlock();
}
}
/*
* READ-ONLY METHODS
*/
@Override
public int size()
{
readLock.lock();
try
{
return map.size();
}
finally
{
readLock.unlock();
}
}
@Override
public boolean isEmpty()
{
readLock.lock();
try
{
return map.isEmpty();
}
finally
{
readLock.unlock();
}
}
@Override
public boolean containsKey(Object key)
{
readLock.lock();
try
{
return map.containsKey(key);
}
finally
{
readLock.unlock();
}
}
@Override
public boolean containsValue(Object value)
{
readLock.lock();
try
{
return map.containsValue(value);
}
finally
{
readLock.unlock();
}
}
@Override
public int hashCode()
{
readLock.lock();
try
{
return map.hashCode();
}
finally
{
readLock.unlock();
}
}
@Override
public boolean equals(Object obj)
{
readLock.lock();
try
{
return map.equals(obj);
}
finally
{
readLock.unlock();
}
}
@Override
public String toString()
{
readLock.lock();
try
{
return map.toString();
}
finally
{
readLock.unlock();
}
}
/*
* METHODS THAT *MIGHT* REQUIRE COPY
*/
@Override
public V get(Object key)
{
readLock.lock();
try
{
V value = map.get(key);
return ValueProtectingMap.protectValue(value, immutableClasses);
}
finally
{
readLock.unlock();
}
}
/*
* METHODS THAT REQUIRE COPY
*/
@Override
public V put(K key, V value)
{
cloneMap();
return map.put(key, value);
}
@Override
public V remove(Object key)
{
cloneMap();
return map.remove(key);
}
@Override
public void putAll(Map<? extends K, ? extends V> m)
{
cloneMap();
map.putAll(m);
}
@Override
public void clear()
{
cloneMap();
map.clear();
}
@Override
public Set<K> keySet()
{
cloneMap();
return map.keySet();
}
@Override
public Collection<V> values()
{
cloneMap();
return map.values();
}
@Override
public Set<Entry<K, V>> entrySet()
{
cloneMap();
return map.entrySet();
}
}
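A short copy-on-write usage sketch of the class above (map contents are illustrative):

    Map<String, Serializable> shared = new HashMap<String, Serializable>();
    shared.put("NAME", "shared");                  // immutable value
    shared.put("LIST", new ArrayList<String>());   // mutable value

    ValueProtectingMap<String, Serializable> safe =
            new ValueProtectingMap<String, Serializable>(shared);

    safe.size();               // read-only: backed by 'shared', no copy taken
    safe.get("NAME");          // String is immutable: returned as-is
    safe.get("LIST");          // mutable: a serialized clone is handed out
    safe.put("AGE", Integer.valueOf(42)); // first write: the map is cloned first

    // The original map is untouched by the write above
    assert !shared.containsKey("AGE");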
/*
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.util;
import java.io.Serializable;
import java.math.BigDecimal;
import java.math.BigInteger;
import java.util.Collection;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Locale;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.locks.ReentrantReadWriteLock;
/**
* A map that protects keys and values from accidental modification.
* <p/>
* Use this map when keys or values need to be protected against client modification.
* For example, when a component pulls a map from a common resource it can wrap
* the map with this class to prevent any accidental modification of the shared
* resource.
* <p/>
* Upon first write to this map , the underlying map will be copied (selectively cloned),
* the original map handle will be discarded and the copied map will be used. Note that
* the map copy process will also occur if any mutable value is in danger of being
* exposed to client modification. Therefore, methods that iterate and retrieve values
* will also trigger the copy if any values are mutable.
*
* @param <K> the map key type (must extend {@link Serializable})
* @param <V> the map value type (must extend {@link Serializable})
*
* @author Derek Hulley
* @since 3.4.9
* @since 4.0.1
*/
public class ValueProtectingMap<K extends Serializable, V extends Serializable> implements Map<K, V>, Serializable
{
private static final long serialVersionUID = -9073485393875357605L;
/**
* Default immutable classes:
* <li>String</li>
* <li>BigDecimal</li>
* <li>BigInteger</li>
* <li>Byte</li>
* <li>Double</li>
* <li>Float</li>
* <li>Integer</li>
* <li>Long</li>
* <li>Short</li>
* <li>Boolean</li>
* <li>Date</li>
* <li>Locale</li>
*/
public static final Set<Class<?>> DEFAULT_IMMUTABLE_CLASSES;
static
{
DEFAULT_IMMUTABLE_CLASSES = new HashSet<Class<?>>(13);
DEFAULT_IMMUTABLE_CLASSES.add(String.class);
DEFAULT_IMMUTABLE_CLASSES.add(BigDecimal.class);
DEFAULT_IMMUTABLE_CLASSES.add(BigInteger.class);
DEFAULT_IMMUTABLE_CLASSES.add(Byte.class);
DEFAULT_IMMUTABLE_CLASSES.add(Double.class);
DEFAULT_IMMUTABLE_CLASSES.add(Float.class);
DEFAULT_IMMUTABLE_CLASSES.add(Integer.class);
DEFAULT_IMMUTABLE_CLASSES.add(Long.class);
DEFAULT_IMMUTABLE_CLASSES.add(Short.class);
DEFAULT_IMMUTABLE_CLASSES.add(Boolean.class);
DEFAULT_IMMUTABLE_CLASSES.add(Date.class);
DEFAULT_IMMUTABLE_CLASSES.add(Locale.class);
}
/**
* Protect a specific value if it is considered mutable
*
* @param <S> the type of the value, which must be {@link Serializable}
* @param value the value to protect if it is mutable (may be <tt>null</tt>)
* @param immutableClasses a set of classes that can be considered immutable
* over and above the {@link #DEFAULT_IMMUTABLE_CLASSES default set}
* @return a cloned instance (via serialization) or the instance itself, if immutable
*/
@SuppressWarnings("unchecked")
public static <S extends Serializable> S protectValue(S value, Set<Class<?>> immutableClasses)
{
if (!mustProtectValue(value, immutableClasses))
{
return value;
}
// We have to clone it
// No worries about the return type; it has to be the same as we put into the serializer
return (S) SerializationUtils.deserialize(SerializationUtils.serialize(value));
}
/**
* Utility method to check if values need to be cloned or not
*
* @param <S> the type of the value, which must be {@link Serializable}
* @param value the value to check
* @param immutableClasses a set of classes that can be considered immutable
* over and above the {@link #DEFAULT_IMMUTABLE_CLASSES default set}
* @return <tt>true</tt> if the value must <b>NOT</b> be given
* to the calling clients
*/
public static <S extends Serializable> boolean mustProtectValue(S value, Set<Class<?>> immutableClasses)
{
if (value == null)
{
return false;
}
Class<?> clazz = value.getClass();
return (
DEFAULT_IMMUTABLE_CLASSES.contains(clazz) == false &&
immutableClasses.contains(clazz) == false);
}
/**
* Utility method to clone a map, preserving immutable instances
*
* @param <K> the map key type, which must be {@link Serializable}
* @param <V> the map value type, which must be {@link Serializable}
* @param map the map to copy
* @param immutableClasses a set of classes that can be considered immutable
* over and above the {@link #DEFAULT_IMMUTABLE_CLASSES default set}
*/
public static <K extends Serializable, V extends Serializable> Map<K, V> cloneMap(Map<K, V> map, Set<Class<?>> immutableClasses)
{
Map<K, V> copy = new HashMap<K, V>((int)(map.size() * 1.3));
for (Map.Entry<K, V> element : map.entrySet())
{
K key = element.getKey();
V value = element.getValue();
// Clone as necessary
key = ValueProtectingMap.protectValue(key, immutableClasses);
value = ValueProtectingMap.protectValue(value, immutableClasses);
copy.put(key, value);
}
return copy;
}
private ReentrantReadWriteLock.ReadLock readLock;
private ReentrantReadWriteLock.WriteLock writeLock;
private boolean cloned = false;
private Map<K, V> map;
private Set<Class<?>> immutableClasses;
/**
* Construct providing a protected map and using only the
* {@link #DEFAULT_IMMUTABLE_CLASSES default immutable classes}
*
* @param protectedMap the map to safeguard
*/
public ValueProtectingMap(Map<K, V> protectedMap)
{
this (protectedMap, null);
}
/**
* Construct providing a protected map, complementing the set of
* {@link #DEFAULT_IMMUTABLE_CLASSES default immutable classes}
*
* @param protectedMap the map to safeguard
* @param immutableClasses additional immutable classes
* over and above the {@link #DEFAULT_IMMUTABLE_CLASSES default set}
* (may be <tt>null</tt>
*/
public ValueProtectingMap(Map<K, V> protectedMap, Set<Class<?>> immutableClasses)
{
// Unwrap any internal maps if given a value protecting map
if (protectedMap instanceof ValueProtectingMap)
{
ValueProtectingMap<K, V> mapTemp = (ValueProtectingMap<K, V>) protectedMap;
this.map = mapTemp.map;
}
else
{
this.map = protectedMap;
}
this.cloned = false;
if (immutableClasses == null)
{
this.immutableClasses = Collections.emptySet();
}
else
{
this.immutableClasses = new HashSet<Class<?>>(immutableClasses);
}
// Construct locks
ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
this.readLock = lock.readLock();
this.writeLock = lock.writeLock();
}
/**
* An unsafe method to use for anything except tests.
*
* @return the map that this instance is protecting
*/
/* protected */ Map<K, V> getProtectedMap()
{
return map;
}
/**
* Called by methods that need to force the map into a safe state.
* <p/>
* This method can be called without any locks being active.
*/
private void cloneMap()
{
readLock.lock();
try
{
// Check that it hasn't been copied already
if (cloned)
{
return;
}
}
finally
{
readLock.unlock();
}
/*
* Note: This space here is a window during which some code could have made
* a copy. Therefore we will do a cautious double-check.
*/
// Put in a write lock before cloning the map
writeLock.lock();
try
{
// Check that it hasn't been copied already
if (cloned)
{
return;
}
Map<K, V> copy = ValueProtectingMap.cloneMap(map, immutableClasses);
// Discard the original
this.map = copy;
this.cloned = true;
}
finally
{
writeLock.unlock();
}
}
/*
* READ-ONLY METHODS
*/
@Override
public int size()
{
readLock.lock();
try
{
return map.size();
}
finally
{
readLock.unlock();
}
}
@Override
public boolean isEmpty()
{
readLock.lock();
try
{
return map.isEmpty();
}
finally
{
readLock.unlock();
}
}
@Override
public boolean containsKey(Object key)
{
readLock.lock();
try
{
return map.containsKey(key);
}
finally
{
readLock.unlock();
}
}
@Override
public boolean containsValue(Object value)
{
readLock.lock();
try
{
return map.containsValue(value);
}
finally
{
readLock.unlock();
}
}
@Override
public int hashCode()
{
readLock.lock();
try
{
return map.hashCode();
}
finally
{
readLock.unlock();
}
}
@Override
public boolean equals(Object obj)
{
readLock.lock();
try
{
return map.equals(obj);
}
finally
{
readLock.unlock();
}
}
@Override
public String toString()
{
readLock.lock();
try
{
return map.toString();
}
finally
{
readLock.unlock();
}
}
/*
* METHODS THAT *MIGHT* REQUIRE COPY
*/
@Override
public V get(Object key)
{
readLock.lock();
try
{
V value = map.get(key);
return ValueProtectingMap.protectValue(value, immutableClasses);
}
finally
{
readLock.unlock();
}
}
/*
* METHODS THAT REQUIRE COPY
*/
@Override
public V put(K key, V value)
{
cloneMap();
return map.put(key, value);
}
@Override
public V remove(Object key)
{
cloneMap();
return map.remove(key);
}
@Override
public void putAll(Map<? extends K, ? extends V> m)
{
cloneMap();
map.putAll(m);
}
@Override
public void clear()
{
cloneMap();
map.clear();
}
@Override
public Set<K> keySet()
{
cloneMap();
return map.keySet();
}
@Override
public Collection<V> values()
{
cloneMap();
return map.values();
}
@Override
public Set<Entry<K, V>> entrySet()
{
cloneMap();
return map.entrySet();
}
}
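
For orientation, a minimal usage sketch of the copy-on-write contract implemented above. The keys and values are illustrative only, and it assumes that java.util.Date is not registered as an immutable class, so get() returns a defensive copy:

import java.io.Serializable;
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

public class ValueProtectingMapExample
{
    public static void main(String[] args)
    {
        Map<String, Serializable> source = new HashMap<String, Serializable>();
        source.put("DATE", new Date(0L));
        ValueProtectingMap<String, Serializable> safe =
                new ValueProtectingMap<String, Serializable>(source);
        // Mutable values are cloned on the way out; changing the copy
        // leaves the stored value untouched
        Date date = (Date) safe.get("DATE");
        date.setTime(System.currentTimeMillis());
        // The first mutating call clones the backing map, so the original
        // 'source' map is never modified through the wrapper
        safe.put("MORE", "STUFF");
        System.out.println(source.containsKey("MORE"));    // prints: false
    }
}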

@@ -1,242 +1,264 @@
/*
* Copyright (C) 2005-2012 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.util;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import junit.framework.TestCase;
/**
* Tests {@link ValueProtectingMap}
*
* @author Derek Hulley
* @since 3.4.9
* @since 4.0.1
*/
public class ValueProtectingMapTest extends TestCase
{
private static Set<Class<?>> moreImmutableClasses;
static
{
moreImmutableClasses = new HashSet<Class<?>>(13);
moreImmutableClasses.add(TestImmutable.class);
}
/**
* A class that is immutable
*/
@SuppressWarnings("serial")
private static class TestImmutable implements Serializable
{
}
/**
* A class that is mutable
*/
@SuppressWarnings("serial")
private static class TestMutable extends TestImmutable
{
public int i = 0;
public void increment()
{
i++;
}
@Override
public boolean equals(Object obj)
{
if (this == obj) return true;
if (obj == null) return false;
if (getClass() != obj.getClass()) return false;
TestMutable other = (TestMutable) obj;
if (i != other.i) return false;
return true;
}
@Override
public int hashCode()
{
// Keep the equals/hashCode contract: equal instances share a hash
return i;
}
}
private List<String> valueList;
private Map<String, String> valueMap;
private Date valueDate;
private TestImmutable valueImmutable;
private TestMutable valueMutable;
private ValueProtectingMap<String, Serializable> map;
private Map<String, Serializable> holyMap;
@Override
protected void setUp() throws Exception
{
valueList = new ArrayList<String>(4);
valueList.add("ONE");
valueList.add("TWO");
valueList.add("THREE");
valueList.add("FOUR");
valueList = Collections.unmodifiableList(valueList);
valueMap = new HashMap<String, String>(5);
valueMap.put("ONE", "ONE");
valueMap.put("TWO", "TWO");
valueMap.put("THREE", "THREE");
valueMap.put("FOUR", "FOUR");
valueMap = Collections.unmodifiableMap(valueMap);
valueDate = new Date();
valueImmutable = new TestImmutable();
valueMutable = new TestMutable();
holyMap = new HashMap<String, Serializable>();
holyMap.put("DATE", valueDate);
holyMap.put("LIST", (Serializable) valueList);
holyMap.put("MAP", (Serializable) valueMap);
holyMap.put("IMMUTABLE", valueImmutable);
holyMap.put("MUTABLE", valueMutable);
// Now wrap our 'holy' map so that it cannot be modified
holyMap = Collections.unmodifiableMap(holyMap);
map = new ValueProtectingMap<String, Serializable>(holyMap, moreImmutableClasses);
}
/**
* Make sure that NOTHING has changed in our 'holy' map
*/
private void checkMaps(boolean expectMapClone)
{
assertEquals("Holy map size is wrong: ", 5, holyMap.size());
// Note that the immutability of the maps and lists means that we don't need
// to check every value within the lists and maps
if (expectMapClone)
{
// Make sure that the holy map has been released
assertTrue("Expect holy map to have been released: ", map.getProtectedMap() != holyMap);
// Do some updates to the backing map and ensure that they stick
Map<String, Serializable> mapClone = map.getProtectedMap();
mapClone.put("ONE", "ONE");
assertEquals("Modified the backing directly but value is not visible: ", map.get("ONE"), "ONE");
map.put("TWO", "TWO");
assertTrue("Backing map was changed again!", mapClone == map.getProtectedMap());
mapClone.containsKey("TWO");
}
else
{
// Make sure that the holy map is still acting as the backing map
assertTrue("Expect holy map to still be in use: ", map.getProtectedMap() == holyMap);
}
}
public void testSetup()
{
checkMaps(false);
}
/**
* No matter how many times instances are wrapped in further instances, the backing map
* must remain the same.
*/
public void testMapWrapping()
{
ValueProtectingMap<String, Serializable> mapTwo = new ValueProtectingMap<String, Serializable>(map);
assertTrue("Backing map must be shared: ", mapTwo.getProtectedMap() == map.getProtectedMap());
ValueProtectingMap<String, Serializable> mapThree = new ValueProtectingMap<String, Serializable>(map);
assertTrue("Backing map must be shared: ", mapThree.getProtectedMap() == map.getProtectedMap());
}
public void testMapClear()
{
map.clear();
assertEquals("Map should be empty: ", 0, map.size());
checkMaps(true);
}
public void testMapContainsKey()
{
assertTrue(map.containsKey("LIST"));
assertFalse(map.containsKey("LISTXXX"));
checkMaps(false);
}
public void testMapContainsValue()
{
assertTrue(map.containsValue(valueMutable));
assertFalse(map.containsValue("Dassie"));
checkMaps(false);
}
public void testMapEntrySet()
{
map.entrySet();
checkMaps(true);
}
/**
* Ensures that single, immutable values are given out as-is,
* without affecting the backing storage
*/
public void testMapGetImmutable()
{
assertTrue("Immutable value instance incorrect", map.get("IMMUTABLE") == valueImmutable);
checkMaps(false);
}
/**
* Ensures that single, mutable values are cloned before being given out,
* without affecting the backing storage
*/
public void testMapGetMutable()
{
TestMutable mutable = (TestMutable) map.get("MUTABLE");
assertFalse("Mutable value instance incorrect", mutable == valueMutable);
checkMaps(false);
// Modify the instance
mutable.increment();
assertEquals("Backing mutable should not have changed: ", 0, valueMutable.i);
}
public void testMapIsEmpty()
{
assertFalse(map.isEmpty());
checkMaps(false);
}
public void testMapKeySet()
{
map.keySet();
checkMaps(true);
}
public void testMapPut()
{
map.put("ANOTHER", "VALUE");
checkMaps(true);
}
public void testMapPutAll()
{
map.putAll(holyMap);
checkMaps(true);
}
@SuppressWarnings("unchecked")
public void testSerializability() throws Exception
{
map.put("MORE", "STUFF");
checkMaps(true);
ByteArrayOutputStream baos = new ByteArrayOutputStream(1024);
ObjectOutputStream os = new ObjectOutputStream(baos);
os.writeObject(map);
os.close();
// Read it back in
ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
ObjectInputStream ois = new ObjectInputStream(bais);
ValueProtectingMap<String, Serializable> reloadedMap = (ValueProtectingMap<String, Serializable>) ois.readObject();
ois.close();
// Make sure it has the value
assertEquals("Reloaded object not same.", "STUFF", reloadedMap.get("MORE"));
}
}

@@ -32,6 +32,40 @@
</property>
</bean>
<!-- A test instance of the user registry synchronizer -->
<bean id="testUserRegistrySynchronizerPreventDeletions" class="org.alfresco.repo.security.sync.ChainingUserRegistrySynchronizer">
<property name="authorityService">
<ref bean="authorityService" />
</property>
<property name="personService">
<ref bean="personService" />
</property>
<property name="attributeService">
<ref bean="attributeService" />
</property>
<property name="applicationContextManager">
<ref bean="testApplicationContextManager" />
</property>
<property name="transactionService">
<ref bean="transactionService" />
</property>
<property name="ruleService">
<ref bean="ruleService" />
</property>
<property name="jobLockService">
<ref bean="jobLockService" />
</property>
<property name="sourceBeanName">
<value>userRegistry</value>
</property>
<property name="loggingInterval">
<value>100</value>
</property>
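<!-- allowDeletions=false: authorities missing from the user registry are kept rather
than deleted, which is the "prevent deletions" behaviour this test bean exercises -->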
<property name="allowDeletions">
<value>false</value>
</property>
</bean>
<!-- A fake application context manager into which we can inject test-specific beans -->
<bean id="testApplicationContextManager" class="org.alfresco.repo.security.sync.ChainingUserRegistrySynchronizerTest$MockApplicationContextManager" />

@@ -8,7 +8,7 @@
<!-- -->
<bean id="fileContentStore" class="org.alfresco.repo.tenant.TenantRoutingFileContentStore" parent="baseTenantRoutingContentStore">
<property name="defaultRootDir" value="${dir.contentstore}" />
<property name="rootLocation" value="${dir.contentstore}" />
</bean>
</beans>