Dave Ward cfe1c63566 Merged V4.1-BUG-FIX to HEAD
44674: Fix for ALF-17189 - The "Company Home" item in the top navigator menu and in the toolbar panel is invisible if the user logs in as guest first and then directly accesses the login page via URL.
   44701: Merged BRANCHES/DEV/V3.4-BUG-FIX to BRANCHES/DEV/V4.1-BUG-FIX
      44700: Fix for ALF-10369 - support for OPTIONS requests for WebScript framework and Share proxy
   44709: ALF-17164 Fix version.properties, which was wrong in the SDK zip
   44710: ALF-14570 ("Check out" outbound rule works incorrectly)
   44722: MNT-246: Need the ability to configure a proxy with Enterprise Sync.
      - RemoteConnectorServiceImpl will now use an http/https proxy if the standard system properties for Java network proxy configuration are found. See http://docs.oracle.com/javase/6/docs/technotes/guides/net/proxies.html (Section 2.1 and 2.2)
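      - For illustration only (not part of this changeset): those standard properties are typically supplied as JVM arguments, e.g.
          -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080
          -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080
        (proxy.example.com is a placeholder host)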
   44730: Merged V4.1 to V4.1-BUG-FIX
      44461: Merged PATCHES/V4.1.1 to V4.1
         44060: ALF-16962 / MNT-221 Links from a deleted user cause error in the "Links" page 
         44129: ALF-17134 / MNT-223: Unbound SOLR result set (from Explorer trashcan query) consumes heap
         - Now we emulate the behaviour of ACLEntryAfterInvocationProvider in SolrQueryHTTPClient, thus limiting otherwise unconstrained SOLR queries to return a finite number of results
          - New solr subsystem parameter solr.query.maximumResultsFromUnlimitedQuery introduced (see the example below)
         - Its default value is ${system.acl.maxPermissionChecks}, thus providing backward compatibility with old behaviour (1000 results max)
         - When there are no other limits in the search parameters, this value will be used to limit the number of results
         - SolrJSONResultSet.getResultSetMetata().getLimitedBy() will return an appropriate LimitBy value, according to how the query was limited
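          - For illustration only, the new parameter could be tuned in alfresco-global.properties, e.g.
               solr.query.maximumResultsFromUnlimitedQuery=1000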
         44130: ALF-17134 / MNT-223: Unbound SOLR result set (from Explorer trashcan query) consumes heap
         - Improved calculation of SolrJSONResultSet.getResultSetMetata().getLimitedBy() to better emulate ACLEntryAfterInvocationProvider
         44141: ALF-17134 / MNT-223: Unbound SOLR result set (from Explorer trashcan query) consumes heap
         - Correction to handling of limited queries (Share search works again!)
         44291: ALF-17094 / MNT-222 InvalidNodeRefException on user deletion in Share UI 
      44462: Merged PATCHES/V4.0.2 to V4.1
         44221: ALF-17038 / MNT-226: Out-of-order versions for existing data during migration from 3.4.9 to 4.0.2.19
            - Have been able to remove the need for any Comparators in the normal case.
              As Dave said, he thought it was ordered already. It is with "assoc.assoc_index ASC, assoc.id ASC".
               Required a bit of refactoring of Version2ServiceImpl to do it, as the Comparators were referenced/used in a couple of other classes.
            - Modified all 43 Oracle sequences to include ORDER in the create statement.
               Probably it only really needed to be done on alf_child_assoc_seq to fix this issue, but it will stop similar issues in
              other clustered database setups. Did not change the upgrade scripts, as this will give us a clue that there will be
              existing data issues.
            - The name of a Comparator<Version> may be specified in the Alfresco global property:
              org.alfresco.repo.version.common.VersionLabelComparator and it will be used by BOTH Version2ServiceImpl and VersionServiceImpl.
              They in turn pass it on to Version2ServiceImpl instances when they create them.
            - A VersionLabelComparator already existed (still deprecated as we don't normally use it) and works:
              org.alfresco.repo.version.common.VersionLabelComparator.
             - Customers with out-of-sequence ids on Oracle RDBMS using a clustered database may 'correct on the fly' the order of their
               versions by setting the Alfresco global property described above (see the example below).
             - Tested both with and without a comparator in a development environment. Using breakpoints and Collections.shuffle(version)
               in an expression, it was possible to simulate out-of-order IDs.
            - New unit tests added to VersionHistoryImplTest and VersionServiceImplTest to test db ids out of order
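             - For illustration only, and assuming the global property key is version.store.versionComparatorClass, alfresco-global.properties could contain:
                  version.store.versionComparatorClass=org.alfresco.repo.version.common.VersionLabelComparator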
         44336: ALF-15935: Fixed SecureContext errors when ticket has expired. MNT-180
      44467: Fixed compilation failure
      44520: ALF-16590: Improved fix after testing by Mark Lugert
      44563: Merged DEV to V4.1 (with corrections)
         44547: ALF-17132: Possible XSS - arbitrary url parameters re-sent to the browser
            Escaping of keys and values of request attributes
      44610: Merged PATCHES/V4.0.2 to V4.1
         44435: ALF-17183: Merged DEV to V4.0.2 (4.0.2.22)
            44429: MNT-232: Upgrade from 3.4.9 to 4.0.2 - FAILED
            - Initialize rootRefs in the property definition to prevent NPE.
         44591: Fix to CIFS reported user free space when disk quotas are not enabled. 
         44595: ALF-17184 / MNT-243 Minimal fix for disk size and user quotas.   (Bring values into line with API.)
         44601: ALF-17184 / MNT-243 - Implementation of file size on Abstract Tenant Routing Content Store.
         44608: ALF-15935 / MNT-180: Moved closeFile() call to closeConnection() cleanup method, always call closeFile()
         Do not check if file is marked as closed during cleanup, only open files should still be in the file table.
      44652: ALF-17117: Created article or publication can't be viewed on WQS site
      - Fixes by Dmitry Vaserin
      - Removed unnecessary outer read locks from getRelatedAssets and getRelatedAsset to prevent deadlock
      - Correct markup error when node doesn't have tags
      44653: ALF-17117: Created article or publication can't be viewed on WQS site
      - Missed file from previous checkin
      44682: ALF-17118 WQS: Impossible to upload document to publications space
         - Only the first part, dealing with the transformation failure, has been committed.
   44731: Merged V4.1 to V4.1-BUG-FIX (RECORD ONLY)
      44441: Merge V4.1-BUG-FIX to V4.1
         44270: Merge V3.4-BUG-FIX to V4.1-BUG-FIX
            44266: BDE-111: harden generation of Windows installers
               - make sure build fails if installer generation fails
               - generate Windows unsigned installers in a place that is cleaned later, avoiding leftovers
      44598: Merged V4.1-BUG-FIX to V4.1
         44541: Fix for ALF-17151 SOLR - add support to disable permission checks
         44577: Final part for ALF-16558 SOLR tracking does not do incremental updates but one single chunk
         - fixed code so SolrSearchers are held for as little time as possible
      44607: Merged V4.1-BUG-FIX to V4.1
         44603: ALF-14201: upgrade activiti to 5.7-20121211
         44606: ALF-14201: upgrade activiti to 5.7-20121211 in Maven poms


git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@44732 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
2012-12-15 10:12:46 +00:00


/*
 * Copyright (C) 2005-2010 Alfresco Software Limited.
 *
 * This file is part of Alfresco
 *
 * Alfresco is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Alfresco is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
 */
package org.alfresco.repo.content;

import java.util.Date;

import org.alfresco.repo.content.ContentLimitProvider.NoLimitProvider;
import org.alfresco.service.cmr.repository.ContentIOException;
import org.alfresco.service.cmr.repository.ContentReader;
import org.alfresco.service.cmr.repository.ContentWriter;
import org.alfresco.util.Pair;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * Base class providing support for different types of content stores.
 * <p>
 * Since content URLs have to be consistent across all stores for
 * reasons of replication and backup, the most important functionality
 * provided is the generation of new content URLs and the checking of
 * existing URLs.
 * <p>
 * Implementations must override either of the <b>getWriter</b> methods;
 * {@link #getWriter(ContentContext)} or {@link #getWriterInternal(ContentReader, String)}.
 *
 * @see #getWriter(ContentReader, String)
 * @see #getWriterInternal(ContentReader, String)
 *
 * @author Derek Hulley
 */
public abstract class AbstractContentStore implements ContentStore
{
    private static Log logger = LogFactory.getLog(AbstractContentStore.class);

    /** Helper */
    private static final int PROTOCOL_DELIMETER_LENGTH = PROTOCOL_DELIMITER.length();
    /**
     * Checks that the content conforms to the format <b>protocol://identifier</b>
     * as specified in the contract of the {@link ContentStore} interface.
     *
     * @param contentUrl the content URL to check
     * @return Returns <tt>true</tt> if the content URL is valid
     *
     * @since 2.1
     */
    public static final boolean isValidContentUrl(String contentUrl)
    {
        if (contentUrl == null)
        {
            return false;
        }
        int index = contentUrl.indexOf(ContentStore.PROTOCOL_DELIMITER);
        if (index <= 0)
        {
            return false;
        }
        if (contentUrl.length() <= (index + PROTOCOL_DELIMETER_LENGTH))
        {
            return false;
        }
        return true;
    }

    protected ContentLimitProvider contentLimitProvider = new NoLimitProvider();

    /**
     * Splits the content URL into its component parts as separated by
     * {@link ContentStore#PROTOCOL_DELIMITER protocol delimiter}.
     *
     * @param contentUrl the content URL to split
     * @return Returns the protocol and identifier portions of the content URL,
     *         both of which will not be <tt>null</tt>
     * @throws UnsupportedContentUrlException if the content URL is invalid
     *
     * @since 2.1
     */
    protected Pair<String, String> getContentUrlParts(String contentUrl)
    {
        if (contentUrl == null)
        {
            throw new IllegalArgumentException("The contentUrl may not be null");
        }
        int index = contentUrl.indexOf(ContentStore.PROTOCOL_DELIMITER);
        if (index <= 0)
        {
            throw new UnsupportedContentUrlException(this, contentUrl);
        }
        String protocol = contentUrl.substring(0, index);
        String identifier = contentUrl.substring(
                index + PROTOCOL_DELIMETER_LENGTH,
                contentUrl.length());
        if (identifier.length() == 0)
        {
            throw new UnsupportedContentUrlException(this, contentUrl);
        }
        return new Pair<String, String>(protocol, identifier);
    }
    /**
     * Override this method to supply an efficient and direct check of the URL supplied.
     * The default implementation checks whether {@link ContentStore#getReader(String)}
     * throws the {@link UnsupportedContentUrlException} exception.
     *
     * @since 2.1
     */
    public boolean isContentUrlSupported(String contentUrl)
    {
        try
        {
            getReader(contentUrl);
            return true;
        }
        catch (UnsupportedContentUrlException e)
        {
            // It is not supported
            return false;
        }
    }
    /**
     * Override if the derived class supports the operation.
     *
     * @throws UnsupportedOperationException always
     *
     * @since 2.1
     */
    public boolean delete(String contentUrl)
    {
        throw new UnsupportedOperationException();
    }

    /**
     * @see #getUrls(Date, Date, ContentUrlHandler)
     */
    public final void getUrls(ContentUrlHandler handler) throws ContentIOException
    {
        getUrls(null, null, handler);
    }

    /**
     * Override to provide an implementation. If no implementation is supplied, then the store will not support
     * cleaning of orphaned content binaries.
     *
     * @throws UnsupportedOperationException always
     */
    public void getUrls(Date createdAfter, Date createdBefore, ContentUrlHandler handler) throws ContentIOException
    {
        throw new UnsupportedOperationException();
    }

    /**
     * Implement to supply a store-specific writer for the given existing content
     * and optional target content URL.
     *
     * @param existingContentReader a reader onto any content to initialize the new writer with
     * @param newContentUrl an optional target for the new content
     *
     * @throws UnsupportedContentUrlException
     *      if the content URL supplied is not supported by the store
     * @throws ContentExistsException
     *      if the content URL is already in use
     * @throws ContentIOException
     *      if an IO error occurs
     *
     * @since 2.1
     */
    protected ContentWriter getWriterInternal(ContentReader existingContentReader, String newContentUrl)
    {
        throw new UnsupportedOperationException("Override getWriterInternal (preferred) or getWriter");
    }
    /**
     * An implementation that does some sanity checking before requesting a writer from the
     * store. If this method is not overridden, then an implementation of
     * {@link #getWriterInternal(ContentReader, String)} must be supplied.
     *
     * @see #getWriterInternal(ContentReader, String)
     * @since 2.1
     */
    public ContentWriter getWriter(ContentContext context)
    {
        ContentReader existingContentReader = context.getExistingContentReader();
        String contentUrl = context.getContentUrl();
        // Check if the store handles writes
        if (!isWriteSupported())
        {
            if (logger.isDebugEnabled())
            {
                logger.debug(
                        "Write requests are not supported for this store:\n" +
                        " Store: " + this + "\n" +
                        " Context: " + context);
            }
            throw new UnsupportedOperationException("Write operations are not supported by this store: " + this);
        }
        // Check the content URL
        if (contentUrl != null)
        {
            if (!isContentUrlSupported(contentUrl))
            {
                if (logger.isDebugEnabled())
                {
                    logger.debug(
                            "Specific writer content URL is unsupported: \n" +
                            " Store: " + this + "\n" +
                            " Context: " + context);
                }
                throw new UnsupportedContentUrlException(this, contentUrl);
            }
            else if (exists(contentUrl))
            {
                if (logger.isDebugEnabled())
                {
                    logger.debug(
                            "The content location is already used: \n" +
                            " Store: " + this + "\n" +
                            " Context: " + context);
                }
                throw new ContentExistsException(this, contentUrl);
            }
        }
        // Get the writer
        ContentWriter writer = getWriterInternal(existingContentReader, contentUrl);
        // Done
        if (logger.isDebugEnabled())
        {
            logger.debug(
                    "Fetched new writer: \n" +
                    " Store: " + this + "\n" +
                    " Context: " + context + "\n" +
                    " Writer: " + writer);
        }
        return writer;
    }

    /**
     * @see ContentContext
     * @see ContentStore#getWriter(ContentContext)
     */
    public final ContentWriter getWriter(ContentReader existingContentReader, String newContentUrl)
    {
        ContentContext ctx = new ContentContext(existingContentReader, newContentUrl);
        return getWriter(ctx);
    }
    /**
     * Simple implementation that uses the
     * {@link ContentReader#exists() reader's exists} method as its implementation.
     * Override this method if a more efficient implementation is possible.
     */
    public boolean exists(String contentUrl)
    {
        ContentReader reader = getReader(contentUrl);
        return reader.exists();
    }

    /**
     * Uses {@link #getSpaceUsed()}, which is the equivalent method. This method is now
     * final in order to catch any implementations that should switch over to {@link #getSpaceUsed()}.
     */
    public final long getTotalSize()
    {
        return getSpaceUsed();
    }

    /**
     * @return Returns <tt>-1</tt> always
     */
    public long getSpaceUsed()
    {
        return -1L;
    }

    /**
     * @return Returns <tt>-1</tt> always
     */
    @Override
    public long getSpaceFree()
    {
        return -1;
    }

    /**
     * @return Returns <tt>-1</tt> always
     */
    @Override
    public long getSpaceTotal()
    {
        return -1;
    }

    /**
     * {@inheritDoc}
     */
    public String getRootLocation()
    {
        return ".";
    }

    public void setContentLimitProvider(ContentLimitProvider contentLimitProvider)
    {
        this.contentLimitProvider = contentLimitProvider;
    }
}
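
// ---------------------------------------------------------------------------------
// Illustrative sketch only (not part of the original file): a minimal read-only
// store built on AbstractContentStore. The class name is hypothetical, and
// EmptyContentReader is assumed to be available in this package for reporting
// missing content. A real, writable store would also override
// getWriterInternal(ContentReader, String) to return a usable ContentWriter.
// ---------------------------------------------------------------------------------
class ExampleReadOnlyContentStore extends AbstractContentStore
{
    /**
     * This store never accepts writes, so {@link #getWriter(ContentContext)} will
     * reject all requests with an UnsupportedOperationException.
     */
    public boolean isWriteSupported()
    {
        return false;
    }

    /**
     * Resolves a reader for the given URL, using the base-class helpers to
     * validate and split the URL first.
     */
    public ContentReader getReader(String contentUrl)
    {
        // getContentUrlParts(...) enforces the protocol://identifier contract and
        // throws UnsupportedContentUrlException for URLs this store cannot handle
        Pair<String, String> urlParts = getContentUrlParts(contentUrl);
        // A real implementation would use urlParts.getSecond() to locate the binary;
        // this sketch always reports the content as missing
        return new EmptyContentReader(contentUrl);
    }
}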