Mirror of https://github.com/Alfresco/alfresco-community-repo.git (synced 2025-06-30 18:15:39 +00:00)
44674: Fix for ALF-17189 - The "Company Home" item in the top navigator menu and in the toolbar panel is invisible if logging in as guest first and then directly accessing the login page via URL.
44701: Merged BRANCHES/DEV/V3.4-BUG-FIX to BRANCHES/DEV/V4.1-BUG-FIX
  44700: Fix for ALF-10369 - support for OPTIONS requests in the WebScript framework and Share proxy
44709: ALF-17164 Fix version.properties, which was wrong in the SDK zip
44710: ALF-14570 ("Check out" outbound rule works incorrectly)
44722: MNT-246: Need the ability to configure a proxy with Enterprise Sync.
  - RemoteConnectorServiceImpl will now use an http/https proxy if the standard system properties for Java network proxy configuration are found. See http://docs.oracle.com/javase/6/docs/technotes/guides/net/proxies.html (Sections 2.1 and 2.2)
44730: Merged V4.1 to V4.1-BUG-FIX
  44461: Merged PATCHES/V4.1.1 to V4.1
    44060: ALF-16962 / MNT-221 Links from a deleted user cause an error on the "Links" page
    44129: ALF-17134 / MNT-223: Unbound SOLR result set (from Explorer trashcan query) consumes heap
      - We now emulate the behaviour of ACLEntryAfterInvocationProvider in SolrQueryHTTPClient, thus limiting otherwise unconstrained SOLR queries to a finite number of results
      - New solr subsystem parameter solr.query.maximumResultsFromUnlimitedQuery introduced
      - Its default value is ${system.acl.maxPermissionChecks}, providing backward compatibility with the old behaviour (1000 results max)
      - When there are no other limits in the search parameters, this value will be used to limit the number of results
      - SolrJSONResultSet.getResultSetMetaData().getLimitedBy() will return an appropriate LimitBy value, according to how the query was limited
    44130: ALF-17134 / MNT-223: Unbound SOLR result set (from Explorer trashcan query) consumes heap
      - Improved calculation of SolrJSONResultSet.getResultSetMetaData().getLimitedBy() to better emulate ACLEntryAfterInvocationProvider
    44141: ALF-17134 / MNT-223: Unbound SOLR result set (from Explorer trashcan query) consumes heap
      - Correction to handling of limited queries (Share search works again!)
    44291: ALF-17094 / MNT-222 InvalidNodeRefException on user deletion in the Share UI
  44462: Merged PATCHES/V4.0.2 to V4.1
    44221: ALF-17038 / MNT-226: Out-of-order versions for existing data during migration from 3.4.9 to 4.0.2.19
      - Have been able to remove the need for any Comparators in the normal case. As Dave said, he thought it was ordered already. It is with "assoc.assoc_index ASC, assoc.id ASC". Required a bit of refactoring of Version2ServiceImpl to do it, as they were referenced/used in a couple of other classes.
      - Modified all 43 Oracle sequences to include ORDER in the create statement. Probably only really needed on alf_child_assoc_seq to fix this issue, but it will stop similar issues in other clustered database setups. Did not change the upgrade scripts, as this will give us a clue that there will be existing data issues.
      - The name of a Comparator<Version> may be specified in the Alfresco global property org.alfresco.repo.version.common.VersionLabelComparator, and it will be used by BOTH Version2ServiceImpl and VersionServiceImpl. They in turn pass it on to Version2ServiceImpl instances when they create them.
      - A VersionLabelComparator already existed (still deprecated, as we don't normally use it) and works: org.alfresco.repo.version.common.VersionLabelComparator.
      - Customers with out-of-sequence ids on Oracle RDBMS using a clustered database may 'correct on the fly' the order of their versions by setting the Alfresco global property described above.
      - Have tested both with and without a comparator in a development environment. Using breakpoints and Collections.shuffle(version) in an expression was able to simulate out-of-order IDs.
      - New unit tests added to VersionHistoryImplTest and VersionServiceImplTest to test db ids out of order
    44336: ALF-15935: Fixed SecureContext errors when ticket has expired. MNT-180
  44467: Fixed compilation failure
  44520: ALF-16590: Improved fix after testing by Mark Lugert
  44563: Merged DEV to V4.1 (with corrections)
    44547: ALF-17132: Possible XSS - arbitrary url parameters re-sent to the browser. Escaping of keys and values of request attributes
  44610: Merged PATCHES/V4.0.2 to V4.1
    44435: ALF-17183: Merged DEV to V4.0.2 (4.0.2.22)
      44429: MNT-232: Upgrade from 3.4.9 to 4.0.2 - FAILED - Initialize rootRefs in the property definition to prevent NPE.
  44591: Fix to CIFS-reported user free space when disk quotas are not enabled.
  44595: ALF-17184 / MNT-243 Minimal fix for disk size and user quotas. (Bring values into line with API.)
  44601: ALF-17184 / MNT-243 - Implementation of file size on AbstractTenantRoutingContentStore.
  44608: ALF-15935 / MNT-180: Moved closeFile() call to the closeConnection() cleanup method; always call closeFile(). Do not check if the file is marked as closed during cleanup; only open files should still be in the file table.
  44652: ALF-17117: Created article or publication can't be viewed on WQS site - Fixes by Dmitry Vaserin
    - Removed unnecessary outer read locks from getRelatedAssets and getRelatedAsset to prevent deadlock
    - Corrected markup error when a node doesn't have tags
  44653: ALF-17117: Created article or publication can't be viewed on WQS site - Missed file from previous checkin
  44682: ALF-17118 WQS: Impossible to upload document to publications space - Only the first part, to do with the transformation failure, has been committed.
44731: Merged V4.1 to V4.1-BUG-FIX (RECORD ONLY)
  44441: Merge V4.1-BUG-FIX to V4.1
    44270: Merge V3.4-BUG-FIX to V4.1-BUG-FIX
      44266: BDE-111: harden generation of Windows installers - make sure the build fails if installer generation fails - generate Windows unsigned installers in a place that is cleaned later, avoiding leftovers
  44598: Merged V4.1-BUG-FIX to V4.1
    44541: Fix for ALF-17151 SOLR - add support to disable permission checks
    44577: Final part of ALF-16558 SOLR tracking does not do incremental updates but one single chunk - fixed code so SolrSearchers are held for as little time as possible
  44607: Merged V4.1-BUG-FIX to V4.1
    44603: ALF-14201: upgrade activiti to 5.7-20121211
    44606: ALF-14201: upgrade activiti to 5.7-20121211 in Maven poms

git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@44732 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
460 lines
16 KiB
Java
/*
 * Copyright (C) 2005-2010 Alfresco Software Limited.
 *
 * This file is part of Alfresco
 *
 * Alfresco is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Alfresco is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
 */
package org.alfresco.repo.content;

import java.util.Date;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.ReadLock;
import java.util.concurrent.locks.ReentrantReadWriteLock.WriteLock;

import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.repo.cache.SimpleCache;
import org.alfresco.service.cmr.repository.ContentIOException;
import org.alfresco.service.cmr.repository.ContentReader;
import org.alfresco.service.cmr.repository.ContentWriter;
import org.alfresco.util.GUID;
import org.alfresco.util.Pair;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

/**
 * A store providing support for content store implementations that provide
 * routing of content read and write requests based on context.
 *
 * @see ContentContext
 *
 * @since 2.1
 * @author Derek Hulley
 */
public abstract class AbstractRoutingContentStore implements ContentStore
{
    private static Log logger = LogFactory.getLog(AbstractRoutingContentStore.class);

    private String instanceKey = GUID.generate();
    private SimpleCache<Pair<String, String>, ContentStore> storesByContentUrl;
    private ReadLock storesCacheReadLock;
    private WriteLock storesCacheWriteLock;

    protected AbstractRoutingContentStore()
    {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        storesCacheReadLock = lock.readLock();
        storesCacheWriteLock = lock.writeLock();
    }

    /**
     * @param storesCache cache of stores used to access URLs
     */
    public void setStoresCache(SimpleCache<Pair<String, String>, ContentStore> storesCache)
    {
        this.storesByContentUrl = storesCache;
    }

    /**
     * @return Returns a list of all possible stores available for reading or writing
     */
    protected abstract List<ContentStore> getAllStores();

    /**
     * Get a content store based on the context provided. The applicability of the
     * context and even the types of context allowed are up to the implementation, but
     * normally there should be a fallback case for when the parameters are not adequate
     * to make a decision.
     *
     * @param ctx the context to use to make the choice
     * @return Returns the store most appropriate for the given context and
     *         <b>never <tt>null</tt></b>
     */
    protected abstract ContentStore selectWriteStore(ContentContext ctx);

    /**
     * Checks the cache for the store and ensures that the URL is in the store.
     *
     * @param contentUrl the content URL to search for
     * @return Returns the store matching the content URL
     */
    private ContentStore selectReadStore(String contentUrl)
    {
        Pair<String, String> cacheKey = new Pair<String, String>(instanceKey, contentUrl);
        storesCacheReadLock.lock();
        try
        {
            // Check if the store is in the cache
            ContentStore store = storesByContentUrl.get(cacheKey);
            if (store != null)
            {
                // We found a store that was previously used
                try
                {
                    // It is possible for content to be removed from a store and
                    // it might have moved into another store.
                    if (store.exists(contentUrl))
                    {
                        // We found a store and can use it
                        return store;
                    }
                }
                catch (UnsupportedContentUrlException e)
                {
                    // This is odd. The store that previously supported the content URL
                    // no longer does so. I can't think of a reason why that would be.
                    throw new AlfrescoRuntimeException(
                            "Found a content store that previously supported a URL, but no longer does: \n" +
                            "   Store:       " + store + "\n" +
                            "   Content URL: " + contentUrl);
                }
            }
        }
        finally
        {
            storesCacheReadLock.unlock();
        }
        // Get the write lock and double check
        storesCacheWriteLock.lock();
        try
        {
            // Double check
            ContentStore store = storesByContentUrl.get(cacheKey);
            if (store != null && store.exists(contentUrl))
            {
                // We found a store and can use it
                if (logger.isDebugEnabled())
                {
                    logger.debug(
                            "Found mapped store for content URL: \n" +
                            "   Content URL: " + contentUrl + "\n" +
                            "   Store:       " + store);
                }
                return store;
            }
            else
            {
                store = null;
            }
            // It isn't, so search all the stores
            List<ContentStore> stores = getAllStores();
            // Keep track of the unsupported state of the content URL - it might be a rubbish URL
            boolean contentUrlSupported = false;
            for (ContentStore storeInList : stores)
            {
                boolean exists = false;
                try
                {
                    exists = storeInList.exists(contentUrl);
                    // At least the content URL was supported
                    contentUrlSupported = true;
                }
                catch (UnsupportedContentUrlException e)
                {
                    // The store can't handle the content URL
                }
                if (!exists)
                {
                    // It is not in the store
                    continue;
                }
                // We found one
                store = storeInList;
                // Put the value in the cache
                storesByContentUrl.put(cacheKey, store);
                break;
            }
            // Check if the content URL was supported
            if (!contentUrlSupported)
            {
                throw new UnsupportedContentUrlException(this, contentUrl);
            }
            // Done
            if (logger.isDebugEnabled())
            {
                logger.debug(
                        "Mapped content URL to store for reading: \n" +
                        "   Content URL: " + contentUrl + "\n" +
                        "   Store:       " + store);
            }
            return store;
        }
        finally
        {
            storesCacheWriteLock.unlock();
        }
    }

    /**
     * @return Returns <tt>true</tt> if the URL is supported by any of the stores.
     */
    public boolean isContentUrlSupported(String contentUrl)
    {
        List<ContentStore> stores = getAllStores();
        boolean supported = false;
        for (ContentStore store : stores)
        {
            if (store.isContentUrlSupported(contentUrl))
            {
                supported = true;
                break;
            }
        }
        // Done
        if (logger.isDebugEnabled())
        {
            logger.debug("The url " + (supported ? "is" : "is not") + " supported by at least one store.");
        }
        return supported;
    }

    /**
     * @return Returns <tt>true</tt> if write is supported by any of the stores.
     */
    public boolean isWriteSupported()
    {
        List<ContentStore> stores = getAllStores();
        boolean supported = false;
        for (ContentStore store : stores)
        {
            if (store.isWriteSupported())
            {
                supported = true;
                break;
            }
        }
        // Done
        if (logger.isDebugEnabled())
        {
            logger.debug("Writing " + (supported ? "is" : "is not") + " supported by at least one store.");
        }
        return supported;
    }

    /**
     * @return Returns <b>.</b> always
     */
    public String getRootLocation()
    {
        return ".";
    }

    /**
     * Uses {@link #getSpaceUsed()}, which is the equivalent method. This method is now
     * final in order to catch any implementations that should switch over to {@link #getSpaceUsed()}.
     */
    public final long getTotalSize()
    {
        return getSpaceUsed();
    }

    /**
     * @return Returns <tt>-1</tt> always
     */
    public long getSpaceUsed()
    {
        return -1L;
    }

    /**
     * @return Returns <tt>-1</tt> always
     */
    @Override
    public long getSpaceFree()
    {
        return -1L;
    }

    /**
     * @return Returns <tt>-1</tt> always
     */
    @Override
    public long getSpaceTotal()
    {
        return -1L;
    }

    /**
     * @see #selectReadStore(String)
     */
    public boolean exists(String contentUrl) throws ContentIOException
    {
        ContentStore store = selectReadStore(contentUrl);
        return (store != null);
    }

    /**
     * @return Returns a valid reader from one of the stores, otherwise
     *         an {@link EmptyContentReader} is returned.
     */
    public ContentReader getReader(String contentUrl) throws ContentIOException
    {
        ContentStore store = selectReadStore(contentUrl);
        if (store != null)
        {
            if (logger.isDebugEnabled())
            {
                logger.debug("Getting reader from store: \n" +
                        "   Content URL: " + contentUrl + "\n" +
                        "   Store:       " + store);
            }
            return store.getReader(contentUrl);
        }
        else
        {
            if (logger.isDebugEnabled())
            {
                logger.debug("Getting empty reader for content URL: " + contentUrl);
            }
            return new EmptyContentReader(contentUrl);
        }
    }

    /**
     * Selects a store for the given context and caches the store that was used.
     *
     * @see #selectWriteStore(ContentContext)
     */
    public ContentWriter getWriter(ContentContext context) throws ContentIOException
    {
        String contentUrl = context.getContentUrl();
        Pair<String, String> cacheKey = new Pair<String, String>(instanceKey, contentUrl);
        if (contentUrl != null)
        {
            // Check to see if it is in the cache
            storesCacheReadLock.lock();
            try
            {
                // Check if the store is in the cache
                ContentStore store = storesByContentUrl.get(cacheKey);
                if (store != null)
                {
                    throw new ContentExistsException(this, contentUrl);
                }
                /*
                 * We could go further and check each store for the existence of the URL,
                 * but that would be overkill. The main problem we need to prevent is
                 * the simultaneous access of the same store. The router represents
                 * a single store and therefore if the URL is present in any of the stores,
                 * it is effectively present in all of them.
                 */
            }
            finally
            {
                storesCacheReadLock.unlock();
            }
        }
        // Select the store for writing
        ContentStore store = selectWriteStore(context);
        // Check that we were given a valid store
        if (store == null)
        {
            throw new NullPointerException(
                    "Unable to find a writer. 'selectWriteStore' may not return null: \n" +
                    "   Router: " + this + "\n" +
                    "   Chose:  " + store);
        }
        else if (!store.isWriteSupported())
        {
            throw new AlfrescoRuntimeException(
                    "A write store was chosen that doesn't support writes: \n" +
                    "   Router: " + this + "\n" +
                    "   Chose:  " + store);
        }
        ContentWriter writer = store.getWriter(context);
        String newContentUrl = writer.getContentUrl();
        Pair<String, String> newCacheKey = new Pair<String, String>(instanceKey, newContentUrl);
        // Cache the store against the URL
        storesCacheWriteLock.lock();
        try
        {
            storesByContentUrl.put(newCacheKey, store);
        }
        finally
        {
            storesCacheWriteLock.unlock();
        }
        // Done
        if (logger.isDebugEnabled())
        {
            logger.debug(
                    "Got writer and cache URL from store: \n" +
                    "   Context: " + context + "\n" +
                    "   Writer:  " + writer + "\n" +
                    "   Store:   " + store);
        }
        return writer;
    }

    public ContentWriter getWriter(ContentReader existingContentReader, String newContentUrl) throws ContentIOException
    {
        return getWriter(new ContentContext(existingContentReader, newContentUrl));
    }

    /**
     * @see #getUrls(Date, Date, ContentUrlHandler)
     */
    public void getUrls(ContentUrlHandler handler) throws ContentIOException
    {
        getUrls(null, null, handler);
    }

    /**
     * Passes the call to each of the stores wrapped by this store.
     *
     * @see ContentStore#getUrls(Date, Date, ContentUrlHandler)
     */
    public void getUrls(Date createdAfter, Date createdBefore, ContentUrlHandler handler) throws ContentIOException
    {
        List<ContentStore> stores = getAllStores();
        for (ContentStore store : stores)
        {
            try
            {
                store.getUrls(createdAfter, createdBefore, handler);
            }
            catch (UnsupportedOperationException e)
            {
                // Support of this is not mandatory
            }
        }
    }

    /**
     * This operation has to be performed on all the stores in order to maintain the
     * {@link ContentStore#exists(String)} contract.
     */
    public boolean delete(String contentUrl) throws ContentIOException
    {
        boolean deleted = true;
        List<ContentStore> stores = getAllStores();
        for (ContentStore store : stores)
        {
            if (store.isWriteSupported())
            {
                deleted &= store.delete(contentUrl);
            }
        }
        // Done
        if (logger.isDebugEnabled())
        {
            logger.debug("Deleted content URL from stores: \n" +
                    "   Stores:  " + stores.size() + "\n" +
                    "   Deleted: " + deleted);
        }
        return deleted;
    }
}
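The core pattern in `selectReadStore` above is a read-through cache over a list of candidate stores: check the cache, re-validate the hit, and fall back to scanning every store on a miss. The following is a minimal, self-contained sketch of that idea. All names here (`Store`, `MapStore`, `RoutingStore`) are hypothetical stand-ins, not Alfresco APIs; the sketch also omits the read/write lock pair and the double-check that the real class performs under the write lock.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical minimal store contract (stand-in for ContentStore)
interface Store
{
    boolean exists(String url);
}

// Simple in-memory store used to exercise the router
class MapStore implements Store
{
    private final Map<String, String> data = new ConcurrentHashMap<>();

    void put(String url, String content)
    {
        data.put(url, content);
    }

    public boolean exists(String url)
    {
        return data.containsKey(url);
    }
}

// Routes reads to whichever store holds the URL, caching the winner
class RoutingStore
{
    private final List<Store> stores;
    private final Map<String, Store> cache = new ConcurrentHashMap<>();

    RoutingStore(List<Store> stores)
    {
        this.stores = stores;
    }

    Store selectReadStore(String url)
    {
        Store cached = cache.get(url);
        if (cached != null && cached.exists(url))
        {
            // Fast path: cache hit, re-validated because content can move between stores
            return cached;
        }
        for (Store s : stores)
        {
            // Slow path: scan all stores and remember the one that has the URL
            if (s.exists(url))
            {
                cache.put(url, s);
                return s;
            }
        }
        // Not present in any store
        return null;
    }
}

public class RoutingDemo
{
    public static void main(String[] args)
    {
        MapStore a = new MapStore();
        MapStore b = new MapStore();
        b.put("store://x", "hello");
        RoutingStore router = new RoutingStore(List.of(a, b));
        System.out.println(router.selectReadStore("store://x") == b);        // prints true
        System.out.println(router.selectReadStore("store://missing") == null); // prints true
    }
}
```

Note the re-validation of cache hits: like the real class, the sketch calls `exists` again on a cached store, because content may have been deleted from or moved out of the store since the entry was cached.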