Merged V3.3 to HEAD

20128: Reverse part of previous build fix that breaks other tests ...
   20129: ALF-202, ALF-1488: Fixed portlets in alfresco.war
      - Stop excluding portlet.xml from alfresco.war
      - Used JSR 286 ResourceURL solution to get upload iframes to work in portlets
      - Removed horrific hacks concerning faces session map resolution in portlets and upload servlet
      - WebClientPortletAuthenticator now dispatches to a helper servlet, AuthenticatorServlet, allowing it to use identical servlet mechanisms to authenticate / sign-on the user
      - Portlet-authenticated user is now set consistently in an application-scoped attribute, so the web client, web script portlets and client portlet share the same notion of user ID
      - Application.inPortalServer flag now thread local (and thread safe!)
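      - Minimal sketch of the thread-local flag pattern described above (illustrative only; the real Application class differs and the names here are assumptions):

           public final class PortalFlagSketch
           {
               // Each request-handling thread records whether it is currently running inside a portal container
               private static final ThreadLocal<Boolean> IN_PORTAL_SERVER = new ThreadLocal<Boolean>()
               {
                   @Override
                   protected Boolean initialValue()
                   {
                       return Boolean.FALSE;
                   }
               };

               public static boolean isInPortalServer()
               {
                   return IN_PORTAL_SERVER.get().booleanValue();
               }

               public static void setInPortalServer(boolean inPortal)
               {
                   IN_PORTAL_SERVER.set(Boolean.valueOf(inPortal));
               }
           }

        Because the value is per-thread, concurrent portlet and plain-servlet requests no longer overwrite each other's flag.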
   20130: Merged BRANCHES/V2.2 to BRANCHES/V3.3
      13819: *RECORD ONLY* ACT-6420 - Office 2003 "Install for all users" - DO NOT MERGE
   20131: Merged BRANCHES/V3.1 to BRANCHES/V3.3
      19600: *RECORD ONLY* ALF-2205 - CLONE: Office Plugin: filename overlaps the plugin UI if longer than 40 characters without spaces
         Merged V3.2 to V3.1 (Adobe)
         17499: ETHREEOH-2322 - Office Plugin: filename overlaps the plugin UI if longer than 40 characters without spaces
         19443: ALF-2131 - Office webscripts: Missing close brace, '}'
   20132: ALF-2749 - temporarily skip couple of -ve checks (for MS SQL Server only)
   20133: Merged BRANCHES/V3.2 to BRANCHES/V3.3
      19550: *RECORD ONLY* ALF-1091 - Only 15 tags displayed in Tags section in Browser pane
   20134: Adding files missed during first commit of Meeting Workspace code
   20135: Merged V3.2 to V3.3
      19814: *RECORD ONLY* Fix for ALF-2322 - discussion topic containing non-ascii characters cannot be saved
      19934: *RECORD ONLY* Fix for ALF-2512 - ability to execute JavaScript via cmd servlet by a non-admin user disabled by default.
             - user script execution privileges can be reactivated if required via web-client-config flag <allow-user-script-execute>
      19935: *RECORD ONLY* Corrected imports for 3.2 compatibility
   20136: Merge Dev to V3.3
      20104: ALF-676 - imapFolders patch fails if the versionable aspect is mandatory on cm:content
   20137: Workaround for ALF-2639: Sharepoint: Share Edit Online uses Share protocol rather than Alfresco protocol to build link
      - Replace "https:" protocol with "http:" when generating "Edit Online" URL
   20138: Merged V3.1 to V3.3
      18204: *RECORD ONLY* Merged DEV/TEMPORARY to 3.1
         17837: ETHREEOH-3801: Creating users via the api does not add them to the user store
      18577: *RECORD ONLY* Fix for ETHREEOH-4117, based on CHK-11154
      19373: *RECORD ONLY* Merged V3.2 to V3.1
         19216: ENH-506 - allow script compilation to be disabled for the repository tier. Fix to an unreported issue with returning the aspect array from a ScriptNode.
   20139: Merged V2.2 to V3.3
      18518: *RECORD ONLY* Fix for ETWOTWO-1375
      18522: *RECORD ONLY* Merged DEV-TEMPORARY to V2.2
         18440: TinyMCE HTML Image gets invalid path
         18503: ETWOTWO-1035: Error message when bypassing the 'close' and directly clicking on breadcrumb link after a deployment
         18504: ETWOTWO-1035: Error message when bypassing the 'close' and directly clicking on breadcrumb link after a deployment
      18578: Merged DEV-TEMPORARY to V2.2
         18528: ETWOTWO-1114: Missing 'Required' items are not highlighted in the error when missed
      19094: *RECORD ONLY* Merged V3.1 to V2.2
         14015: Fixes for ETHREEOH-1864 and ETHREEOH-1840
   20140: Remove unwanted @Override
   20141: Lazy schema introspection to shave off a few seconds on startup
      - Saves about 5s on dev machine
      - Hibernate still has to look at the DB metadata, though
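      - Minimal sketch of lazy schema introspection (illustrative only, using plain JDBC DatabaseMetaData; not the actual SchemaBootstrap/Hibernate change, and the class name is an assumption):

           import java.sql.Connection;
           import java.sql.DatabaseMetaData;
           import java.sql.ResultSet;
           import java.sql.SQLException;
           import java.util.ArrayList;
           import java.util.List;

           /** Defers the expensive DatabaseMetaData walk until a caller actually needs it. */
           public class LazyTableNames
           {
               private final Connection connection;
               private volatile List<String> tableNames;    // null until first use

               public LazyTableNames(Connection connection)
               {
                   this.connection = connection;
               }

               public List<String> getTableNames() throws SQLException
               {
                   List<String> result = tableNames;
                   if (result == null)
                   {
                       synchronized (this)
                       {
                           if (tableNames == null)
                           {
                               List<String> names = new ArrayList<String>();
                               DatabaseMetaData meta = connection.getMetaData();
                               ResultSet rs = meta.getTables(null, null, "%", new String[] { "TABLE" });
                               try
                               {
                                   while (rs.next())
                                   {
                                       names.add(rs.getString("TABLE_NAME"));
                                   }
                               }
                               finally
                               {
                                   rs.close();
                               }
                               tableNames = names;
                           }
                           result = tableNames;
                       }
                   }
                   return result;
               }
           }

        Startup only pays for the metadata query when (and if) the information is first requested, which is the kind of saving the ~5s figure above refers to.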
   20144: Merged V2.2 to V3.3
      18859: (RECORD ONLY) ALF-1882: Merged V3.2 to V2.2
         17292: ETHREEOH-1842: Ticket association with HttpSession IDs tracked so that we don't invalidate a ticket in use by multiple sessions prematurely
            - AuthenticationService validate, getCurrentTicket, etc. methods now take optional sessionId arguments (see the sketch after this merge record)
      18864: (RECORD ONLY) ALF-1882: Fixed compilation error from previous checkin.
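      - Sketch of the ticket/session tracking described in 18859 above (illustrative only; the real AuthenticationService changes differ and the names here are assumptions):

           import java.util.HashMap;
           import java.util.HashSet;
           import java.util.Map;
           import java.util.Set;

           /** Tracks which HTTP sessions reference a ticket so only the last one triggers invalidation. */
           public class TicketSessionTracker
           {
               private final Map<String, Set<String>> sessionsByTicket = new HashMap<String, Set<String>>();

               public synchronized void associate(String ticket, String sessionId)
               {
                   Set<String> sessions = sessionsByTicket.get(ticket);
                   if (sessions == null)
                   {
                       sessions = new HashSet<String>();
                       sessionsByTicket.put(ticket, sessions);
                   }
                   sessions.add(sessionId);
               }

               /** Returns true only when no other session still uses the ticket, i.e. it is safe to invalidate. */
               public synchronized boolean release(String ticket, String sessionId)
               {
                   Set<String> sessions = sessionsByTicket.get(ticket);
                   if (sessions == null)
                   {
                       return true;
                   }
                   sessions.remove(sessionId);
                   if (sessions.isEmpty())
                   {
                       sessionsByTicket.remove(ticket);
                       return true;
                   }
                   return false;
               }
           }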
   20145: Merged V3.1 to V3.3
      19584: (RECORD ONLY) ALF-2207: Merged V3.2 to V3.1 (Adobe)
         18277: Merged DEV_TEMPORARY to V3.2
            18178: ETHREEOH-3222: ERROR [org.alfresco.webdav.protocol] WebDAV method not implemented - PROPPATCH
      19660: (RECORD ONLY) ALF-2266: Merged V3.2 to V3.1 (Adobe)
         19562: Merged DEV/BELARUS/V3.2-2010_02_24 to V3.2
            19244: ALF-1816: Email templates can no longer be selected when creating a rule for the action 'Send email to specified users' following an upgrade
               - A new patch has been created to create the invite email templates and notify email templates folders if they are absent. It also moves the default notify and invite templates into the appropriate folders.
      19662: (RECORD ONLY) Incremented version label
      19663: (RECORD ONLY) Corrected version label
      19779: (RECORD ONLY) Incremented version label
   20148: Merged PATCHES/V3.2.r to V3.3
      20029: ALF-2624: Avoid NPE in LDAP sync when there are dangling references and improve logging
      20053: (RECORD ONLY) Incremented version number
   20151: ALF-2749 - unit test fix (re-arranged -ve checks for txn boundaries, functionally equivalent)
   20152: Merged HEAD to BRANCHES/V3.3: (RECORD ONLY)
      20050: Fix ALF-2637: objectTypeId updatability reported as "readonly" rather than "oncreate"
      20051: Fix for ALF-2609:  CMIS ACL mapping improvements
      20052: Fix for ALF-2609:  CMIS ACL mapping improvements
      20086: Fix re-opened ALF-2637: "objectTypeId" updatability reported as "readonly" rather than "oncreate"
      20125: Fix ALF-2728: AtomPub renditions are not rendered as part of cmis:object, although their rel links are.
   20153: Merged HEAD to BRANCHES/V3.3: (RECORD ONLY)
      20067: Fix ALF-2691: Choice display names in Type Definition are not escaped properly in AtomPub binding
   20154: ALF-1598: Share - Edit online missing on preview page
      - Note: The details page doesn't know when Office opens the file, so may show stale information.
   20156: Build/unit test - comment-out force re-index (IndexCheckServiceImplTest)
   20157: Office add-in: Missing i18n string found whilst investigating ALF-605: Script error appears when starting to type a non-existent user in the "Assign to" field
      - Changed behaviour slightly so that the "start workflow" panel remains if an error occurs during submit
   20164: Fix trailing commas that MSIE doesn't like. Plus fix for renamed webscript reference.
   20168: Attempting to fix failing test in ThumbnailService.
      The change adds some extra logging and exception info too.
   20169: Build/unit test - temporarily put back "force re-index" (IndexCheckServiceImplTest)
      - TODO: re-work test for build env
   20170: Fix NPE (AVMStoreImpl.createSnapshot)
      - see DBC-HEADPOSTGRESQL-34
   20173: Propagate IOExceptions from retryable write transactions in AlfrescoDiskDriver
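      - The actual change (visible in the AlfrescoDiskDriver diff below) wraps checked IOExceptions in an unchecked carrier inside the retrying transaction callback and unwraps them afterwards, so the original exception reaches the caller instead of being re-wrapped by the transaction helper. Minimal standalone sketch of that wrap/unwrap pattern (CallableIO and PropagatingException mirror the diff; the surrounding class and the runInTransaction stand-in are assumptions):

           import java.io.IOException;
           import java.util.concurrent.Callable;

           public class WriteTransactionSketch
           {
               /** A Callable restricted to throwing IOException, as in the CallableIO interface in the diff. */
               public interface CallableIO<V> extends Callable<V>
               {
                   V call() throws IOException;
               }

               /** Unchecked carrier so a checked IOException can cross a callback that re-wraps checked exceptions. */
               static class PropagatingException extends RuntimeException
               {
                   private static final long serialVersionUID = 1L;
                   PropagatingException(Throwable cause) { super(cause); }
               }

               public <T> T doInWriteTransaction(final CallableIO<T> callback) throws IOException
               {
                   try
                   {
                       // Stand-in for RetryingTransactionHelper.doInTransaction(...)
                       return runInTransaction(new Callable<T>()
                       {
                           public T call()
                           {
                               try
                               {
                                   return callback.call();
                               }
                               catch (IOException e)
                               {
                                   throw new PropagatingException(e);    // wrap so the checked exception survives the callback
                               }
                           }
                       });
                   }
                   catch (PropagatingException e)
                   {
                       throw (IOException) e.getCause();                 // unwrap for the caller
                   }
               }

               private <T> T runInTransaction(Callable<T> work)
               {
                   try
                   {
                       return work.call();
                   }
                   catch (RuntimeException e)
                   {
                       throw e;
                   }
                   catch (Exception e)
                   {
                       throw new RuntimeException(e);
                   }
               }
           }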
   20176: Merge from V3.2 to V3.3. Merge ok'ed by Steve.
      20175: JMX configuration of enterprise logging broken
   20178: JodConverter loggers are now exposed in JMX.
      This follows on from check-ins 20175 (on V32) and 20176 (on V33) which fixed the JMX logging for enterprise code.
   20180: Fixes ALF-2021 by adding new date format properties and exposing YUI widget options.
   20185: Various core fixes and additional debug output. Part of ALF-1554.
   20186: Fix for OpenOffice multiple versions per edit problem. ALF-1554.
   20187: Merged BRANCHES/DEV/V3.3-BUG-FIX to BRANCHES/V3.3:
      20181: IndexCheckServiceImplTest - by default, check test store only (reduces current enterprise build time by nearly 1 hour!)
   20188: Fix -exploded build target for Share to copy core classes folder
   20191: Merged HEAD to BRANCHES/V3.3: (RECORD ONLY)
      20190: Fix ALF-2774: Atompub createDocument with versioningState=checkedout followed by checkin does not create major version, Fix ALF-2782: AtomPub binding incorrectly handles atom:title when no value is provided (often done for compliant atom entry)
   20193: Merge 3.2 to 3.3:
      19759: Fix for CIFS/CheckInOut.exe save of working copy breaks lock on original file. ALF-2028. (Record-only)
      19760: Fix for working copy checked out via CIFS is not accessible until FileStateReaper expires file state. ALF-962. (Record-only)
   20195: Form fields for numbers are now rendered much smaller than text fields following feedback from meetups. Must be included in 3.3 as requested by Paul.
   20197: Rules: Size property is now more user-friendly and IE bugs are fixed
      - Numbers and booleans were posted as strings to the server, causing comparisons against properties such as "Size" to fail on the server
      - Size, encoding & mimetype are now options by default in the "IF/Unless" drop downs
      - When comparing Size properties a "bytes" label is placed to the right of the text field
      - "Show more..." menu now displays aspect/type ids on mouse hover in the tree 
      - "Show more..." menu now displays a new column for the property name in the list next to the property displayLabel
      - The list in the "Show more..." menu now stays in its place instead of being pushed down in some browsers
      - IE css fixes to make rules look good in IE 6, 7 & 8
      - Fixed IE 6 & 7 issue with generateDomId & getAttribute("id") not being in sync
      - Fixed IE 6 & 7 issue where Selector.query only worked with "id" as root attribute
   20199: Merge 3.1 to 3.3 (All record-only):
      14483: Merged HEAD to v3.1:
                 13942 Added FTP IPv6 support. MOB-714.
      14484: Merged HEAD to v3.1:
                 13943 Added FTP IPv6 configuration. Added the ftp.ipv6 property. MOB-714.
      14523: Add trailing 'A' to CIFS server name, removed by recent checkin.
      14916: Fixes for local domain lookup when WINS is configured. ETHREEOH-2263.
      14921: Merge HEAD to V3.1:
                 14599: Fixes to file server ACL parsing, part of ETHREEOH-2177
      14930: Updated svn:mergeinfo
      15231: Fix for cut/paste file between folders on CIFS. ETHREEOH-2323.
      15570: Merge 3.2 to 3.1:
                 15548: CIFS server memory leak fixes (clear auth context, session close). ETHREEOH-2538
      15571: Merge 3.2 to 3.1:
                 15549: Check for null ClientInfo in the setCurrentUser() method and clear the auth context. Part of ETHREEOH-2538.
                 15550: Fixed performance issue in the continue search code, add warn level output of folder search timing.
      15572: Update svn:mergeinfo
      15627: Merge 3.2 to 3.1:
                 15626: Fixed NetBIOS reports an invalid packet during session connection, and connection stalls for a while. JLAN-86.
      15628: Update svn:mergeinfo
      15780: Fix for MS Office document locking issue. ETHREEOH-2579.
      15827: Fixed bug in delete node event processing.
      16160: Minor change to debug output
      16162: Add support for the . and .. pseudo entries in a folder search.
      16163: Added timestamp tracking via the file state cache, blending cached timestamps into file info/folder search results.
      16555: Fix for processing of NetBIOS packets over 64K in the older JNI code. Part of ETHREEOH-2882.
      16556: Fix for CIFS session leak and 100% CPU when connect/disconnecting quickly. ETHREEOH-2881.
      16559: Fix for ACL parsing in the standalone JLAN Server build. JLAN-89.
      16666: Fix for CIFS cannot handle requests over 64K in JNI code, causes session disconnect, standalone server. JLAN-91.
      16709: Fixed the FTP not logged on status return code, now uses reply code 530. JLAN-90.
      16710: Added CIFS NT status code/text for the 'account locked' status, 0xC0000234. ETHREEOH-2897.
      16717: Fixed setAllowConsoleShutdown setting in standalone server can cause infinite loop. JLAN-38.
      16718: Fix for Alfresco and AVM spaces are empty when viewed by FTP and Alfresco is run as non-root. ETHREEOH-2652.
      16727: Fix for unable to connect via FTP via Firefox (when anonymous logons are not enabled). ETHREEOH-2012.
      16987: Merge 2.2 to 3.1:
                 13089: (record-only) Fix "Read-Write transaction" exception, when the user does not exist. ETWOTWO-1055.
                 13091: (record-only) Fix for NFS server "Read-Write transaction started within read-only transaction" exception. ETWOTWO-1054.
                 14190: (record-only) Fix for cut/paste a folder from Alfresco CIFS to local drive loses folder contents. ETWOTWO-1159.
                 14191: (record-only) Additional fix for CIFS 'No more connections' error. ETWOTWO-556
                 14199: (record-only) Fix for NFS problem with Solaris doing an Access check on the share level handle. ETWOTWO-1225.
                 14210: (record-only) Added support for FTP EPRT and EPSV commands, on IPv4 only. ETWOTWO-325.
                 14216: (record-only) Fixed FTP character encoding, ported UTF8 normalizer code from v3.x. ETWOTWO-1151.
                 14229: (record-only) Remove unused import.
                 14655: (record-only) Convert content I/O exceptions to file server exceptions during write and truncate. ETWOTWO-1241.
                 14825: (record-only) Add support for the extended response to the CIFS NTCreateAndX call, back port of ETWOTWO-1232.
                 15869: (record-only) Port of desktop action client side EXE fixes from v3.x. ETWOTWO-1374.
      17130: Fix for cannot delete file via CIFS that has a thumbnail associated with it. ETHREEOH-3143 and ETHREEOH-3115.
      17359: Fix for CIFS/Kerberos/SPNEGO logon problem with Win2008/Win7 client. ETHREEOH-3225.
      17839: Rewrite the rename file logic to handle MS Office file rename patterns. ETHREEOH-1951.
      17842: Missing file from previous checkin.
      17843: Re-use open files for the same session/process id so that writes on each file handle go to the same file. Port of ETWOTWO-1250.
      17861: Merge 2.2 to 3.1:
                 17803: Re-use open files for the same session/process id so that writes on each file handle go to the same file. ETWOTWO-1250. (Record-only)
      18432: Added FTP data port range configuration via <dataPorts>n:n</dataPorts> config value. ETHREEOH-4103.
      18451: Fixed incorrect FTP debug level name.
   20200: Merge PATCHES/V3.2.1 to 3.3:
      20142: Added debug output to dump the restart file name for FindFirst/FindNext folder searches (via the 'Search' debug output level).
   20201: Merge PATCHES/V3.2.1 to 3.3:
      20143: Fix for files being skipped during a long folder listing via CIFS, ALF-2730.
   20202: Update svn:mergeinfo
   20219: Fix for ALF-2791 - correction to changes in rev 20129 so the upload file servlet path is generated for all cases.


git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@20567 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
Author: Dave Ward
Date: 2010-06-09 13:25:16 +00:00
Parent: ee4c496b7d
Commit: 0097d5a092
20 changed files with 667 additions and 218 deletions

View File

@@ -219,6 +219,9 @@
<!-- Installed AMP modules -->
<value>classpath*:alfresco/module/*/log4j.properties</value>
<!-- Enterprise extensions -->
<value>classpath*:alfresco/enterprise/*-log4j.properties</value>
<!-- Other installed extensions -->
<value>classpath*:alfresco/extension/*-log4j.properties</value>

View File

@@ -65,6 +65,7 @@
<property name="nodeMonitorFactory"><ref bean="nodeMonitorFactory"/></property>
<property name="nodeArchiveService"><ref bean="nodeArchiveService" /></property>
<property name="lockService"><ref bean="lockService" /></property>
<property name="policyFilter"><ref bean="policyBehaviourFilter" /></property>
</bean>
<bean id="nodeMonitorFactory" class="org.alfresco.filesys.repo.NodeMonitorFactory">

View File

@@ -1924,6 +1924,7 @@ public class ServerConfigurationBean extends AbstractServerConfigurationBean {
// Create the shared filesystem
filesys = new DiskSharedDevice(filesysName, filesysDriver, filesysContext);
filesys.setConfiguration( this);
// Check if the filesystem uses the file state cache, if so then add to the file state reaper
@@ -1987,6 +1988,7 @@ public class ServerConfigurationBean extends AbstractServerConfigurationBean {
// Create the shared filesystem
filesys = new DiskSharedDevice(filesysName, filesysDriver, filesysContext);
filesys.setConfiguration( this);
// Attach desktop actions to the filesystem
@@ -2071,7 +2073,10 @@ public class ServerConfigurationBean extends AbstractServerConfigurationBean {
// Create the shared filesystem
fsysConfig.addShare( new DiskSharedDevice( storeName, avmDriver, avmContext));
DiskSharedDevice filesys = new DiskSharedDevice( storeName, avmDriver, avmContext);
filesys.setConfiguration( this);
fsysConfig.addShare( filesys);
// DEBUG

View File

@@ -18,6 +18,7 @@
package org.alfresco.filesys.alfresco;
import java.io.IOException;
import java.util.concurrent.Callable;
import javax.transaction.Status;
@@ -163,8 +164,10 @@ public abstract class AlfrescoDiskDriver implements IOCtlInterface, Transactiona
* @param callback
* callback for the retryable operation
* @return the result of the operation
* @throws Exception
*/
public <T> T doInWriteTransaction(SrvSession sess, final Callable<T> callback)
public <T> T doInWriteTransaction(SrvSession sess, final CallableIO<T> callback)
throws IOException
{
Boolean wasInRetryingTransaction = m_inRetryingTransaction.get();
try
@@ -178,11 +181,18 @@ public abstract class AlfrescoDiskDriver implements IOCtlInterface, Transactiona
T result = m_transactionService.getRetryingTransactionHelper().doInTransaction(
new RetryingTransactionHelper.RetryingTransactionCallback<T>()
{
public T execute() throws Throwable
{
try
{
return callback.call();
}
catch (IOException e)
{
// Ensure original checked IOExceptions get propagated
throw new PropagatingException(e);
}
}
});
if (hadTransaction)
{
@@ -190,6 +200,11 @@ public abstract class AlfrescoDiskDriver implements IOCtlInterface, Transactiona
}
return result;
}
catch (PropagatingException e)
{
// Unwrap checked exceptions
throw (IOException) e.getCause();
}
finally
{
m_inRetryingTransaction.set(wasInRetryingTransaction);
@@ -390,4 +405,30 @@ public abstract class AlfrescoDiskDriver implements IOCtlInterface, Transactiona
((AlfrescoContext) ctx).initialize(this);
}
}
/**
* An extended {@link Callable} that throws {@link IOException}s.
*
* @param <V>
*/
public interface CallableIO <V> extends Callable<V>
{
public V call() throws IOException;
}
/**
* A wrapper for checked exceptions to be passed through the retrying transaction handler.
*/
protected static class PropagatingException extends RuntimeException
{
private static final long serialVersionUID = 1L;
/**
* @param cause
*/
public PropagatingException(Throwable cause)
{
super(cause);
}
}
}

View File

@@ -49,7 +49,7 @@ public abstract class AlfrescoNetworkFile extends NetworkFile implements Network
*
* @return FileState
*/
public final FileState getFileState()
public FileState getFileState()
{
return m_state;
}

View File

@@ -24,11 +24,9 @@ import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.StringTokenizer;
import java.util.concurrent.Callable;
import javax.transaction.UserTransaction;
import org.springframework.extensions.config.ConfigElement;
import org.alfresco.filesys.alfresco.AlfrescoDiskDriver;
import org.alfresco.jlan.server.SrvSession;
import org.alfresco.jlan.server.auth.ClientInfo;
@@ -80,6 +78,7 @@ import org.alfresco.service.namespace.RegexQNamePattern;
import org.alfresco.wcm.sandbox.SandboxConstants;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.extensions.config.ConfigElement;
/**
* AVM Repository Filesystem Driver Class
@@ -825,9 +824,9 @@ public class AVMDiskDriver extends AlfrescoDiskDriver implements DiskInterface
if ( logger.isDebugEnabled())
logger.debug("Close file " + file.getFullName());
doInWriteTransaction(sess, new Callable<Void>(){
doInWriteTransaction(sess, new CallableIO<Void>(){
public Void call() throws Exception
public Void call() throws IOException
{
// Close the file
@@ -896,9 +895,9 @@ public class AVMDiskDriver extends AlfrescoDiskDriver implements DiskInterface
try
{
doInWriteTransaction(sess, new Callable<Void>(){
doInWriteTransaction(sess, new CallableIO<Void>(){
public Void call() throws Exception
public Void call() throws IOException
{
// Create the new file entry
@@ -984,9 +983,9 @@ public class AVMDiskDriver extends AlfrescoDiskDriver implements DiskInterface
try
{
// Create a new file
return doInWriteTransaction(sess, new Callable<NetworkFile>(){
return doInWriteTransaction(sess, new CallableIO<NetworkFile>(){
public NetworkFile call() throws Exception
public NetworkFile call() throws IOException
{
// Create the new file entry
@@ -1080,9 +1079,9 @@ public class AVMDiskDriver extends AlfrescoDiskDriver implements DiskInterface
try
{
doInWriteTransaction(sess, new Callable<Void>(){
doInWriteTransaction(sess, new CallableIO<Void>(){
public Void call() throws Exception
public Void call() throws IOException
{
AVMNodeDescriptor nodeDesc = m_avmService.lookup(storePath.getVersion(), storePath.getAVMPath());
if (nodeDesc != null)
@@ -1159,9 +1158,9 @@ public class AVMDiskDriver extends AlfrescoDiskDriver implements DiskInterface
try
{
doInWriteTransaction(sess, new Callable<Void>(){
doInWriteTransaction(sess, new CallableIO<Void>(){
public Void call() throws Exception
public Void call() throws IOException
{
AVMNodeDescriptor nodeDesc = m_avmService.lookup(storePath.getVersion(), storePath.getAVMPath());
if (nodeDesc != null)
@@ -1695,9 +1694,9 @@ public class AVMDiskDriver extends AlfrescoDiskDriver implements DiskInterface
try
{
doInWriteTransaction(sess, new Callable<Void>(){
doInWriteTransaction(sess, new CallableIO<Void>(){
public Void call() throws Exception
public Void call() throws IOException
{
// Rename the file/folder
@@ -2004,9 +2003,9 @@ public class AVMDiskDriver extends AlfrescoDiskDriver implements DiskInterface
// Truncate or extend the file
if (avmFile.hasContentChannel() == false || avmFile.isWritable() == false)
{
doInWriteTransaction(sess, new Callable<Void>(){
doInWriteTransaction(sess, new CallableIO<Void>(){
public Void call() throws Exception
public Void call() throws IOException
{
file.truncateFile(siz);
file.flushFile();
@@ -2058,9 +2057,9 @@ public class AVMDiskDriver extends AlfrescoDiskDriver implements DiskInterface
// Write the data to the file
if (avmFile.hasContentChannel() == false || avmFile.isWritable() == false)
{
doInWriteTransaction(sess, new Callable<Void>(){
doInWriteTransaction(sess, new CallableIO<Void>(){
public Void call() throws Exception
public Void call() throws IOException
{
file.writeFile(buf, siz, bufoff, fileoff);
return null;

View File

@@ -1559,6 +1559,7 @@ public class ServerConfigurationBean extends AbstractServerConfigurationBean
// Create the shared filesystem
filesys = new DiskSharedDevice(filesystem.getDeviceName(), filesysDriver, (AVMContext)filesystem);
filesys.setConfiguration( this);
// Check if the filesystem uses the file state cache, if so then add to the file state reaper
@@ -1603,6 +1604,7 @@ public class ServerConfigurationBean extends AbstractServerConfigurationBean
// Create the shared filesystem
filesys = new DiskSharedDevice(filesystem.getDeviceName(), filesysDriver, filesysContext);
filesys.setConfiguration( this);
// Add any access controls to the share
@@ -1672,7 +1674,10 @@ public class ServerConfigurationBean extends AbstractServerConfigurationBean
// Create the shared filesystem
fsysConfig.addShare(new DiskSharedDevice(storeName, avmDriver, avmContext));
DiskSharedDevice filesys = new DiskSharedDevice(storeName, avmDriver, avmContext);
filesys.setConfiguration( this);
fsysConfig.addShare( filesys);
// DEBUG

View File

@@ -23,12 +23,14 @@ import org.alfresco.filesys.alfresco.AlfrescoContext;
import org.alfresco.filesys.alfresco.AlfrescoDiskDriver;
import org.alfresco.filesys.alfresco.IOControlHandler;
import org.alfresco.filesys.config.acl.AccessControlListBean;
import org.alfresco.jlan.server.config.CoreServerConfigSection;
import org.alfresco.jlan.server.core.DeviceContextException;
import org.alfresco.jlan.server.filesys.DiskInterface;
import org.alfresco.jlan.server.filesys.DiskSharedDevice;
import org.alfresco.jlan.server.filesys.FileName;
import org.alfresco.jlan.server.filesys.FileSystem;
import org.alfresco.jlan.server.filesys.quota.QuotaManagerException;
import org.alfresco.jlan.server.thread.ThreadRequestPool;
import org.alfresco.service.cmr.repository.NodeRef;
/**
@@ -65,6 +67,10 @@ public class ContentContext extends AlfrescoContext
private NodeMonitor m_nodeMonitor;
// Thread pool
private ThreadRequestPool m_threadPool;
/**
* Default constructor allowing initialization by container.
*/
@@ -251,6 +257,15 @@ public class ContentContext extends AlfrescoContext
return m_rootNodeRef;
}
/**
* Return the thread pool
*
* @return ThreadRequestPool
*/
public final ThreadRequestPool getThreadPool() {
return m_threadPool;
}
/**
* Close the filesystem context
*/
@@ -309,6 +324,12 @@ public class ContentContext extends AlfrescoContext
super.startFilesystem(share);
// Find the thread pool via the configuration
CoreServerConfigSection coreConfig = (CoreServerConfigSection) share.getConfiguration().getConfigSection( CoreServerConfigSection.SectionName);
if ( coreConfig != null)
m_threadPool = coreConfig.getThreadPool();
// Start the node monitor, if enabled
if ( m_nodeMonitor != null)

View File

@@ -73,6 +73,7 @@ import org.alfresco.jlan.util.WildCard;
import org.alfresco.model.ContentModel;
import org.alfresco.repo.admin.SysAdminParams;
import org.alfresco.repo.node.archive.NodeArchiveService;
import org.alfresco.repo.policy.BehaviourFilter;
import org.alfresco.repo.security.authentication.AuthenticationContext;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.alfresco.repo.transaction.RetryingTransactionHelper;
@@ -149,6 +150,8 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
private AuthenticationService authService;
private SysAdminParams sysAdminParams;
private BehaviourFilter policyBehaviourFilter;
// Node monitor factory
private NodeMonitorFactory m_nodeMonitorFactory;
@@ -271,6 +274,14 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
return lockService;
}
/**
* Get the policy behaviour filter, used to inhibit versioning on a per transaction basis
*/
public BehaviourFilter getPolicyFilter()
{
return policyBehaviourFilter;
}
/**
* @param contentService the content service
*/
@@ -389,6 +400,16 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
this.lockService = lockService;
}
/**
* Set the policy behaviour filter, used to inhibit versioning on a per transaction basis
*
* @param policyFilter PolicyBehaviourFilter
*/
public void setPolicyFilter(BehaviourFilter policyFilter)
{
this.policyBehaviourFilter = policyFilter;
}
/**
* Parse and validate the parameter string and create a device context object for this instance
* of the shared device. The same DeviceInterface implementation may be used for multiple
@@ -1595,7 +1616,6 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
nosharing = false;
// Check if the caller wants read access, check the sharing mode
// Check if the caller wants write access, check if the sharing mode allows write
else if ( params.isReadOnlyAccess() && (fstate.getSharedAccess() & SharingMode.READ) != 0)
nosharing = false;
@@ -1668,7 +1688,7 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
{
// Check if the file is already opened by this client/process
if ( tree.openFileCount() > 1) {
if ( tree.openFileCount() > 0) {
// Search the open file table for this session/virtual circuit
@@ -1703,8 +1723,14 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
// DEBUG
if ( logger.isDebugEnabled() && ctx.hasDebug(AlfrescoContext.DBG_FILE))
logger.debug("Re-use existing file open Path " + params.getPath() + ", PID=" + params.getProcessId());
logger.debug("Re-use existing file open Path " + params.getPath() + ", PID=" + params.getProcessId() + ", params=" +
( params.isReadOnlyAccess() ? "ReadOnly" : "Write") + ", file=" +
( contentFile.getGrantedAccess() == NetworkFile.READONLY ? "ReadOnly" : "Write"));
}
else if ( logger.isDebugEnabled() && ctx.hasDebug(AlfrescoContext.DBG_FILE))
logger.debug("Not re-using file path=" + params.getPath() + ", readWrite=" + (params.isReadWriteAccess() ? "true" : "false") +
", readOnly=" + (params.isReadOnlyAccess() ? "true" : "false") +
", grantedAccess=" + contentFile.getGrantedAccessAsString());
}
}
@@ -1716,8 +1742,12 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
// Create the network file, if we could not match an existing file open
if ( netFile == null)
netFile = ContentNetworkFile.createFile(nodeService, contentService, mimetypeService, cifsHelper, nodeRef, params);
if ( netFile == null) {
// Create a new network file for the open request
netFile = ContentNetworkFile.createFile(nodeService, contentService, mimetypeService, cifsHelper, nodeRef, params, sess);
}
}
else
{
@@ -1861,8 +1891,8 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
{
// Access the repository in a retryable write transaction
Pair<String, NodeRef> result = doInWriteTransaction(sess, new Callable<Pair<String, NodeRef>>(){
public Pair<String, NodeRef> call() throws Exception
Pair<String, NodeRef> result = doInWriteTransaction(sess, new CallableIO<Pair<String, NodeRef>>(){
public Pair<String, NodeRef> call() throws IOException
{
// Get the device root
@@ -1920,7 +1950,7 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
// Create the network file
ContentNetworkFile netFile = ContentNetworkFile.createFile(nodeService, contentService, mimetypeService, cifsHelper, result.getSecond(), params);
ContentNetworkFile netFile = ContentNetworkFile.createFile(nodeService, contentService, mimetypeService, cifsHelper, result.getSecond(), params, sess);
// Always allow write access to a newly created file
@@ -2037,10 +2067,10 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
try
{
// Access the repository in a retryable write transaction
Pair<String, NodeRef> result = doInWriteTransaction(sess, new Callable<Pair<String, NodeRef>>()
Pair<String, NodeRef> result = doInWriteTransaction(sess, new CallableIO<Pair<String, NodeRef>>()
{
public Pair<String, NodeRef> call() throws Exception
public Pair<String, NodeRef> call() throws IOException
{
// get the device root
@@ -2166,9 +2196,9 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
try
{
NodeRef nodeRef = doInWriteTransaction(sess, new Callable<NodeRef>(){
NodeRef nodeRef = doInWriteTransaction(sess, new CallableIO<NodeRef>(){
public NodeRef call() throws Exception
public NodeRef call() throws IOException
{
// Get the node for the folder
@@ -2287,23 +2317,11 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
final ContentContext ctx = (ContentContext) tree.getContext();
FileState toUpdate = null;
// Check for a content file
if ( file instanceof ContentNetworkFile) {
// Decrement the file open count
ContentNetworkFile contentFile = (ContentNetworkFile) file;
if ( contentFile.decrementOpenCount() > 0) {
// DEBUG
if ( logger.isDebugEnabled() && ctx.hasDebug(AlfrescoContext.DBG_FILE))
logger.debug("Deferred file close, path=" + file.getFullName() + ", openCount=" + contentFile.getOpenCount());
// Defer the file close to the last reference
return;
}
// Update the file state
if ( ctx.hasStateCache())
{
@@ -2324,6 +2342,24 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
}
}
}
// Decrement the file open count
ContentNetworkFile contentFile = (ContentNetworkFile) file;
if ( contentFile.decrementOpenCount() > 0) {
// DEBUG
if ( logger.isDebugEnabled() && ctx.hasDebug(AlfrescoContext.DBG_FILE))
logger.debug("Deferred file close, path=" + file.getFullName() + ", openCount=" + contentFile.getOpenCount());
// Defer the file close to the last reference
return;
}
else if ( logger.isDebugEnabled())
logger.debug("Last reference to file, closing, path=" + file.getFullName() + ", access=" + file.getGrantedAccessAsString() + ", fid=" + file.getProtocolId());
}
// Check if there is a quota manager enabled
@@ -2347,13 +2383,50 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
// Perform repository updates in a retryable write transaction
final FileState finalFileState = toUpdate;
Pair<NodeRef, Boolean> result = doInWriteTransaction(sess, new Callable<Pair<NodeRef, Boolean>>()
Pair<NodeRef, Boolean> result = doInWriteTransaction(sess, new CallableIO<Pair<NodeRef, Boolean>>()
{
public Pair<NodeRef, Boolean> call() throws Exception
public Pair<NodeRef, Boolean> call() throws IOException
{
// Check if the file is an OpenOffice document and hte truncation flag is set
//
// Note: Check before the timestamp update
if ( file instanceof OpenOfficeContentNetworkFile) {
OpenOfficeContentNetworkFile ooFile = (OpenOfficeContentNetworkFile) file;
if ( ooFile.truncatedToZeroLength()) {
// Inhibit versioning for this transaction
getPolicyFilter().disableBehaviour( ContentModel.ASPECT_VERSIONABLE);
// Debug
if ( logger.isDebugEnabled() && ctx.hasDebug(AlfrescoContext.DBG_FILE))
logger.debug("OpenOffice file truncation update only, inhibit versioning, " + file.getFullName());
}
}
// Update the modification date on the file/folder node
if (finalFileState != null)
if (finalFileState != null && file instanceof ContentNetworkFile)
{
// Check if the file data has been updated, if not then inhibit versioning for this txn
// so the timestamp update does not generate a new file version
ContentNetworkFile contentFile = (ContentNetworkFile) file;
if ( contentFile.isModified() == false &&
nodeService.hasAspect((NodeRef) finalFileState.getFilesystemObject(), ContentModel.ASPECT_VERSIONABLE)) {
// Stop a new file version being generated
getPolicyFilter().disableBehaviour( ContentModel.ASPECT_VERSIONABLE);
// Debug
if ( logger.isDebugEnabled() && ctx.hasDebug(AlfrescoContext.DBG_FILE))
logger.debug("Timestamp update only, inhibit versioning, " + file.getFullName());
}
// Update the modification timestamp
Date modifyDate = new Date(finalFileState.getModifyDateTime());
nodeService.setProperty((NodeRef) finalFileState.getFilesystemObject(), ContentModel.PROP_MODIFIED, modifyDate);
@@ -2361,7 +2434,7 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
// Debug
if ( logger.isDebugEnabled() && ctx.hasDebug(AlfrescoContext.DBG_FILE))
logger.debug("Updated modifcation timestamp, " + file.getFullName() + ", modTime=" + modifyDate);
logger.debug("Updated modification timestamp, " + file.getFullName() + ", modTime=" + modifyDate);
}
// Defer to the network file to close the stream and remove the content
@@ -2396,9 +2469,17 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
}
catch ( Exception ex)
{
// Propagate retryable errors. Log the rest.
if (RetryingTransactionHelper.extractRetryCause(ex) != null)
{
throw ex;
if (ex instanceof RuntimeException)
{
throw (RuntimeException)ex;
}
else
{
throw new AlfrescoRuntimeException("Error during delete on close, " + file.getFullName(), ex);
}
}
if ( logger.isWarnEnabled() && ctx.hasDebug(AlfrescoContext.DBG_FILE))
logger.warn("Error during delete on close, " + file.getFullName(), ex);
@@ -2472,8 +2553,13 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
// DEBUG
if (logger.isDebugEnabled() && (ctx.hasDebug(AlfrescoContext.DBG_FILE) || ctx.hasDebug(AlfrescoContext.DBG_RENAME)))
if (logger.isDebugEnabled() && (ctx.hasDebug(AlfrescoContext.DBG_FILE) || ctx.hasDebug(AlfrescoContext.DBG_RENAME))) {
logger.debug("Closed file: network file=" + file + " delete on close=" + file.hasDeleteOnClose());
if ( file.hasDeleteOnClose() == false && file instanceof ContentNetworkFile) {
ContentNetworkFile cFile = (ContentNetworkFile) file;
logger.debug(" File " + file.getFullName() + ", version=" + nodeService.getProperty( cFile.getNodeRef(), ContentModel.PROP_VERSION_LABEL));
}
}
}
/**
@@ -2497,19 +2583,20 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
final QuotaManager quotaMgr = ctx.getQuotaManager();
// Perform repository updates in a retryable write transaction
Callable<Void> postTxn = doInWriteTransaction(sess, new Callable<Callable<Void>>()
Callable<Void> postTxn = doInWriteTransaction(sess, new CallableIO<Callable<Void>>()
{
public Callable<Void> call() throws Exception
public Callable<Void> call() throws IOException
{
// Get the size of the file being deleted
final FileInfo fInfo = quotaMgr == null ? null : getFileInformation(sess, tree, name);
// Get the node and delete it
final NodeRef nodeRef = getNodeForPath(tree, name);
Callable<Void> result = null;
if (fileFolderService.exists(nodeRef))
{
// Get the size of the file being deleted
final FileInfo fInfo = quotaMgr == null ? null : getFileInformation(sess, tree, name);
// Check if the node is versionable
final boolean isVersionable = nodeService.hasAspect(nodeRef, ContentModel.ASPECT_VERSIONABLE);
@@ -2517,6 +2604,7 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
fileFolderService.delete(nodeRef);
// Return the operations to perform when the transaction succeeds
result = new Callable<Void>()
{
@@ -2699,10 +2787,10 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
// Rename or move the file/folder
doInWriteTransaction(sess, new Callable<Void>()
doInWriteTransaction(sess, new CallableIO<Void>()
{
public Void call() throws Exception
public Void call() throws IOException
{
if (sameFolder == true)
cifsHelper.rename(nodeToMoveRef, name);
@@ -2727,10 +2815,10 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
final int newExists = fileExists(sess, tree, newName);
final FileState newState = ctx.getStateCache().findFileState(newName, true);
List<Runnable> postTxn = doInWriteTransaction(sess, new Callable<List<Runnable>>()
List<Runnable> postTxn = doInWriteTransaction(sess, new CallableIO<List<Runnable>>()
{
public List<Runnable> call() throws Exception
public List<Runnable> call() throws IOException
{
List<Runnable> postTxn = new LinkedList<Runnable>();
@@ -3005,9 +3093,9 @@ public class ContentDiskDriver extends AlfrescoDiskDriver implements DiskInterfa
final FileState fstate = getStateForPath(tree, name);
doInWriteTransaction(sess, new Callable<Pair<Boolean, Boolean>>(){
doInWriteTransaction(sess, new CallableIO<Pair<Boolean, Boolean>>(){
public Pair<Boolean, Boolean> call() throws Exception
public Pair<Boolean, Boolean> call() throws IOException
{
// Get the file/folder node

View File

@@ -29,6 +29,7 @@ import java.nio.charset.Charset;
import org.alfresco.error.AlfrescoRuntimeException;
import org.springframework.extensions.surf.util.I18NUtil;
import org.alfresco.jlan.server.SrvSession;
import org.alfresco.jlan.server.filesys.AccessDeniedException;
import org.alfresco.jlan.server.filesys.DiskFullException;
import org.alfresco.jlan.server.filesys.FileAttribute;
@@ -36,6 +37,7 @@ import org.alfresco.jlan.server.filesys.FileInfo;
import org.alfresco.jlan.server.filesys.FileOpenParams;
import org.alfresco.jlan.server.filesys.NetworkFile;
import org.alfresco.jlan.smb.SeekType;
import org.alfresco.jlan.smb.server.SMBSrvSession;
import org.alfresco.model.ContentModel;
import org.alfresco.repo.content.AbstractContentReader;
import org.alfresco.repo.content.encoding.ContentCharsetFinder;
@@ -93,29 +95,27 @@ public class ContentNetworkFile extends NodeRefNetworkFile
/**
* Helper method to create a {@link NetworkFile network file} given a node reference.
*/
public static ContentNetworkFile createFile(
NodeService nodeService,
ContentService contentService,
MimetypeService mimetypeService,
CifsHelper cifsHelper,
NodeRef nodeRef,
FileOpenParams params)
public static ContentNetworkFile createFile( NodeService nodeService, ContentService contentService, MimetypeService mimetypeService,
CifsHelper cifsHelper, NodeRef nodeRef, FileOpenParams params, SrvSession sess)
{
String path = params.getPath();
// Check write access
// TODO: Check access writes and compare to write requirements
// Create the file
ContentNetworkFile netFile = null;
if ( isMSOfficeSpecialFile(path)) {
if ( isMSOfficeSpecialFile(path, sess, nodeService, nodeRef)) {
// Create a file for special processing
netFile = new MSOfficeContentNetworkFile( nodeService, contentService, mimetypeService, nodeRef, path);
}
else if ( isOpenOfficeSpecialFile( path, sess, nodeService, nodeRef)) {
// Create a file for special processing
netFile = new OpenOfficeContentNetworkFile( nodeService, contentService, mimetypeService, nodeRef, path);
}
else {
// Create a normal content file
@@ -172,6 +172,10 @@ public class ContentNetworkFile extends NodeRefNetworkFile
netFile.setAttributes(fileInfo.getFileAttributes());
// Set the owner process id
netFile.setProcessId( params.getProcessId());
// If the file is read-only then only allow read access
if ( netFile.isReadOnly())
@@ -725,20 +729,62 @@ public class ContentNetworkFile extends NodeRefNetworkFile
logger.debug("Flush file=" + this);
}
/**
* Return the modified status
*
* @return boolean
*/
public final boolean isModified() {
return modified;
}
/**
* Check if the file is an MS Office document type that needs special processing
*
* @param path String
* @param sess SrvSession
* @param nodeService NodeService
* @param nodeRef NodeRef
* @return boolean
*/
private static final boolean isMSOfficeSpecialFile(String path) {
private static final boolean isMSOfficeSpecialFile( String path, SrvSession sess, NodeService nodeService, NodeRef nodeRef) {
// Check if the file extension indicates a problem MS Office format
path = path.toLowerCase();
if ( path.endsWith( ".xls"))
if ( path.endsWith( ".xls") && sess instanceof SMBSrvSession) {
// Check if the file is versionable
if ( nodeService.hasAspect( nodeRef, ContentModel.ASPECT_VERSIONABLE))
return true;
}
return false;
}
/**
* Check if the file is an OpenOffice document type that needs special processing
*
* @param path String
* @param sess SrvSession
* @param nodeService NodeService
* @param nodeRef NodeRef
* @return boolean
*/
private static final boolean isOpenOfficeSpecialFile( String path, SrvSession sess, NodeService nodeService, NodeRef nodeRef) {
// Check if the file extension indicates a problem OpenOffice format
path = path.toLowerCase();
if ( path.endsWith( ".odt") && sess instanceof SMBSrvSession) {
// Check if the file is versionable
if ( nodeService.hasAspect( nodeRef, ContentModel.ASPECT_VERSIONABLE))
return true;
}
return false;
}
}

View File

@@ -404,10 +404,16 @@ public class ContentSearchContext extends SearchContext
}
}
// Check if the resume file name is the last file returned, no need to reposition the file index
// Check if the resume file name is the last file returned
if ( m_lastFileName != null && info.getFileName().equalsIgnoreCase( m_lastFileName)) {
// Reset the index/resume id
index = index - 1;
resumeId = resId - 1;
donePseudoFiles = true;
// DEBUG
if ( logger.isDebugEnabled())

View File

@@ -22,6 +22,7 @@ package org.alfresco.filesys.repo;
import java.io.IOException;
import org.alfresco.jlan.server.filesys.FileInfo;
import org.alfresco.jlan.server.filesys.cache.FileState;
import org.alfresco.jlan.smb.SeekType;
import org.alfresco.service.cmr.repository.NodeRef;
@@ -83,7 +84,9 @@ public class LinkMemoryNetworkFile extends NodeRefNetworkFile
*/
public void closeFile() throws java.io.IOException
{
// Nothing to do
// Clear the file state
setFileState( null);
}
/**
@@ -247,4 +250,18 @@ public class LinkMemoryNetworkFile extends NodeRefNetworkFile
{
// Allow the write, just do not do anything
}
/**
* Return a dummy file state for this file
*
* @return FileState
*/
public FileState getFileState() {
// Create a dummy file state
if ( super.getFileState() == null)
setFileState(new FileState(getFullName()));
return super.getFileState();
}
}

View File

@@ -18,7 +18,6 @@
package org.alfresco.filesys.repo;
import org.alfresco.filesys.alfresco.AlfrescoNetworkFile;
import org.alfresco.jlan.server.filesys.NetworkFile;
import org.alfresco.service.cmr.repository.NodeRef;

View File

@@ -0,0 +1,200 @@
/*
* Copyright (C) 2005-2010 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.filesys.repo;
import java.io.IOException;
import org.alfresco.model.ContentModel;
import org.alfresco.service.cmr.repository.ContentService;
import org.alfresco.service.cmr.repository.MimetypeService;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
/**
* OpenOffice Content Network File Class
*
* <p>Provides special handling for OpenOffice file saves that open the file, truncate, close, then open the file
* again to write the data, as this causes multiple versions to be generated when the file is versionable.
*
* @author gkspencer
*/
public class OpenOfficeContentNetworkFile extends ContentNetworkFile {
// Debug logging
private static final Log logger = LogFactory.getLog(OpenOfficeContentNetworkFile.class);
// Flag to indicate the last I/O operation was a truncate file to zero size
private boolean m_truncateToZero;
// Delayed file close count
private int m_delayedClose;
/**
* Class constructor
*
* @param transactionService TransactionService
* @param nodeService NodeService
* @param contentService ContentService
* @param nodeRef NodeRef
* @param name String
*/
protected OpenOfficeContentNetworkFile(
NodeService nodeService,
ContentService contentService,
MimetypeService mimetypeService,
NodeRef nodeRef,
String name)
{
super(nodeService, contentService, mimetypeService, nodeRef, name);
// DEBUG
if (logger.isDebugEnabled())
logger.debug("Using OpenOffice network file for " + name + ", versionLabel=" + nodeService.getProperty( nodeRef, ContentModel.PROP_VERSION_LABEL));
}
/**
* Return the delayed close count
*
* @return int
*/
public final int getDelayedCloseCount() {
return m_delayedClose;
}
/**
* Increment the delayed close count
*/
public final void incrementDelayedCloseCount() {
m_delayedClose++;
// Clear the truncate to zero status
m_truncateToZero = false;
// DEBUG
if ( logger.isDebugEnabled())
logger.debug("Increment delayed close count=" + getDelayedCloseCount() + ", path=" + getName());
}
/**
* Check if the last file operation was a truncate to zero length
*
* @return boolean
*/
public final boolean truncatedToZeroLength() {
return m_truncateToZero;
}
/**
* Read from the file.
*
* @param buf byte[]
* @param len int
* @param pos int
* @param fileOff long
* @return Length of data read.
* @exception IOException
*/
public int readFile(byte[] buffer, int length, int position, long fileOffset)
throws IOException
{
// Clear the truncate flag
m_truncateToZero = false;
// Chain to the standard read
return super.readFile( buffer, length, position, fileOffset);
}
/**
* Write a block of data to the file.
*
* @param buf byte[]
* @param len int
* @param pos int
* @param fileOff long
* @exception IOException
*/
public void writeFile(byte[] buffer, int length, int position, long fileOffset)
throws IOException
{
// Clear the truncate flag
m_truncateToZero = false;
// Chain to the standard write
super.writeFile( buffer, length, position, fileOffset);
}
/**
* Truncate or extend the file to the specified length
*
* @param size long
* @exception IOException
*/
public void truncateFile(long size)
throws IOException
{
// Chain to the standard truncate
super.truncateFile( size);
// Check for a truncate to zero length
if ( size == 0L) {
m_truncateToZero = true;
// DEBUG
if ( logger.isDebugEnabled())
logger.debug("OpenOffice document truncated to zero length, path=" + getName());
}
}
/**
* Close the file
*
* @exception IOException
*/
public void closeFile()
throws IOException
{
// DEBUG
if ( logger.isDebugEnabled()) {
logger.debug("Close OpenOffice file, " + getName() + ", delayed close count=" + getDelayedCloseCount() + ", writes=" + getWriteCount() +
", modified=" + isModified());
logger.debug(" Open count=" + getOpenCount() + ", fstate open=" + getFileState().getOpenCount());
}
// Chain to the standard close
super.closeFile();
}
}

View File

@@ -24,17 +24,19 @@
*/
package org.alfresco.filesys.repo.desk;
import java.io.IOException;
import java.io.Serializable;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.filesys.alfresco.DesktopAction;
import org.alfresco.filesys.alfresco.DesktopParams;
import org.alfresco.filesys.alfresco.DesktopResponse;
import org.alfresco.filesys.alfresco.DesktopTarget;
import org.alfresco.filesys.alfresco.AlfrescoDiskDriver.CallableIO;
import org.alfresco.jlan.server.filesys.FileName;
import org.alfresco.jlan.server.filesys.FileStatus;
import org.alfresco.jlan.server.filesys.NotifyChange;
@@ -90,14 +92,14 @@ public class CheckInOutDesktopAction extends DesktopAction {
if ( params.numberOfTargetNodes() == 0)
return new DesktopResponse(StsSuccess);
class WriteTxn implements Callable<DesktopResponse>
class WriteTxn implements CallableIO<DesktopResponse>
{
private List<Pair<Integer, String>> fileChanges;
/* (non-Javadoc)
* @see java.util.concurrent.Callable#call()
*/
public DesktopResponse call() throws Exception
public DesktopResponse call() throws IOException
{
// Initialize / reset the list of file changes
fileChanges = new LinkedList<Pair<Integer,String>>();
@@ -156,10 +158,17 @@ public class CheckInOutDesktopAction extends DesktopAction {
}
catch (Exception ex)
{
// If this is a 'retryable' exception, pass it on
// Propagate retryable errors. Log the rest.
if (RetryingTransactionHelper.extractRetryCause(ex) != null)
{
throw ex;
if (ex instanceof RuntimeException)
{
throw (RuntimeException)ex;
}
else
{
throw new AlfrescoRuntimeException("Desktop action error", ex);
}
}
// Dump the error
@@ -229,10 +238,17 @@ public class CheckInOutDesktopAction extends DesktopAction {
}
catch (Exception ex)
{
// If this is a 'retryable' exception, pass it on
// Propagate retryable errors. Log the rest.
if (RetryingTransactionHelper.extractRetryCause(ex) != null)
{
throw ex;
if (ex instanceof RuntimeException)
{
throw (RuntimeException)ex;
}
else
{
throw new AlfrescoRuntimeException("Desktop action error", ex);
}
}
// Dump the error
@@ -269,7 +285,16 @@ public class CheckInOutDesktopAction extends DesktopAction {
// Process the transaction
WriteTxn callback = new WriteTxn();
DesktopResponse response = params.getDriver().doInWriteTransaction(params.getSession(), callback);
DesktopResponse response;
try
{
response = params.getDriver().doInWriteTransaction(params.getSession(), callback);
}
catch (IOException e)
{
// Should not happen
throw new AlfrescoRuntimeException("Desktop action error", e);
}
// Queue file change notifications
callback.notifyChanges();

View File

@@ -25,21 +25,21 @@ import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;
import java.util.concurrent.Callable;
import org.springframework.extensions.config.ConfigElement;
import org.alfresco.filesys.alfresco.AlfrescoContext;
import org.alfresco.filesys.alfresco.AlfrescoDiskDriver;
import org.alfresco.filesys.alfresco.DesktopAction;
import org.alfresco.filesys.alfresco.DesktopActionException;
import org.alfresco.filesys.alfresco.DesktopParams;
import org.alfresco.filesys.alfresco.DesktopResponse;
import org.alfresco.filesys.alfresco.AlfrescoDiskDriver.CallableIO;
import org.alfresco.jlan.server.filesys.DiskSharedDevice;
import org.alfresco.repo.transaction.RetryingTransactionHelper;
import org.alfresco.scripts.ScriptException;
import org.alfresco.service.cmr.repository.ScriptService;
import org.alfresco.util.ResourceFinder;
import org.springframework.core.io.Resource;
import org.springframework.extensions.config.ConfigElement;
/**
* Javascript Desktop Action Class
@@ -230,23 +230,19 @@ public class JavaScriptDesktopAction extends DesktopAction {
if ( hasWebappURL())
model.put("webURL", getWebappURL());
// Compute the response in a retryable write transaction
return params.getDriver().doInWriteTransaction(params.getSession(), new Callable<DesktopResponse>()
try
{
public DesktopResponse call() throws Exception
// Compute the response in a retryable write transaction
return params.getDriver().doInWriteTransaction(params.getSession(), new CallableIO<DesktopResponse>()
{
public DesktopResponse call() throws IOException
{
DesktopResponse response = new DesktopResponse(StsSuccess);
// Run the script
Object result = null;
try
{
// Run the script
result = scriptService.executeScriptString(getScript(), model);
Object result = scriptService.executeScriptString(getScript(), model);
// Check the result
@@ -294,17 +290,6 @@ public class JavaScriptDesktopAction extends DesktopAction {
response.setStatus(sts, msgToken != null ? msgToken : "");
}
}
}
catch (ScriptException ex)
{
if (RetryingTransactionHelper.extractRetryCause(ex) != null)
{
throw ex;
}
// Set the error response for the client
response.setStatus(StsError, ex.getMessage());
}
// Return the response
@@ -312,6 +297,15 @@ public class JavaScriptDesktopAction extends DesktopAction {
}
});
}
catch (ScriptException ex)
{
return new DesktopResponse(StsError, ex.getMessage());
}
catch (IOException ex)
{
return new DesktopResponse(StsError, ex.getMessage());
}
}
else
{
// Return an error response, script service not available

View File

@@ -32,6 +32,7 @@ import org.alfresco.repo.importer.ACPImportPackageHandler;
import org.alfresco.repo.importer.ImporterBootstrap;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork;
import org.alfresco.repo.transaction.RetryingTransactionHelper.RetryingTransactionCallback;
import org.alfresco.service.cmr.admin.PatchException;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.StoreRef;
@@ -217,7 +218,7 @@ public class ImapFoldersPatch extends AbstractPatch
if (imapConfigFolderNodeRef == null)
{
// import the content
RunAsWork<Object> importRunAs = new RunAsWork<Object>()
final RunAsWork<Object> importRunAs = new RunAsWork<Object>()
{
public Object doWork() throws Exception
{
@@ -227,7 +228,18 @@ public class ImapFoldersPatch extends AbstractPatch
return null;
}
};
RetryingTransactionCallback<Object> cb = new RetryingTransactionCallback<Object>()
{
public Object execute() throws Throwable
{
AuthenticationUtil.runAs(importRunAs, authenticationContext.getSystemUserName());
return null;
}
};
transactionService.getRetryingTransactionHelper().doInTransaction(cb, false, true);
msg = I18NUtil.getMessage(MSG_CREATED);
}
else

View File

@@ -268,34 +268,6 @@ public class AVMStoreImpl implements AVMStore
newChild.setAncestor(child);
parent.putChild(parentName[1], newChild);
}
// TODO This leaves the behavior of LayeredFiles not quite
// right.
/*
String parentName[] = AVMNodeConverter.SplitBase(entry.getPath());
parentName[0] = parentName[0].substring(parentName[0].indexOf(':') + 1);
lookup = lookupDirectory(-1, parentName[0], true);
DirectoryNode parent = (DirectoryNode)lookup.getCurrentNode();
AVMNode child = parent.lookupChild(lookup, parentName[1], false);
// TODO For debugging.
if (child == null)
{
System.err.println("Yoiks!");
}
// TODO This is funky. Need to look carefully to see that this call
// does exactly what's needed.
lookup.add(child, parentName[1], false);
AVMNode newChild = null;
if (child.getType() == AVMNodeType.LAYERED_DIRECTORY)
{
newChild = child.copy(lookup);
}
else
{
newChild = ((LayeredFileNode)child).copyLiterally(lookup);
}
parent.putChild(parentName[1], newChild);
*/
}
if (logger.isTraceEnabled())
@@ -313,7 +285,13 @@ public class AVMStoreImpl implements AVMStore
for (Long layeredID : allLayeredNodeIDs)
{
Layered layered = (Layered)AVMDAOs.Instance().fAVMNodeDAO.getByID(layeredID);
String indirection = layered.getIndirection();
String indirection = null;
if (layered != null)
{
indirection = layered.getIndirection();
}
if (indirection == null)
{
continue;

View File

@@ -205,6 +205,7 @@ public class SchemaBootstrap extends AbstractLifecycleBean
private int maximumStringLength;
private ThreadLocal<StringBuilder> executedStatementsThreadLocal = new ThreadLocal<StringBuilder>();
private File xmlPreSchemaOutputFile; // This must be set if there are any executed statements
public SchemaBootstrap()
{
@@ -912,6 +913,15 @@ public class SchemaBootstrap extends AbstractLifecycleBean
StringBuilder executedStatements = executedStatementsThreadLocal.get();
if (executedStatements == null)
{
// Dump the normalized, pre-upgrade Alfresco schema. We keep the file for later reporting.
xmlPreSchemaOutputFile = dumpSchema(
connection,
this.dialect,
TempFileProvider.createTempFile(
"AlfrescoSchema-" + this.dialect.getClass().getSimpleName() + "-",
"-Startup.xml").getPath(),
"Failed to dump normalized, pre-upgrade schema to file.");
// There is no lock at this stage. This process can fall out if the lock can't be applied.
setBootstrapStarted(connection);
executedStatements = new StringBuilder(8094);
@@ -1256,15 +1266,6 @@ public class SchemaBootstrap extends AbstractLifecycleBean
// Update the schema, if required.
if (updateSchema)
{
// Dump the normalized, pre-upgrade Alfresco schema. We keep the file for later reporting.
File xmlPreSchemaOutputFile = dumpSchema(
connection,
this.dialect,
TempFileProvider.createTempFile(
"AlfrescoSchema-" + this.dialect.getClass().getSimpleName() + "-",
"-Startup.xml").getPath(),
"Failed to dump normalized, pre-upgrade schema to file.");
// Retries are required here as the DB lock will be applied lazily upon first statement execution.
// So if the schema is up to date (no statements executed) then the LockFailException cannot be
// thrown. If it is thrown, the the update needs to be rerun as it will probably generate no SQL
@@ -1333,6 +1334,9 @@ public class SchemaBootstrap extends AbstractLifecycleBean
setBootstrapCompleted(connection);
}
// Report normalized dumps
if (executedStatements != null)
{
// Dump the normalized, post-upgrade Alfresco schema.
File xmlPostSchemaOutputFile = dumpSchema(
connection,
@@ -1342,7 +1346,6 @@ public class SchemaBootstrap extends AbstractLifecycleBean
".xml").getPath(),
"Failed to dump normalized, post-upgrade schema to file.");
// Report normalized dumps
if (createdSchema)
{
// This is a new schema
@@ -1351,7 +1354,7 @@ public class SchemaBootstrap extends AbstractLifecycleBean
LogUtil.info(logger, MSG_NORMALIZED_SCHEMA, xmlPostSchemaOutputFile.getPath());
}
}
else if (executedStatements != null)
else
{
// We upgraded, so have to report pre- and post- schema dumps
if (xmlPreSchemaOutputFile != null)
@@ -1364,6 +1367,7 @@ public class SchemaBootstrap extends AbstractLifecycleBean
}
}
}
}
else
{
LogUtil.info(logger, MSG_BYPASSING_SCHEMA_UPDATE);

View File

@@ -665,7 +665,8 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
return;
}
}
String gid = "GROUP_" + gidAttribute.get(0);
String groupShortName = gidAttribute.get(0).toString();
String gid = "GROUP_" + groupShortName;
NodeDescription group = lookup.get(gid);
if (group == null)
@@ -718,7 +719,7 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
.toAttributes();
// Recognize user DNs
if (distinguishedName.startsWith(userDistinguishedNamePrefix)
if (distinguishedNameForComparison.startsWith(userDistinguishedNamePrefix)
&& (nameAttribute = nameAttributes
.get(LDAPUserRegistry.this.userIdAttributeName)) != null)
{
@@ -727,7 +728,7 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
}
// Recognize group DNs
if (distinguishedName.startsWith(groupDistinguishedNamePrefix)
if (distinguishedNameForComparison.startsWith(groupDistinguishedNamePrefix)
&& (nameAttribute = nameAttributes
.get(LDAPUserRegistry.this.groupIdAttributeName)) != null)
{
@@ -801,20 +802,21 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
// Unresolvable name
if (LDAPUserRegistry.this.errorOnMissingMembers)
{
throw new AlfrescoRuntimeException("Failed to resolve distinguished name: "
+ attribute, e);
throw new AlfrescoRuntimeException("Failed to resolve member of group '"
+ groupShortName + "' with distinguished name: " + attribute, e);
}
LDAPUserRegistry.logger.warn("Failed to resolve distinguished name: "
+ attribute, e);
LDAPUserRegistry.logger.warn("Failed to resolve member of group '"
+ groupShortName + "' with distinguished name: " + attribute, e);
continue;
}
}
if (LDAPUserRegistry.this.errorOnMissingMembers)
{
throw new AlfrescoRuntimeException("Failed to resolve distinguished name: "
+ attribute);
throw new AlfrescoRuntimeException("Failed to resolve member of group '"
+ groupShortName + "' with distinguished name: " + attribute);
}
LDAPUserRegistry.logger.warn("Failed to resolve distinguished name: " + attribute);
LDAPUserRegistry.logger.warn("Failed to resolve member of group '" + groupShortName
+ "' with distinguished name: " + attribute);
}
catch (InvalidNameException e)
{
@@ -1076,6 +1078,8 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
* if there is a problem accessing the attribute values
*/
private boolean hasAttributeValue(Attribute attribute, String value) throws NamingException
{
if (attribute != null)
{
NamingEnumeration<?> values = attribute.getAll();
while (values.hasMore())
@@ -1092,6 +1096,7 @@ public class LDAPUserRegistry implements UserRegistry, LDAPNameResolver, Initial
// Not a string value. ignore and continue
}
}
}
return false;
}