Dave Ward d437d5105d Merged V4.0-BUG-FIX to HEAD
36311: BDE-69: filter long tests if minimal.testing property is defined
   36314: Merged V4.0 to V4.0-BUG-FIX (RECORD ONLY)
      36247: ALF-11027: temporarily remove import of maven.xml, since it makes ant calls fail from enterpriseprojects
   36331: ALF-12447: Further changes required to fix lower case meta-inf folder name
   36333: Revert ALF-12447.
   36334: ALF-14115: Merged V3.4-BUG-FIX to V4.0-BUG-FIX
      36318: ALF-12447: Fix case on META-INF folder for SDK
      36332: ALF-12447: Further changes required to fix lower case meta-inf folder name
   36337: ALF-14115: Merged V3.4-BUG-FIX to V4.0-BUG-FIX
      36332: ALF-12447: Yet more meta-inf case changes needed.
   36342: ALF-14120: fix only completed tasks returned
   36343: ALF-13898: starting a workflow from IMAP now uses workflowDefs with the engine name included, falling back to appending $jbpm when not present, to preserve backwards compatibility.
   36345: Fix for ALF-12730 - Email Space Users fails if template is used
   36346: Fix for ALF-9466 - We can search contents sorted by categories in Advanced search in Share, but saved search will not be shown in UI.
   36364: Switch version to 4.0.3
   36375: Merged BRANCHES/DEV/CLOUDSYNCLOCAL2 to BRANCHES/DEV/V4.0-BUG-FIX:
      36366: Tweak to implementation to ensure that on-authentication-failed, the status is updated within a r/w transaction.
      36374: Provide more specific exceptions from the Remote Connector Service for client and server errors
   36376: Fix ALF-14121 - Alfresco fails to start if using "replicating-content-services-context.xml"
   36393: Final part of ALF-13723 SOLR does not include the same query unit tests as lucene
   - CMIS typed query and ordering tests
   36432: ALF-14133: Merged V3.4-BUG-FIX (3.4.10) to V4.0-BUG-FIX (4.0.3)
      << 4.0.x specific change: Changed transformer.complex.OOXML.Image into transformer.complex.Any.Image >>
      << allowing any transformer to be selected for the conversion to JPEG >>
      36427: ALF-14131 Complex transformers fail if a lower level transformer fails even though there is another transformer that could do the transformation
         - Added a base spring bean for all complex transformers
      36362: ALF-14131 Complex transformers fail if a lower level transformer fails even though there is another transformer that could do the transformation
   36434: Test fix for ALF-13723 SOLR does not include the same query unit tests as lucene
   - CMIS test data change broke AFTS ID ordering
   36503: Removed thousands of compiler warnings (CMIS query test code)
   36518: Fix for ALF-13778 - Links on Share Repository search page show incorrect link name; do not work when root-node is defined.
   Fix now means that Share search correctly handles overridden Repository root node setting. Original work by Vasily Olhin.
   36520: BDE-69: filter all repo tests if minimal.testing property is defined
   36534: ALF-14116: Latest Surf libs (r1075) - ensure that i18n extensions can process browser sent short locales
   36563: Merged V3.4-BUG-FIX to V4.0-BUG-FIX
      36336: ALF-12447: Yet more meta-inf case changes needed.
      36347: Fix for ALF-13920 - Error occurred when try to edit/delete category
      36352: Fix for ALF-13123 - Invalid JSON format from Get Node Tags Webscript - strings not double-quoted. Also fixed POST webscript with same issue.
      36399: ALL LANG: translation updates based on EN r36392
      36421: Fix for Mac Lion versioning issue. ALF-12792 (Part 1 of 2)
      Enable the InfoPassthru and Level2Oplocks server capability flags; InfoPassthru is the flag that fixes the Mac Lion versioning error.
      Added support in the CIFS transact rename processing for filesystems that do not implement the NTFS streams interface, for the Alfresco repo filesystem.
      36422: Fix for Mac Lion versioning issue. ALF-12792 (Part 2 of 2)
      Enable the InfoPassthru and Level2Oplocks server capability flags; InfoPassthru is the flag that fixes the Mac Lion versioning error.
      36423: Add support for file size tracking in the file state. ALF-13616 (Part 1 of 2)
      36424: Fix for Mac MS Word file save issue. ALF-13616 (Part 2 of 2)
      Added live file size tracking to file writing/folder searches so the correct file size is returned before the file is closed.
      36444: Merged DEV to V3.4-BUG-FIX
         36419: ALF-12666 Search against simple-search-additional-attributes doesn't work properly
            SearchContext.buildQuery(int) method was changed.
      36446: Fix for ALF-13404 - Performance: 'Content I'm Editing' dashlet is slow to render when there is lots of data/sites
       - Effectively removed all PATH based queries using the pattern /companyhome/sites/*/container//* as they are a non-optimized case
       - Replaced the "all sites" doclist query using the above pattern with /companyhome/sites//* plus post query resultset processing based on documentLibrary container matching regex
       - Optimized favorite document query to remove need for a PATH
       - Optimized Content I'm Editing discussion PATH query to use /*/* instead of /*//*
       - Fixed issue where Content I'm Editing discussion results would not always show the root topics that a user has edited
       - Added some additional doclist.get.js query scriptlogger debugging output
      36449: ALF-13404 - Fix for issue where favorites for all sites would be shown in each site document library in the My Favorites filter.
      36475: ALF-14131 Complex transformers fail if a lower level transformer fails even though there is another transformer that could do the transformation
         - Change base spring bean on example config file
      36480: 36453: ALF-3881 : ldap sync deletion behaviour not flexible enough
         - synchronization.allowDeletions parameter introduced
         - default value is true (existing behaviour)
         - when false, no missing users or groups are deleted from the repository
         - instead they are cleared of their zones and missing groups are cleared of all their members
         - colliding users and groups from different zones are also 'moved' rather than recreated
         - unit test added
      36491: Added CIFS transact2 NT passthru levels for set end of file/set allocation size. ALF-13616.
      Also updated FileInfoLevel with the latest list of NT passthru information levels.
      36497: Fixed ALF-14163: JavaScript Behaviour broken: Node properties cannot be cast to java.io.Serializable
       - Fallout from ALF-12855
       - Made class Serializable (like HashMap would have been)
       - Fixed line endings, too
      36531: ALF-13769: Merged BELARUS/V3.4-BUG-FIX-2012_04_05 to V3.4-BUG-FIX (3.4.10)
         35150: ALF-2645 : 3.2+ ldap sync debug information is too scarce 
            - Improved LDAP logging.
      36532: ALF-13769: BRANCHES/DEV/BELARUS/V3.4-BUG-FIX-2012_01_26 to V3.4-BUG-FIX (3.4.10)
         36461: ALF-237: WCM: File conflicts cause file order not to be consistent
            - It is reasonable to set the checkbox values using the indexes from the list, since these do not change. So when the window is submitted, the getSelectedNodes
              method is invoked and it looks up the selected nodes by their checkbox values in the "paths" list.
      36535: Merged DEV to V3.4-BUG-FIX
         36479: ALF-8918 : Cannot "edit offline" a web quick start publication
            A check in TaggableAspect.onUpdatePropertiesOnCommit() was extended to skip the update, if no tags were changed.
      36555: Merged V3.4 to V3.4-BUG-FIX
         36294: ALF-14039: Merged HEAD to V3.4
            31732: ALF-10934: Prevent potential start/stop ping-pong of subsystems across a cluster
               - When a cluster boots up or receives a reinit message it shouldn't be sending out any start messages
   36566: Merged V3.4-BUG-FIX to V4.0-BUG-FIX (RECORD ONLY)
      36172: Merged BRANCHES/DEV/V4.0-BUG-FIX to BRANCHES/DEV/V3.4-BUG-FIX:
         36169: ALF-8755: After renaming content / space by Contributor via WebDAV new items are created
   36572: Merged V4.0 to V4.0-BUG-FIX
      36388: ALF-14025: Updated Surf libs (1071). Fixes to checksum-disabled dependency handling
      36392: ALF-14129 Failed to do upgrade from 3.4.8 to 4.0.2
         << Committed change for Frederik Heremans >>
         - Moved actual activiti-tables creation to before the upgrade
      36409: Fix for ALF-14124 Solr is not working - Errors occur during the startup
      36466: Fix for ALF-12770 - Infinite loop popup alert in TinyMCE after XSS injection in Alfresco Explorer online edit.
      36501: Merged DEV to V4.0
         36496: ALF-14063 : CLONE - Internet Explorer hangs when using the object picker with a larger number of documents
            YUI 2.9.0 library was modified to use chunked unloading of listeners via a series of setTimeout() functions in event.js for IE 6,7,8.
      36502: ALF-14105: Share Advanced search issue with the form values
      - Fix by David We
      36538: ALF-13986: Updated web.xml and index.jsp redirect to ensure that SSO works with proper surf site-configuration customization
      36539: Fix for ALF-14167 Filtering by Tags/Categories doesn't find any content in Repository/DocumentLibrary
      - fix default namespace back to "" -> "" and fix the specific SOLR tests that require otherwise.
      36541: ALF-14082: Input stream leaks in thumbnail rendering webscripts
      36560: Correctly size content length header after HTML stripping process (ALF-9365)
   36574: Merged V4.0 to V4.0-BUG-FIX (RECORD ONLY)
      36316: Merged V4.0-BUG-FIX to V4.0 (4.0.2)
      36391: Merged V4.0-BUG-FIX to V4.0
         36376: Fix ALF-14121 - Alfresco fails to start if using "replicating-content-services-context.xml"


git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@36576 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
2012-05-18 17:00:53 +00:00


/*
* Copyright (C) 2005-2010 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.filesys.repo;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.jlan.server.SrvSession;
import org.alfresco.jlan.server.filesys.AccessDeniedException;
import org.alfresco.jlan.server.filesys.DiskFullException;
import org.alfresco.jlan.server.filesys.FileAttribute;
import org.alfresco.jlan.server.filesys.FileInfo;
import org.alfresco.jlan.server.filesys.FileOpenParams;
import org.alfresco.jlan.server.filesys.NetworkFile;
import org.alfresco.jlan.smb.SeekType;
import org.alfresco.jlan.smb.server.SMBSrvSession;
import org.alfresco.model.ContentModel;
import org.alfresco.repo.content.AbstractContentReader;
import org.alfresco.repo.content.MimetypeMap;
import org.alfresco.repo.content.filestore.FileContentReader;
import org.alfresco.repo.security.authentication.AuthenticationUtil;
import org.alfresco.repo.security.authentication.AuthenticationUtil.RunAsWork;
import org.alfresco.repo.transaction.AlfrescoTransactionSupport;
import org.alfresco.repo.transaction.TransactionListenerAdapter;
import org.alfresco.service.cmr.repository.ContentAccessor;
import org.alfresco.service.cmr.repository.ContentData;
import org.alfresco.service.cmr.repository.ContentIOException;
import org.alfresco.service.cmr.repository.ContentReader;
import org.alfresco.service.cmr.repository.ContentService;
import org.alfresco.service.cmr.repository.ContentWriter;
import org.alfresco.service.cmr.repository.MimetypeService;
import org.alfresco.service.cmr.repository.NodeRef;
import org.alfresco.service.cmr.repository.NodeService;
import org.alfresco.service.cmr.usage.ContentQuotaException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.extensions.surf.util.I18NUtil;
/**
* Implementation of the <tt>NetworkFile</tt> for direct interaction
* with the channel repository.
* <p>
* This provides the interaction with the Alfresco Content Model file/folder structure.
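* <p>
* Updates to the underlying channel and content accessor are serialized on the file instance itself.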
*
* @author Derek Hulley
*/
public class ContentNetworkFile extends NodeRefNetworkFile
{
private static final Log logger = LogFactory.getLog(ContentNetworkFile.class);
// Services
private NodeService nodeService;
private ContentService contentService;
private MimetypeService mimetypeService;
private FileChannel channel; // File channel to file content
private ContentAccessor content; // content
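// Content URL of the node's content before a writable channel was opened; used in closeFile() to detect whether the binary actually changed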
private String preUpdateContentURL;
// Indicate if file has been written to or truncated/resized
private boolean modified;
// Flag to indicate if the file channel is writable
private boolean writableChannel;
/**
* Helper method to create a {@link NetworkFile network file} given a node reference.
*/
public static ContentNetworkFile createFile( NodeService nodeService, ContentService contentService, MimetypeService mimetypeService,
CifsHelper cifsHelper, NodeRef nodeRef, String path, boolean readOnly, boolean attributesOnly, SrvSession sess)
{
// Create the file
ContentNetworkFile netFile = null;
if ( isMSOfficeSpecialFile(path, sess, nodeService, nodeRef)) {
// Create a file for special processing for Excel
netFile = new MSOfficeContentNetworkFile( nodeService, contentService, mimetypeService, nodeRef, path);
}
else if ( isOpenOfficeSpecialFile( path, sess, nodeService, nodeRef)) {
// Create a file for special processing
netFile = new OpenOfficeContentNetworkFile( nodeService, contentService, mimetypeService, nodeRef, path);
}
else {
// Create a normal content file
netFile = new ContentNetworkFile(nodeService, contentService, mimetypeService, nodeRef, path);
}
// Set relevant parameters
if (attributesOnly) {
netFile.setGrantedAccess( NetworkFile.ATTRIBUTESONLY);
}
else if (readOnly) {
netFile.setGrantedAccess(NetworkFile.READONLY);
}
else {
netFile.setGrantedAccess(NetworkFile.READWRITE);
}
// Check the type
FileInfo fileInfo;
try
{
fileInfo = cifsHelper.getFileInformation(nodeRef, "", false, false);
}
catch (FileNotFoundException e)
{
throw new AlfrescoRuntimeException("File not found when creating network file: " + nodeRef, e);
}
if (fileInfo.isDirectory())
{
netFile.setAttributes(FileAttribute.Directory);
}
else
{
// Set the current size
netFile.setFileSize(fileInfo.getSize());
}
// Set the file timestamps
if ( fileInfo.hasCreationDateTime())
netFile.setCreationDate( fileInfo.getCreationDateTime());
if ( fileInfo.hasModifyDateTime() && fileInfo.getModifyDateTime() > 0L)
netFile.setModifyDate(fileInfo.getModifyDateTime());
else
netFile.setModifyDate(fileInfo.getCreationDateTime());
if ( fileInfo.hasAccessDateTime() && fileInfo.getAccessDateTime() > 0L)
netFile.setAccessDate(fileInfo.getAccessDateTime());
else
netFile.setAccessDate(fileInfo.getCreationDateTime());
// Set the file attributes
netFile.setAttributes(fileInfo.getFileAttributes());
// Set the owner process id
//
//netFile.setProcessId( params.getProcessId());
// If the file is read-only then only allow read access
if ( netFile.isReadOnly())
netFile.setGrantedAccess(NetworkFile.READONLY);
// DEBUG
if (logger.isDebugEnabled())
logger.debug("Create file node=" + nodeRef + ", path=" + path + ", netfile=" + netFile);
// Return the network file
return netFile;
}
/**
* Class constructor
*
* @param nodeService NodeService
* @param contentService ContentService
* @param mimetypeService MimetypeService
* @param nodeRef NodeRef
* @param name String
*/
protected ContentNetworkFile(
NodeService nodeService,
ContentService contentService,
MimetypeService mimetypeService,
NodeRef nodeRef,
String name)
{
super(name, nodeRef);
setFullName(name);
this.nodeService = nodeService;
this.contentService = contentService;
this.mimetypeService = mimetypeService;
}
/**
* Return the file details as a string
*
* @return String
*/
public String toString()
{
StringBuilder str = new StringBuilder();
str.append( "[");
str.append(getFullName());
str.append(",");
str.append( getNodeRef().getId());
str.append( ",channel=");
str.append( channel);
if ( channel != null)
str.append( writableChannel ? "(Write)" : "(Read)");
if ( modified)
str.append( ",modified");
str.append( "]");
return str.toString();
}
/**
* @return Returns true if the channel should be writable
*
* @see NetworkFile#getGrantedAccess()
* @see NetworkFile#READONLY
* @see NetworkFile#WRITEONLY
* @see NetworkFile#READWRITE
*/
private boolean isWritable()
{
// Check that we are allowed to write
int access = getGrantedAccess();
return (access == NetworkFile.READWRITE || access == NetworkFile.WRITEONLY);
}
/**
* Determine if the file content data has been opened
*
* @return boolean
*/
public final boolean hasContent()
{
return content != null;
}
/**
* Opens the channel for reading or writing depending on the access mode.
* <p>
* Side effect: sets fileSize
* <p>
* If the channel is already open, it is left.
*
* @param write true if the channel must be writable
* @param trunc true if the writable channel does not require the previous content data
* @throws AccessDeniedException if this network file is read only
* @throws AlfrescoRuntimeException if this network file represents a directory
*
* @see NetworkFile#getGrantedAccess()
* @see NetworkFile#READONLY
* @see NetworkFile#WRITEONLY
* @see NetworkFile#READWRITE
*/
public void openContent(boolean write, boolean trunc)
throws AccessDeniedException, AlfrescoRuntimeException
{
// Check if the file is a directory
if (isDirectory())
{
throw new AlfrescoRuntimeException("Unable to open content for a directory network file: " + this);
}
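// An existing channel, whether read-only or writable, can satisfy a read-only request as-is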
if(channel != null && !write)
{
if(logger.isDebugEnabled())
{
logger.debug("channel is already open for read-only");
}
return;
}
// Check if write access is required and the current channel is read-only
// Updates to the channel and content member variables need to be serialized
synchronized(this)
{
if ( write && writableChannel == false && channel != null)
{
// Close the existing read-only channel
try
{
channel.close();
channel = null;
content = null;
}
catch (IOException ex)
{
logger.error("Error closing read-only channel", ex);
}
// Debug
if ( logger.isDebugEnabled())
{
logger.debug("Switching to writable channel for " + getName());
}
}
else if (channel != null)
{
// Already have read/write channel open
return;
}
// We need to create the channel
if (write && !isWritable())
{
throw new AccessDeniedException("The network file was created for read-only: " + this);
}
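// Forget any previously captured content URL; it is re-captured below when a writer is opened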
preUpdateContentURL = null;
// Need to open content for write
if (write)
{
// Get a writeable channel to the content, along with the original content
if(logger.isDebugEnabled())
{
logger.debug("open writer for content property");
}
content = contentService.getWriter( getNodeRef(), ContentModel.PROP_CONTENT, false);
// Keep the original content for later comparison
ContentData preUpdateContentData = (ContentData) nodeService.getProperty( getNodeRef(), ContentModel.PROP_CONTENT);
if (preUpdateContentData != null)
{
preUpdateContentURL = preUpdateContentData.getContentUrl();
}
// Indicate that we have a writable channel to the file
writableChannel = true;
// Get the writable channel, do not copy existing content data if the file is to be truncated
channel = ((ContentWriter) content).getFileChannel( trunc);
}
else
{
// Get a read-only channel to the content
if(logger.isDebugEnabled())
{
logger.debug("open reader for content property");
}
content = contentService.getReader( getNodeRef(), ContentModel.PROP_CONTENT);
// Ensure that the content we are going to read is valid
content = FileContentReader.getSafeContentReader(
(ContentReader) content,
I18NUtil.getMessage(FileContentReader.MSG_MISSING_CONTENT),
getNodeRef(), content);
// Indicate that we only have a read-only channel to the data
writableChannel = false;
// Get the read-only channel
channel = ((ContentReader) content).getFileChannel();
}
// Update the current file size
if ( channel != null)
{
try
{
setFileSize(channel.size());
}
catch (IOException ex)
{
logger.error("Failed to get the channel size", ex);
}
// Indicate that the file is open
setClosed( false);
}
} // release lock
}
/**
* Close the file
*
* @exception IOException
*/
public void closeFile()
throws IOException
{
// Check if this is a directory
if(logger.isDebugEnabled())
{
logger.debug("closeFile");
}
if (isDirectory())
{
// Nothing to do
if(logger.isDebugEnabled())
{
logger.debug("file is a directory - nothing to do");
}
setClosed( true);
return;
}
else if (!hasContent()) {
// File was not read/written so content was not opened
if(logger.isDebugEnabled())
{
logger.debug("no content to write - nothing to do");
}
setClosed( true);
return;
}
// Check if the file has been modified
// Updates to the channel and content member variables need to be serialized
synchronized(this)
{
if (modified)
{
if(logger.isDebugEnabled())
{
logger.debug("content has been modified");
}
NodeRef contentNodeRef = getNodeRef();
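// The content accessor must be a writer here: the modified flag is only set after a writable channel has been opened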
ContentWriter writer = (ContentWriter)content;
// We may be in a retry block, in which case this section will already have executed and channel will be null
if (channel != null)
{
// Close the channel
channel.close();
channel = null;
}
// Guess the mimetype if it has not been set or is the generic binary type
if (content.getMimetype() == null || content.getMimetype().equals(MimetypeMap.MIMETYPE_BINARY) )
{
String filename = (String) nodeService.getProperty(contentNodeRef, ContentModel.PROP_NAME);
writer.guessMimetype(filename);
}
// Always guess the encoding
writer.guessEncoding();
// Retrieve the content data and stop the content URL from being 'eagerly deleted', in case we need to
// retry the transaction
final ContentData contentData = content.getContentData();
// Update node properties, but only if the binary has changed (ETHREEOH-1861)
ContentReader postUpdateContentReader = writer.getReader();
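// Fetch a reader for the pre-update content URL, running as the system user, so it can be compared with the newly written content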
RunAsWork<ContentReader> getReader = new RunAsWork<ContentReader>()
{
public ContentReader doWork() throws Exception
{
return preUpdateContentURL == null ? null : contentService.getRawReader(preUpdateContentURL);
}
};
ContentReader preUpdateContentReader = AuthenticationUtil.runAs(getReader, AuthenticationUtil.getSystemUserName());
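// The content is considered changed if there was no pre-update content or the old and new binaries differ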
boolean contentChanged = preUpdateContentURL == null
|| !AbstractContentReader.compareContentReaders(preUpdateContentReader,
postUpdateContentReader);
if (contentChanged)
{
if(logger.isDebugEnabled())
{
logger.debug("content has changed - remove ASPECT_NO_CONTENT");
}
nodeService.removeAspect(contentNodeRef, ContentModel.ASPECT_NO_CONTENT);
try
{
nodeService.setProperty( contentNodeRef, ContentModel.PROP_CONTENT, contentData);
}
catch (ContentQuotaException qe)
{
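// Report the quota violation back to the file server client as a disk-full condition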
content = null;
setClosed( true);
throw new DiskFullException(qe.getMessage());
}
}
// Tidy up after ourselves after a successful commit. Otherwise leave things to allow a retry.
AlfrescoTransactionSupport.bindListener(new TransactionListenerAdapter()
{
@Override
public void afterCommit()
{
synchronized(ContentNetworkFile.this)
{
if(channel == null)
{
content = null;
preUpdateContentURL = null;
setClosed( true);
}
}
}
});
}
else if (channel != null)
{
// Close it - it was not modified
if(logger.isDebugEnabled())
{
logger.debug("content not modified - simply close the channel");
}
channel.close();
channel = null;
content = null;
setClosed(true);
}
}
}
/**
* Truncate or extend the file to the specified length
*
* @param size long
* @exception IOException
*/
public void truncateFile(long size)
throws IOException
{
logger.debug("truncate file");
try
{
// If the content data channel has not been opened yet and the requested size is zero
// then this is an open for overwrite so the existing content data is not copied
if ( hasContent() == false && size == 0L)
{
// Open content for overwrite, no need to copy existing content data
openContent(true, true);
}
else
{
// Normal open for write
openContent(true, false);
// Truncate or extend the channel
channel.truncate(size);
}
}
catch ( ContentIOException ex) {
// DEBUG
if ( logger.isDebugEnabled())
logger.debug("Error opening file " + getFullName() + " for write", ex);
// Convert to a file server I/O error
throw new DiskFullException("Failed to open " + getFullName() + " for write");
}
// Set modification flag
modified = true;
// Set the new file size
setFileSize( size);
// Update the modification date/time
if ( getFileState() != null)
getFileState().updateModifyDateTime();
// DEBUG
if (logger.isDebugEnabled())
logger.debug("Truncate file=" + this + ", size=" + size);
}
/**
* Write a block of data to the file.
*
* @param buffer byte[]
* @param length int
* @param position int
* @param fileOffset long
* @exception IOException
*/
public void writeFile(byte[] buffer, int length, int position, long fileOffset)
throws IOException
{
try
{
// Open the channel for writing
openContent(true, false);
}
catch ( ContentIOException ex) {
// DEBUG
if ( logger.isDebugEnabled())
logger.debug("Error opening file " + getFullName() + " for write", ex);
// Convert to a file server I/O error
throw new DiskFullException("Failed to open " + getFullName() + " for write");
}
// Write to the channel
ByteBuffer byteBuffer = ByteBuffer.wrap(buffer, position, length);
int count = channel.write(byteBuffer, fileOffset);
// Set modification flag
modified = true;
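// Update the write count for this network file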
incrementWriteCount();
// Update the current file size
setFileSize(channel.size());
// Update the modification date/time and live file size
if ( getFileState() != null) {
getFileState().updateModifyDateTime();
getFileState().setFileSize( getFileSize());
}
// DEBUG
if (logger.isDebugEnabled())
logger.debug("Write file=" + this + ", size=" + count);
}
/**
* Read from the file.
*
* @param buffer byte[]
* @param length int
* @param position int
* @param fileOffset long
* @return Length of data read.
* @exception IOException
*/
public int readFile(byte[] buffer, int length, int position, long fileOffset)
throws IOException
{
// Open the channel for reading
openContent(false, false);
// Read from the channel
ByteBuffer byteBuffer = ByteBuffer.wrap(buffer, position, length);
int count = channel.read(byteBuffer, fileOffset);
if (count < 0)
{
count = 0; // the channel returns -1 at end of file, but callers expect the number of bytes read
}
// Update the access date/time
if ( getFileState() != null)
getFileState().updateAccessDateTime();
// DEBUG
if (logger.isDebugEnabled())
logger.debug("Read file=" + this + " read=" + count);
// Return the actual count of bytes read
return count;
}
/**
* Open the file
*
* @param createFlag boolean
* @exception IOException
*/
@Override
public void openFile(boolean createFlag)
throws IOException
{
// Wait for read/write before opening the content channel
}
/**
* Seek to a new position in the file
*
* @param pos long
* @param typ int
* @return long
*/
@Override
public long seekFile(long pos, int typ)
throws IOException
{
// Open the file, if not already open
openContent( false, false);
// Check if the current file position is the required file position
long curPos = channel.position();
switch (typ) {
// From start of file
case SeekType.StartOfFile :
if (curPos != pos)
channel.position( pos);
break;
// From current position
case SeekType.CurrentPos :
channel.position( curPos + pos);
break;
// From end of file
case SeekType.EndOfFile :
{
long newPos = channel.size() + pos;
channel.position(newPos);
}
break;
}
// Update the access date/time
if ( getFileState() != null)
getFileState().updateAccessDateTime();
// DEBUG
if (logger.isDebugEnabled())
logger.debug("Seek file=" + this + ", pos=" + pos + ", type=" + typ);
// Return the new file position
return channel.position();
}
/**
* Flush any buffered data for this file
*
* @exception IOException
*/
@Override
public void flushFile()
throws IOException
{
// Open the channel for writing
openContent(true, false);
// Flush the channel - metadata flushing is not important
channel.force(false);
// Update the access date/time
if ( getFileState() != null)
getFileState().updateAccessDateTime();
// DEBUG
if (logger.isDebugEnabled())
logger.debug("Flush file=" + this);
}
/**
* Return the modified status
*
* @return boolean
*/
public final boolean isModified() {
return modified;
}
/**
* Check if the file is an MS Office document type that needs special processing
*
* @param path String
* @param sess SrvSession
* @param nodeService NodeService
* @param nodeRef NodeRef
* @return boolean
*/
private static final boolean isMSOfficeSpecialFile( String path, SrvSession sess, NodeService nodeService, NodeRef nodeRef) {
// Check if the file extension indicates a problem MS Office format
path = path.toLowerCase();
if ( path.endsWith( ".xls") && sess instanceof SMBSrvSession) {
// Check if the file is versionable
if ( nodeService.hasAspect( nodeRef, ContentModel.ASPECT_VERSIONABLE))
return true;
}
return false;
}
/**
* Check if the file is an OpenOffice document type that needs special processing
*
* @param path String
* @param sess SrvSession
* @param nodeService NodeService
* @param nodeRef NodeRef
* @return boolean
*/
private static final boolean isOpenOfficeSpecialFile( String path, SrvSession sess, NodeService nodeService, NodeRef nodeRef) {
// Check if the file extension indicates a problem OpenOffice format
path = path.toLowerCase();
if ( path.endsWith( ".odt") && sess instanceof SMBSrvSession) {
// Check if the file is versionable
if ( nodeService.hasAspect( nodeRef, ContentModel.ASPECT_VERSIONABLE))
return true;
}
return false;
}
}