Merged V3.3-BUG-FIX to HEAD

   21242: ALF-2879: XAM Connector changes
      - Fixed setting of store name
      - Added properties:
         - xam.archive.retentionPeriodDays=0
         - xam.archive.addLock=true
   21244: ALF-2879: Updated readme.txt
   21262: ALF-3611 - tweak AVM orphan reaper test (PurgeTestP)
   21277: Fix ALF-889 - WCM/AVM folder disappears when cut and pasted into itself
      - fixed cycle check before rename/move
      - added negative unit test
      - externalized existing cycle error messages
   21284: ALF-2879: XAM Connector changes
      - Listen to store selector policies (incl. workaround for policy listening)
      - Set XAM retention (the code sets the value, but this has not yet been successfully tested against the test servers)
   21285: StoreSelectorPolicies.OnContentMovedPolicy is deprecated. Use StoreSelectorPolicies.AfterMoveContentPolicy.
      - Deprecated, so the old policy still exists and works
      - May be removed in 3.4
   21293: Fix ALF-3245: stream not closed in DictionaryBootstrap.onDictionaryInit()
   21303: ALF-2879: XAM Connector changes
      - Throw ContentIOException if setBaseRetention fails
   21313: ALF-2879: XAM Connector changes
      - Round ms to nearest second
      - Log actual ms value being set for retention
   21322: Fix AVMNodeService.createNode to close output stream and avoid "Too many open files" (also add example AVMFileFolderPerformanceTester)
   21331: ALF-2879: XAM Connector changes
      - Removed duplicate setting of base retention
      - Left code hooks for setting of other metadata
   21368: Merged V3.3 to V3.3-BUG-FIX
      21213: Merged DEV/TEMPORARY to V3.3
         21200: ALF-2978: IMAP cannot bind to all the interfaces (0.0.0.0)
             The “imap.server.host” property can be used to set the IP address / network adapter to listen on for the IMAP protocol.
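             For example, to bind the IMAP server to all network interfaces, the property could be set in alfresco-global.properties (an assumed illustration; the value shown is not a shipped default):
                imap.server.host=0.0.0.0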
      21219: Merged PATCHES/V3.2.1 to V3.3
         21216: ALF-3779: A few bug fixes to --FOREACH handling in SchemaBootstrap
             - New system.upgrade.default.batchsize property to control the overall default batch size (see the example after this entry)
            - Added in a few more missed --FOREACH markers
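             - For example, the default batch size could be lowered in alfresco-global.properties (the value below is only an illustration, not a recommended setting):
                system.upgrade.default.batchsize=1000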
          21211: ALF-3779: Remaining scripts converted to use --FOREACH (as logs finally provided by testing prove that they need it too!)
         21210: (RECORD ONLY) Incremented version label
         21209: ALF-3779: Solution to allow batching of mass updates in upgrade scripts into smaller transactions
             - A special preceding comment in the following format specifies a numeric table column to control the batching and a global property that sets the batch size:
               --FOREACH table.column batch.size.property
            - If the property isn't specified in alfresco-global.properties, the default batch size is 10,000
             - INSERT / UPDATE / DELETE statements can then add extra conditions on the ${LOWERBOUND} and ${UPPERBOUND} variables, e.g.
               WHERE n.id >= ${LOWERBOUND} AND n.id <= ${UPPERBOUND}
             - The statements are substituted and executed for each batch range up to the maximum value of the column (an illustrative script sketch follows this entry)
            - 2.1 and 2.2 MySQL upgrades reimplemented this way
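             - Minimal sketch of such a script fragment (assumed example only: the table name, column, SET clause and batch-size property below are hypothetical, not copied from the shipped upgrade scripts):
                -- Hypothetical example: batch the UPDATE over alf_node.id
                --FOREACH alf_node.id system.upgrade.alf_node.batchsize
                UPDATE alf_node
                   SET audit_id = NULL
                 WHERE id >= ${LOWERBOUND} AND id <= ${UPPERBOUND};
             - With the default batch size of 10,000, the statement is executed for id ranges 0-9999, 10000-19999, and so on, up to MAX(id)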
         21207: Extra debug logging to track index triggering activity
      21295: Merged HEAD to V3.3
         21255: Parameter encoding
      21298: Merged V3.2 to V3.3
          21297: ALF-3889: JBPMDeployProcessServlet is now disabled by default and can be enabled with the following in alfresco-global.properties:
            system.workflow.deployservlet.enabled=true
      21317: dod5015: Parameter encoding
      21363: Merged PATCHES/V3.2.1 to V3.3
         21338: (RECORD ONLY) Incremented version label
         21335: ALF-3779: Correction to error in --FOREACH range restriction for UPDATE statement
         21290: ALF-3960: ArrayIndexOutOfBoundsException when we set mergerMergeFactor > mergerTargetOverlays
         21278: (RECORD ONLY) Merged PATCHES/V3.1.2 to PATCHES/V3.2.1
            21264: ALF-3889: JBPMDeployProcessServlet not accessible by default
               - Should only be enabled in development environment
      21364: Merged PATCHES/V3.1.2 to V3.3 (RECORD ONLY)
         21264: ALF-3889: JBPMDeployProcessServlet not accessible by default
            - Should only be enabled in development environment
      21365: Merged PATCHES/V3.2.0 to V3.3 (RECORD ONLY)
         21276: Merged PATCHES/V3.1.2 to PATCHES/V3.2.0
            21264: ALF-3889: JBPMDeployProcessServlet not accessible by default
               - Should only be enabled in development environment
      21366: Merged PATCHES/V3.2.r to V3.3 (RECORD ONLY)
         21279: Merged PATCHES/V3.1.2 to PATCHES/V3.2.r
            21264: ALF-3889: JBPMDeployProcessServlet not accessible by default
               - Should only be enabled in development environment
      21367: Merged PATCHES/V3.3.1 to V3.3 (RECORD ONLY)
         21343: Incremented version label
         21342: ALF-3997: Merged V3.3-BUG-FIX to PATCHES/V3.3.1
            20623: Fix for ALF-3188 : Access Denied when updating doc via CIFS
         21282: Merged PATCHES/V3.1.2 to PATCHES/V3.3.1
            21264: ALF-3889: JBPMDeployProcessServlet not accessible by default
               - Should only be enabled in development environment
         21239: Created hotfix branch off ENTERPRISE/V3.3.1


git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@21369 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
Author: Dave Ward
Date:   2010-07-22 18:20:24 +00:00
Parent: e74eb09456
Commit: a2580451b9
25 changed files with 561 additions and 213 deletions

@@ -98,6 +98,9 @@ public class SchemaBootstrap extends AbstractLifecycleBean
{
/** The placeholder for the configured <code>Dialect</code> class name: <b>${db.script.dialect}</b> */
private static final String PLACEHOLDER_SCRIPT_DIALECT = "\\$\\{db\\.script\\.dialect\\}";
/** The global property containing the default batch size used by --FOREACH */
private static final String PROPERTY_DEFAULT_BATCH_SIZE = "system.upgrade.default.batchsize";
private static final String MSG_DIALECT_USED = "schema.update.msg.dialect_used";
private static final String MSG_BYPASSING_SCHEMA_UPDATE = "schema.update.msg.bypassing";
@@ -203,6 +206,7 @@ public class SchemaBootstrap extends AbstractLifecycleBean
private int schemaUpdateLockRetryCount = DEFAULT_LOCK_RETRY_COUNT;
private int schemaUpdateLockRetryWaitSeconds = DEFAULT_LOCK_RETRY_WAIT_SECONDS;
private int maximumStringLength;
private Properties globalProperties;
private ThreadLocal<StringBuilder> executedStatementsThreadLocal = new ThreadLocal<StringBuilder>();
private File xmlPreSchemaOutputFile; // This must be set if there are any executed statements
@@ -215,6 +219,7 @@ public class SchemaBootstrap extends AbstractLifecycleBean
preUpdateScriptPatches = new ArrayList<SchemaUpgradeScriptPatch>(4);
postUpdateScriptPatches = new ArrayList<SchemaUpgradeScriptPatch>(4);
maximumStringLength = -1;
globalProperties = new Properties();
}
public void setLocalSessionFactory(LocalSessionFactoryBean localSessionFactory)
@@ -387,6 +392,18 @@ public class SchemaBootstrap extends AbstractLifecycleBean
ActionQueue.setMAX_EXECUTIONS_SIZE(hibernateMaxExecutions);
}
/**
* Sets the properties map from which we look up some configuration settings.
*
* @param globalProperties
* the global properties
*/
public void setGlobalProperties(Properties globalProperties)
{
this.globalProperties = globalProperties;
}
/**
* Helper method to generate a schema creation SQL script from the given Hibernate
* configuration.
@@ -968,6 +985,9 @@ public class SchemaBootstrap extends AbstractLifecycleBean
StringBuilder sb = new StringBuilder(1024);
String fetchVarName = null;
String fetchColumnName = null;
boolean doBatch = false;
int batchUpperLimit = 0;
int batchSize = 1;
Map<String, Object> varAssignments = new HashMap<String, Object>(13);
// Special variable assignments:
if (dialect instanceof PostgreSQLDialect)
@@ -1017,16 +1037,45 @@ public class SchemaBootstrap extends AbstractLifecycleBean
fetchVarName = assigns[0];
fetchColumnName = assigns[1];
continue;
}
// Handle looping control
else if (sql.startsWith("--FOREACH"))
{
// --FOREACH table.column batch.size.property
String[] args = sql.split("[ \\t]+");
int sepIndex;
if (args.length == 3 && (sepIndex = args[1].indexOf('.')) != -1)
{
doBatch = true;
// Select the upper bound of the table column
String stmt = "SELECT MAX(" + args[1].substring(sepIndex+1) + ") AS upper_limit FROM " + args[1].substring(0, sepIndex);
Object fetchedVal = executeStatement(connection, stmt, "upper_limit", false, line, scriptFile);
if (fetchedVal instanceof Number)
{
batchUpperLimit = ((Number)fetchedVal).intValue();
// Read the batch size from the named property
String batchSizeString = globalProperties.getProperty(args[2]);
// Fall back to the default property
if (batchSizeString == null)
{
batchSizeString = globalProperties.getProperty(PROPERTY_DEFAULT_BATCH_SIZE);
}
batchSize = batchSizeString == null ? 10000 : Integer.parseInt(batchSizeString);
}
}
continue;
}
// Allow transaction delineation
else if (sql.startsWith("--BEGIN TXN"))
{
connection.setAutoCommit(false);
continue;
}
else if (sql.startsWith("--END TXN"))
{
connection.commit();
connection.setAutoCommit(true);
continue;
}
// Check for comments
@@ -1084,34 +1133,49 @@ public class SchemaBootstrap extends AbstractLifecycleBean
// execute, if required
if (execute)
{
sql = sb.toString();
// Perform variable replacement using the ${var} format
for (Map.Entry<String, Object> entry : varAssignments.entrySet())
// Now substitute and execute the statement the appropriate number of times
String unsubstituted = sb.toString();
for(int lowerBound = 0; lowerBound <= batchUpperLimit; lowerBound += batchSize)
{
String var = entry.getKey();
Object val = entry.getValue();
sql = sql.replaceAll("\\$\\{" + var + "\\}", val.toString());
}
// Handle the 0/1 values that PostgreSQL doesn't translate to TRUE
if (this.dialect != null && this.dialect instanceof PostgreSQLDialect)
{
sql = sql.replaceAll("\\$\\{TRUE\\}", "TRUE");
}
else
{
sql = sql.replaceAll("\\$\\{TRUE\\}", "1");
}
Object fetchedVal = executeStatement(connection, sql, fetchColumnName, optional, line, scriptFile);
if (fetchVarName != null && fetchColumnName != null)
{
varAssignments.put(fetchVarName, fetchedVal);
}
sb = new StringBuilder(1024);
sql = unsubstituted;
// Substitute in the next pair of range parameters
if (doBatch)
{
varAssignments.put("LOWERBOUND", String.valueOf(lowerBound));
varAssignments.put("UPPERBOUND", String.valueOf(lowerBound + batchSize - 1));
}
// Perform variable replacement using the ${var} format
for (Map.Entry<String, Object> entry : varAssignments.entrySet())
{
String var = entry.getKey();
Object val = entry.getValue();
sql = sql.replaceAll("\\$\\{" + var + "\\}", val.toString());
}
// Handle the 0/1 values that PostgreSQL doesn't translate to TRUE
if (this.dialect != null && this.dialect instanceof PostgreSQLDialect)
{
sql = sql.replaceAll("\\$\\{TRUE\\}", "TRUE");
}
else
{
sql = sql.replaceAll("\\$\\{TRUE\\}", "1");
}
Object fetchedVal = executeStatement(connection, sql, fetchColumnName, optional, line, scriptFile);
if (fetchVarName != null && fetchColumnName != null)
{
varAssignments.put(fetchVarName, fetchedVal);
}
}
sb.setLength(0);
fetchVarName = null;
fetchColumnName = null;
doBatch = false;
batchUpperLimit = 0;
batchSize = 1;
}
}
}