Merged HEAD-BUG-FIX (4.3/Cloud) to HEAD (5.0/Cloud)

71594: Merged V4.2-BUG-FIX (4.2.3) to HEAD-BUG-FIX (4.3/Cloud)
      70332: Merged V4.1-BUG-FIX (4.1.9) to V4.2-BUG-FIX (4.2.3)
         70292: Merged DEV to V4.1-BUG-FIX (4.1.9)
            60077: Created branch for MNT-10067 work.
            60101: MNT-10067: first commit of a generic SQL script runner.
            Very rough and ready extraction of SchemaBootstrap SQL script execution functionality. Will serve as the basis for the implementation of MNT-10067.
            60147: MNT-10067: added tests for "dialect" placeholder resolution, including overriding of dialects.
            60180: MNT-10067: exception thrown when unable to find SQL script to execute
            60187: MNT-10067: renamed ScriptExecutor to ScriptExecutorImpl to make way for an interface definition.
            60188: MNT-10067: introduced a ScriptExecutor interface.
            60189: MNT-10067: renamed ScriptExecutorTest
            60190: MNT-10067: added ScriptExecutorImplIntegrationTest to repo test suite.
            60194: MNT-10067: a very simple initial implementation of a SQL script runner capable of running multiple scripts in a given directory.
            60195: MNT-10067: added integration test for ScriptBundleExecutorImpl.
            60196: MNT-10067: moved ScriptBundleExecutorImplTest to correct source tree.
            60197: MNT-10067: added ScriptBundleExecutorImplIntegrationTest to repo test suite.
            60263: MNT-10067: ScriptBundleExecutor(Impl) now stops executing the main batch of scripts upon failure and runs a post-script.
            60459: MNT-10067: minor change to test data to avoid implying that ScriptBundleExecutor.exec(String, String...) has an always-run final script.
            60482: MNT-10067: added integration test for ScriptBundleExecutor.execWithPostScript()
            60483: MNT-10067: committed missing files from r60482
            60488: MNT-10067: set appropriate log levels for log4j
            60620: MNT-10067: added alf_props_xxx clean-up script.
            60623: MNT-10067: minor tidy up of ScriptExecutorImpl (tidy imports, group fields at top of class)
            60625: MNT-10067: further tidy up ScriptExecutorImpl (removed redundant constants, made externally unused public constant private)
            60629: MNT-10067: fix tests broken by introduction of scriptExecutor bean in production code.
            60662: MNT-10067: added tests to check deletion of doubles, serializables and dates.
            61378: MNT-10067 : Cleanup alf_prop_XXX data
            Added MySQL, Oracle DB, MS SQL Server and IBM DB2 scripts.
            63371: MNT-10067: removed the vacuum and analyze statements from the postgresql script.
            63372: MNT-10067: replaced begin and commit statements (PostgreSQL script only) with --BEGIN TXN and --END TXN, handled by the script executor.
            63568: MNT-10067 : Cleanup alf_prop_XXX data
            Added start and end transaction marks to the scripts.
            64115: MNT-10067: added Quartz job that by default doesn't fire until 2099 and can be manually invoked over JMX.
            64223: MNT-10067: improved testing for AuditDAOTest and added PropertyValueDAOTest
            64685: MNT-10067: added AttributeServiceTest
            65796: MNT-10067 : Cleanup alf_prop_XXX data
            Implemented a performance test.
            65983: MNT-10067 : Cleanup alf_prop_XXX data
            Reworked the MySQL script.
            Added time measurements for entry creation.
            66116: MNT-10067 : Cleanup alf_prop_XXX data
            For MySQL:
            1) Renamed temp tables.
            2) Split the script into execution and cleanup of temp tables parts.
            67023: MNT-10067 : Cleanup alf_prop_XXX data
            Modified MySQL script to skip null values from alf_prop_unique_ctx.prop1_id.
            67199: MNT-10067 : Cleanup alf_prop_XXX data
            Implemented the latest changes of the script for other DB flavors.
            Removed the MS SQL Server transaction marks.
            67763: MNT-10067 : Cleanup alf_prop_XXX data
            Removed unnecessary temporary index dropping.
            Added additional cleanup before main script execution.
            68710: MNT-10067 : Cleanup alf_prop_XXX data
            Added batch logging.
            Moved clearCaches() statement in PropertyValueDAOImpl.cleanupUnusedValues() to finally block.
            68711: MNT-10067 : Cleanup alf_prop_XXX data
            Added batching for MySQL script.
            69602: MNT-10067 : Cleanup alf_prop_XXX data
            Updated scripts for all DB flavors with batching.
            69768: MNT-10067 : Cleanup alf_prop_XXX data
            Fixed failing ScriptBundleExecutorImplIntegrationTest and ScriptExecutorImplIntegrationTest.
            70058: Sync with latest changes in V4.1-BUG-FIX.


git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@74691 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
Author: Will Abson
Date:   2014-06-25 15:26:31 +00:00
Parent: b703c1de1f
Commit: 04cead57a6
39 changed files with 2139 additions and 2 deletions


@@ -0,0 +1,49 @@
/*
* Copyright (C) 2005-2014 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.repo.attributes;
import org.alfresco.repo.domain.propval.PropertyValueDAO;
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
/**
* Cleanup job to initiate cleaning of unused values from the alf_prop_xxx tables.
*
* @author Matt Ward
*/
public class PropTablesCleanupJob implements Job
{
protected static final Object PROPERTY_VALUE_DAO_KEY = "propertyValueDAO";
@Override
public void execute(JobExecutionContext jobCtx) throws JobExecutionException
{
JobDataMap jobData = jobCtx.getJobDetail().getJobDataMap();
PropertyValueDAO propertyValueDAO = (PropertyValueDAO) jobData.get(PROPERTY_VALUE_DAO_KEY);
if (propertyValueDAO == null)
{
throw new IllegalArgumentException(PROPERTY_VALUE_DAO_KEY + " in job data map was null");
}
propertyValueDAO.cleanupUnusedValues();
}
}
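
The Quartz/Spring configuration that actually registers this job is not part of the Java sources in this change; per the merge notes above, the default trigger does not fire until 2099, so the job is normally invoked manually (for example over JMX). A rough, self-contained sketch of such wiring follows, assuming the Quartz 2.x builder API; the class, the job/trigger names and the cron expression are illustrative:

import org.alfresco.repo.attributes.PropTablesCleanupJob;
import org.alfresco.repo.domain.propval.PropertyValueDAO;
import org.quartz.CronScheduleBuilder;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class PropTablesCleanupScheduling
{
    /**
     * Illustrative only: registers the cleanup job with a far-future cron trigger
     * so that in practice it runs only when triggered manually (e.g. over JMX).
     */
    public static void schedule(PropertyValueDAO propertyValueDAO) throws SchedulerException
    {
        JobDetail jobDetail = JobBuilder.newJob(PropTablesCleanupJob.class)
                .withIdentity("propTablesCleanupJob")
                .build();
        // The job looks its DAO up in the JobDataMap, so it must be supplied here.
        jobDetail.getJobDataMap().put("propertyValueDAO", propertyValueDAO);

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("propTablesCleanupTrigger")
                // Effectively "never": 03:00 on days in the year 2099 only.
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 3 ? * * 2099"))
                .build();

        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.scheduleJob(jobDetail, trigger);
        scheduler.start();
    }
}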


@@ -1588,4 +1588,15 @@ public abstract class AbstractPropertyValueDAOImpl implements PropertyValueDAO
// This will have put the values into the correct containers
return result;
}
protected void clearCaches()
{
propertyClassCache.clear();
propertyDateValueCache.clear();
propertyStringValueCache.clear();
propertyDoubleValueCache.clear();
propertySerializableValueCache.clear();
propertyCache.clear();
propertyValueCache.clear();
}
}


@@ -362,4 +362,9 @@ public interface PropertyValueDAO
* @throws IllegalArgumentException if rows don't all share the same root property ID
*/
Serializable convertPropertyIdSearchRows(List<PropertyIdSearchRow> rows);
/**
* Remove orphaned properties.
*/
void cleanupUnusedValues();
}


@@ -38,6 +38,7 @@ import org.alfresco.repo.domain.propval.PropertyStringValueEntity;
import org.alfresco.repo.domain.propval.PropertyUniqueContextEntity;
import org.alfresco.repo.domain.propval.PropertyValueEntity;
import org.alfresco.repo.domain.propval.PropertyValueEntity.PersistedType;
import org.alfresco.repo.domain.schema.script.ScriptBundleExecutor;
import org.alfresco.util.Pair;
import org.apache.ibatis.session.ResultContext;
import org.apache.ibatis.session.ResultHandler;
@@ -98,11 +99,18 @@ public class PropertyValueDAOImpl extends AbstractPropertyValueDAOImpl
private SqlSessionTemplate template;
private ScriptBundleExecutor scriptExecutor;
public final void setSqlSessionTemplate(SqlSessionTemplate sqlSessionTemplate)
{
this.template = sqlSessionTemplate;
}
public void setScriptExecutor(ScriptBundleExecutor scriptExecutor)
{
this.scriptExecutor = scriptExecutor;
}
//================================
// 'alf_prop_class' accessors
@@ -672,4 +680,31 @@ public class PropertyValueDAOImpl extends AbstractPropertyValueDAOImpl
entity.setId(rootPropId);
return template.delete(DELETE_PROPERTY_LINKS_BY_ROOT_ID, entity);
}
@Override
public void cleanupUnusedValues()
{
// execute clean up in case of previous failures
scriptExecutor.exec("alfresco/dbscripts/utility/${db.script.dialect}", "CleanAlfPropTablesPostExec.sql");
try
{
scriptExecutor.exec("alfresco/dbscripts/utility/${db.script.dialect}", "CleanAlfPropTables.sql");
}
finally
{
try
{
// execute clean up
scriptExecutor.exec("alfresco/dbscripts/utility/${db.script.dialect}", "CleanAlfPropTablesPostExec.sql");
}
catch (Exception e)
{
if (logger.isErrorEnabled())
{
logger.error("The cleanup failed with an error: ", e);
}
}
clearCaches();
}
}
}


@@ -0,0 +1,45 @@
/*
* Copyright (C) 2005-2014 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.repo.domain.schema.script;
/**
* Executes a set of zero or more SQL scripts.
*
* @author Matt Ward
*/
public interface ScriptBundleExecutor
{
/**
* Runs a bundle of scripts. If any script within the bundle fails, the remaining scripts are not run.
*
* @param dir Directory where the script bundle may be found.
* @param scripts Names of the SQL scripts to run, relative to the specified directory.
*/
void exec(String dir, String... scripts);
/**
* Runs a bundle of scripts. If any script within the bundle fails, the remaining scripts
* are not run, with the exception of postScript, which is always run (a clean-up script, for example).
*
* @param dir Directory where the script bundle may be found.
* @param postScript A script that is always run after the other scripts.
* @param scripts Names of the SQL scripts to run, relative to the specified directory.
*/
void execWithPostScript(String dir, String postScript, String... scripts);
}
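
An illustrative usage sketch (the directory and script names are those used by the alf_prop_xxx cleanup elsewhere in this change; the wrapper class is hypothetical and the executor would normally be injected by Spring):

import org.alfresco.repo.domain.schema.script.ScriptBundleExecutor;

public class CleanupScriptRunner
{
    private final ScriptBundleExecutor bundleExecutor;

    public CleanupScriptRunner(ScriptBundleExecutor bundleExecutor)
    {
        this.bundleExecutor = bundleExecutor;
    }

    public void runPropTablesCleanup()
    {
        // Run the main cleanup script; the post-script always runs afterwards,
        // even if the main script fails (e.g. to clean up temporary tables).
        bundleExecutor.execWithPostScript(
                "alfresco/dbscripts/utility/${db.script.dialect}",
                "CleanAlfPropTablesPostExec.sql",
                "CleanAlfPropTables.sql");
    }
}

Note that PropertyValueDAOImpl.cleanupUnusedValues() above achieves the same always-run-post-script behaviour with two exec(...) calls in a try/finally rather than execWithPostScript().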


@@ -0,0 +1,74 @@
/*
* Copyright (C) 2005-2014 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.repo.domain.schema.script;
import java.io.File;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
/**
* {@link ScriptBundleExecutor} implementation. Uses the supplied {@link ScriptExecutor}
* to invoke multiple SQL scripts in a particular directory.
*
* @author Matt Ward
*/
public class ScriptBundleExecutorImpl implements ScriptBundleExecutor
{
private ScriptExecutor scriptExecutor;
protected Log log = LogFactory.getLog(ScriptBundleExecutorImpl.class);
public ScriptBundleExecutorImpl(ScriptExecutor scriptExecutor)
{
this.scriptExecutor = scriptExecutor;
}
@Override
public void exec(String dir, String... scripts)
{
for (String name : scripts)
{
File file = new File(dir, name);
try
{
scriptExecutor.executeScriptUrl(file.getPath());
}
catch (Throwable e)
{
log.error("Unable to run SQL script: dir=" + dir + ", name="+name, e);
// Do not run any more scripts.
break;
}
}
}
@Override
public void execWithPostScript(String dir, String postScript, String... scripts)
{
try
{
exec(dir, scripts);
}
finally
{
// Always run the post-script.
exec(dir, postScript);
}
}
}


@@ -0,0 +1,29 @@
/*
* Copyright (C) 2005-2014 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.repo.domain.schema.script;
/**
* Defines a SQL script executor that executes a single SQL script.
*
* @author Matt Ward
*/
public interface ScriptExecutor
{
void executeScriptUrl(String scriptUrl) throws Exception;
}


@@ -0,0 +1,597 @@
/*
* Copyright (C) 2005-2014 Alfresco Software Limited.
*
* This file is part of Alfresco
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
*/
package org.alfresco.repo.domain.schema.script;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import javax.sql.DataSource;
import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.repo.content.filestore.FileContentWriter;
import org.alfresco.service.cmr.repository.ContentWriter;
import org.alfresco.util.LogUtil;
import org.alfresco.util.TempFileProvider;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.hibernate.cfg.Configuration;
import org.hibernate.dialect.Dialect;
import org.hibernate.dialect.MySQLInnoDBDialect;
import org.hibernate.dialect.PostgreSQLDialect;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.core.io.support.ResourcePatternResolver;
import org.springframework.orm.hibernate3.LocalSessionFactoryBean;
public class ScriptExecutorImpl implements ScriptExecutor
{
/** The placeholder for the configured <code>Dialect</code> class name: <b>${db.script.dialect}</b> */
private static final String PLACEHOLDER_DIALECT = "\\$\\{db\\.script\\.dialect\\}";
/** The global property containing the default batch size used by --FOREACH */
private static final String PROPERTY_DEFAULT_BATCH_SIZE = "system.upgrade.default.batchsize";
private static final String MSG_EXECUTING_GENERATED_SCRIPT = "schema.update.msg.executing_generated_script";
private static final String MSG_EXECUTING_COPIED_SCRIPT = "schema.update.msg.executing_copied_script";
private static final String MSG_EXECUTING_STATEMENT = "schema.update.msg.executing_statement";
private static final String MSG_OPTIONAL_STATEMENT_FAILED = "schema.update.msg.optional_statement_failed";
private static final String ERR_STATEMENT_FAILED = "schema.update.err.statement_failed";
private static final String ERR_SCRIPT_NOT_FOUND = "schema.update.err.script_not_found";
private static final String ERR_STATEMENT_INCLUDE_BEFORE_SQL = "schema.update.err.statement_include_before_sql";
private static final String ERR_STATEMENT_VAR_ASSIGNMENT_BEFORE_SQL = "schema.update.err.statement_var_assignment_before_sql";
private static final String ERR_STATEMENT_VAR_ASSIGNMENT_FORMAT = "schema.update.err.statement_var_assignment_format";
private static final String ERR_STATEMENT_TERMINATOR = "schema.update.err.statement_terminator";
private static final int DEFAULT_MAX_STRING_LENGTH = 1024;
private static volatile int maxStringLength = DEFAULT_MAX_STRING_LENGTH;
private Dialect dialect;
private ResourcePatternResolver rpr = new PathMatchingResourcePatternResolver(this.getClass().getClassLoader());
private static Log logger = LogFactory.getLog(ScriptExecutorImpl.class);
private LocalSessionFactoryBean localSessionFactory;
private Properties globalProperties;
private ThreadLocal<StringBuilder> executedStatementsThreadLocal = new ThreadLocal<StringBuilder>();
private DataSource dataSource;
/**
* @return Returns the maximum number of characters that a string field can be
*/
public static final int getMaxStringLength()
{
return ScriptExecutorImpl.maxStringLength;
}
/**
* Truncates a string, if necessary, so that it fits into the string columns in the schema. Text fields can
* either cope with arbitrarily long text or have the default limit, {@link #DEFAULT_MAX_STRING_LENGTH}.
*
* @param value the string to check
* @return Returns a string that is short enough for {@link ScriptExecutorImpl#getMaxStringLength()}
*
* @since 3.2
*/
public static final String trimStringForTextFields(String value)
{
if (value != null && value.length() > maxStringLength)
{
return value.substring(0, maxStringLength);
}
else
{
return value;
}
}
/**
* Sets the previously auto-detected Hibernate dialect.
*
* @param dialect
* the dialect
*/
public void setDialect(Dialect dialect)
{
this.dialect = dialect;
}
public ScriptExecutorImpl()
{
globalProperties = new Properties();
}
public void setLocalSessionFactory(LocalSessionFactoryBean localSessionFactory)
{
this.localSessionFactory = localSessionFactory;
}
public LocalSessionFactoryBean getLocalSessionFactory()
{
return localSessionFactory;
}
public void setDataSource(DataSource dataSource)
{
this.dataSource = dataSource;
}
/**
* Sets the properties map from which we look up some configuration settings.
*
* @param globalProperties
* the global properties
*/
public void setGlobalProperties(Properties globalProperties)
{
this.globalProperties = globalProperties;
}
@Override
public void executeScriptUrl(String scriptUrl) throws Exception
{
Configuration cfg = localSessionFactory.getConfiguration();
Connection connection = dataSource.getConnection();
connection.setAutoCommit(true);
try
{
executeScriptUrl(cfg, connection, scriptUrl);
}
finally
{
connection.close();
}
}
private void executeScriptUrl(Configuration cfg, Connection connection, String scriptUrl) throws Exception
{
Dialect dialect = Dialect.getDialect(cfg.getProperties());
String dialectStr = dialect.getClass().getSimpleName();
InputStream scriptInputStream = getScriptInputStream(dialect.getClass(), scriptUrl);
// check that it exists
if (scriptInputStream == null)
{
throw AlfrescoRuntimeException.create(ERR_SCRIPT_NOT_FOUND, scriptUrl);
}
// write the script to a temp location for future and failure reference
File tempFile = null;
try
{
tempFile = TempFileProvider.createTempFile("AlfrescoSchema-" + dialectStr + "-Update-", ".sql");
ContentWriter writer = new FileContentWriter(tempFile);
writer.putContent(scriptInputStream);
}
finally
{
try { scriptInputStream.close(); } catch (Throwable e) {} // usually a duplicate close
}
// Replace the dialect placeholder in the script URL (used for reporting)
String dialectScriptUrl = scriptUrl.replaceAll(PLACEHOLDER_DIALECT, dialect.getClass().getName());
// now execute it
executeScriptFile(cfg, connection, tempFile, dialectScriptUrl);
}
/**
* Replaces the dialect placeholder in the resource URL and attempts to find a file for
* it. If not found, the dialect hierarchy will be walked until a compatible resource is
* found. This makes it possible to have resources that are generic to all dialects.
*
* @return The Resource, otherwise null
*/
private Resource getDialectResource(Class dialectClass, String resourceUrl)
{
// replace the dialect placeholder
String dialectResourceUrl = resolveDialectUrl(dialectClass, resourceUrl);
// get a handle on the resource
Resource resource = rpr.getResource(dialectResourceUrl);
if (!resource.exists())
{
// it wasn't found. Get the superclass of the dialect and try again
Class superClass = dialectClass.getSuperclass();
if (Dialect.class.isAssignableFrom(superClass))
{
// we still have a Dialect - try again
return getDialectResource(superClass, resourceUrl);
}
else
{
// we have exhausted all options
return null;
}
}
else
{
// we have a handle to it
return resource;
}
}
/**
* Takes resource URL containing the {@link ScriptExecutorImpl#PLACEHOLDER_DIALECT dialect placeholder text}
* and substitutes the placeholder with the name of the given dialect's class.
* <p/>
* For example:
* <pre>
* resolveDialectUrl(MySQLInnoDBDialect.class, "classpath:alfresco/db/${db.script.dialect}/myfile.xml")
* </pre>
* would give the following String:
* <pre>
* classpath:alfresco/db/org.hibernate.dialect.MySQLInnoDBDialect/myfile.xml
* </pre>
*
* @param dialectClass the dialect whose fully-qualified class name replaces the placeholder
* @param resourceUrl the resource URL containing the dialect placeholder
* @return the resource URL with the placeholder substituted
*/
private String resolveDialectUrl(Class dialectClass, String resourceUrl)
{
return resourceUrl.replaceAll(PLACEHOLDER_DIALECT, dialectClass.getName());
}
/**
* Replaces the dialect placeholder in the script URL and attempts to find a file for
* it. If not found, the dialect hierarchy will be walked until a compatible script is
* found. This makes it possible to have scripts that are generic to all dialects.
*
* @return Returns an input stream onto the script, otherwise null
*/
private InputStream getScriptInputStream(Class dialectClazz, String scriptUrl) throws Exception
{
Resource resource = getDialectResource(dialectClazz, scriptUrl);
if (resource == null)
{
return null;
}
return resource.getInputStream();
}
/**
* @param cfg the Hibernate configuration
* @param connection the DB connection to use
* @param scriptFile the file containing the statements
* @param scriptUrl the URL of the script to report. If this is null, the script
* is assumed to have been auto-generated.
*/
private void executeScriptFile(
Configuration cfg,
Connection connection,
File scriptFile,
String scriptUrl) throws Exception
{
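// The parser below understands these line-level directives in addition to plain SQL
// statements terminated by a semicolon (summary of the handling that follows):
//   --INCLUDE:<scriptUrl>         execute another script in-line at this point
//   --ASSIGN:<var>=<column>       capture <column> from the next statement's result set
//                                 and make it available as ${<var>}
//   --FOREACH <table>.<col> <batch.size.property>
//                                 repeat the next statement in batches, exposing
//                                 ${LOWERBOUND} and ${UPPERBOUND} for each batch
//   --BEGIN TXN / --END TXN       disable auto-commit, then commit at the end marker
//   <statement>; ... (optional)   a trailing "(optional)" marks a statement whose
//                                 failure is logged and ignored
// Lines starting with --, // or /* between statements are treated as comments, and
// ${var} placeholders (including true/false/TRUE/FALSE/now/NOW) are substituted
// before execution.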
final Dialect dialect = Dialect.getDialect(cfg.getProperties());
StringBuilder executedStatements = executedStatementsThreadLocal.get();
if (executedStatements == null)
{
executedStatements = new StringBuilder(8094);
executedStatementsThreadLocal.set(executedStatements);
}
if (scriptUrl == null)
{
LogUtil.info(logger, MSG_EXECUTING_GENERATED_SCRIPT, scriptFile);
}
else
{
LogUtil.info(logger, MSG_EXECUTING_COPIED_SCRIPT, scriptFile, scriptUrl);
}
InputStream scriptInputStream = new FileInputStream(scriptFile);
BufferedReader reader = new BufferedReader(new InputStreamReader(scriptInputStream, "UTF-8"));
try
{
int line = 0;
// loop through all statements
StringBuilder sb = new StringBuilder(1024);
String fetchVarName = null;
String fetchColumnName = null;
String batchTableName = null;
boolean doBatch = false;
int batchUpperLimit = 0;
int batchSize = 1;
Map<String, Object> varAssignments = new HashMap<String, Object>(13);
// Special variable assignments:
if (dialect instanceof PostgreSQLDialect)
{
// Needs true/false literals
varAssignments.put("true", "true");
varAssignments.put("false", "false");
varAssignments.put("TRUE", "TRUE");
varAssignments.put("FALSE", "FALSE");
}
else
{
// Needs 1/0 for true/false
varAssignments.put("true", "1");
varAssignments.put("false", "0");
varAssignments.put("TRUE", "1");
varAssignments.put("FALSE", "0");
}
long now = System.currentTimeMillis();
varAssignments.put("now", new Long(now).toString());
varAssignments.put("NOW", new Long(now).toString());
while(true)
{
String sqlOriginal = reader.readLine();
line++;
if (sqlOriginal == null)
{
// nothing left in the file
break;
}
// trim it
String sql = sqlOriginal.trim();
// Check for includes
if (sql.startsWith("--INCLUDE:"))
{
if (sb.length() > 0)
{
// This can only be set before a new SQL statement
throw AlfrescoRuntimeException.create(ERR_STATEMENT_INCLUDE_BEFORE_SQL, (line - 1), scriptUrl);
}
String includedScriptUrl = sql.substring(10, sql.length());
// Execute the script in line
executeScriptUrl(cfg, connection, includedScriptUrl);
}
// Check for variable assignment
else if (sql.startsWith("--ASSIGN:"))
{
if (sb.length() > 0)
{
// This can only be set before a new SQL statement
throw AlfrescoRuntimeException.create(ERR_STATEMENT_VAR_ASSIGNMENT_BEFORE_SQL, (line - 1), scriptUrl);
}
String assignStr = sql.substring(9, sql.length());
String[] assigns = assignStr.split("=");
if (assigns.length != 2 || assigns[0].length() == 0 || assigns[1].length() == 0)
{
throw AlfrescoRuntimeException.create(ERR_STATEMENT_VAR_ASSIGNMENT_FORMAT, (line - 1), scriptUrl);
}
fetchVarName = assigns[0];
fetchColumnName = assigns[1];
continue;
}
// Handle looping control
else if (sql.startsWith("--FOREACH"))
{
// --FOREACH table.column batch.size.property
String[] args = sql.split("[ \\t]+");
int sepIndex;
if (args.length == 3 && (sepIndex = args[1].indexOf('.')) != -1)
{
doBatch = true;
// Select the upper bound of the table column
batchTableName = args[1].substring(0, sepIndex);
String stmt = "SELECT MAX(" + args[1].substring(sepIndex+1) + ") AS upper_limit FROM " + batchTableName;
Object fetchedVal = executeStatement(connection, stmt, "upper_limit", false, line, scriptFile);
if (fetchedVal instanceof Number)
{
batchUpperLimit = ((Number)fetchedVal).intValue();
// Read the batch size from the named property
String batchSizeString = globalProperties.getProperty(args[2]);
// Fall back to the default property
if (batchSizeString == null)
{
batchSizeString = globalProperties.getProperty(PROPERTY_DEFAULT_BATCH_SIZE);
}
batchSize = batchSizeString == null ? 10000 : Integer.parseInt(batchSizeString);
}
}
continue;
}
// Allow transaction delineation
else if (sql.startsWith("--BEGIN TXN"))
{
connection.setAutoCommit(false);
continue;
}
else if (sql.startsWith("--END TXN"))
{
connection.commit();
connection.setAutoCommit(true);
continue;
}
// Check for comments
if (sql.length() == 0 ||
sql.startsWith( "--" ) ||
sql.startsWith( "//" ) ||
sql.startsWith( "/*" ) )
{
if (sb.length() > 0)
{
// we have an unterminated statement
throw AlfrescoRuntimeException.create(ERR_STATEMENT_TERMINATOR, (line - 1), scriptUrl);
}
// there has not been anything to execute - it's just a comment line
continue;
}
// have we reached the end of a statement?
boolean execute = false;
boolean optional = false;
if (sql.endsWith(";"))
{
sql = sql.substring(0, sql.length() - 1);
execute = true;
optional = false;
}
else if (sql.endsWith("(optional)") || sql.endsWith("(OPTIONAL)"))
{
// Get the end of statement
int endIndex = sql.lastIndexOf(';');
if (endIndex > -1)
{
sql = sql.substring(0, endIndex);
execute = true;
optional = true;
}
else
{
// Ends with "(optional)" but there is no semi-colon.
// Just take it at face value and probably fail.
}
}
// Add newline
if (sb.length() > 0)
{
sb.append("\n");
}
// Add leading whitespace for formatting
int whitespaceCount = sqlOriginal.indexOf(sql);
for (int i = 0; i < whitespaceCount; i++)
{
sb.append(" ");
}
// append to the statement being built up
sb.append(sql);
// execute, if required
if (execute)
{
// Now substitute and execute the statement the appropriate number of times
String unsubstituted = sb.toString();
for(int lowerBound = 0; lowerBound <= batchUpperLimit; lowerBound += batchSize)
{
sql = unsubstituted;
// Substitute in the next pair of range parameters
if (doBatch)
{
logger.info("Processing from " + lowerBound + " to " + (lowerBound + batchSize) + " rows of " + batchUpperLimit + " rows from table " + batchTableName + ".");
varAssignments.put("LOWERBOUND", String.valueOf(lowerBound));
varAssignments.put("UPPERBOUND", String.valueOf(lowerBound + batchSize - 1));
}
// Perform variable replacement using the ${var} format
for (Map.Entry<String, Object> entry : varAssignments.entrySet())
{
String var = entry.getKey();
Object val = entry.getValue();
sql = sql.replaceAll("\\$\\{" + var + "\\}", val.toString());
}
// Expand the ${TRUE} placeholder: PostgreSQL needs the TRUE literal, other databases use 1
if (this.dialect != null && this.dialect instanceof PostgreSQLDialect)
{
sql = sql.replaceAll("\\$\\{TRUE\\}", "TRUE");
}
else
{
sql = sql.replaceAll("\\$\\{TRUE\\}", "1");
}
if (this.dialect != null && this.dialect instanceof MySQLInnoDBDialect)
{
// note: enable bootstrap on MySQL 5.5 (eg. for auto-generated SQL, such as JBPM)
sql = sql.replaceAll("(?i)TYPE=InnoDB", "ENGINE=InnoDB");
}
Object fetchedVal = executeStatement(connection, sql, fetchColumnName, optional, line, scriptFile);
if (fetchVarName != null && fetchColumnName != null)
{
varAssignments.put(fetchVarName, fetchedVal);
}
}
sb.setLength(0);
fetchVarName = null;
fetchColumnName = null;
batchTableName = null;
doBatch = false;
batchUpperLimit = 0;
batchSize = 1;
}
}
}
finally
{
try { reader.close(); } catch (Throwable e) {}
try { scriptInputStream.close(); } catch (Throwable e) {}
}
}
/**
* Execute the given SQL statement, absorbing exceptions that we expect during
* schema creation or upgrade.
*
* @param fetchColumnName the name of the column value to return
*/
private Object executeStatement(
Connection connection,
String sql,
String fetchColumnName,
boolean optional,
int line,
File file) throws Exception
{
StringBuilder executedStatements = executedStatementsThreadLocal.get();
if (executedStatements == null)
{
throw new IllegalArgumentException("The executedStatementsThreadLocal must be populated");
}
Statement stmt = connection.createStatement();
Object ret = null;
try
{
if (logger.isDebugEnabled())
{
LogUtil.debug(logger, MSG_EXECUTING_STATEMENT, sql);
}
boolean haveResults = stmt.execute(sql);
// Record the statement
executedStatements.append(sql).append(";\n\n");
if (haveResults && fetchColumnName != null)
{
ResultSet rs = stmt.getResultSet();
if (rs.next())
{
// Get the result value
ret = rs.getObject(fetchColumnName);
}
}
}
catch (SQLException e)
{
if (optional)
{
// it was marked as optional, so we just ignore it
LogUtil.debug(logger, MSG_OPTIONAL_STATEMENT_FAILED, sql, e.getMessage(), file.getAbsolutePath(), line);
}
else
{
LogUtil.error(logger, ERR_STATEMENT_FAILED, sql, e.getMessage(), file.getAbsolutePath(), line);
throw e;
}
}
finally
{
try { stmt.close(); } catch (Throwable e) {}
}
return ret;
}
}
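
Finally, a minimal hand-wiring sketch of how these pieces fit together; in the product the equivalent wiring lives in Spring bean definitions that are not part of the Java sources above, and the class and method below are illustrative:

import java.util.Properties;

import javax.sql.DataSource;

import org.alfresco.repo.domain.schema.script.ScriptBundleExecutor;
import org.alfresco.repo.domain.schema.script.ScriptBundleExecutorImpl;
import org.alfresco.repo.domain.schema.script.ScriptExecutorImpl;
import org.hibernate.dialect.Dialect;
import org.springframework.orm.hibernate3.LocalSessionFactoryBean;

public class SchemaScriptWiringExample
{
    public static ScriptBundleExecutor buildBundleExecutor(DataSource dataSource,
                                                           LocalSessionFactoryBean localSessionFactory,
                                                           Dialect dialect,
                                                           Properties globalProperties)
    {
        // Single-script executor, configured through the setters shown above.
        ScriptExecutorImpl scriptExecutor = new ScriptExecutorImpl();
        scriptExecutor.setDataSource(dataSource);
        scriptExecutor.setLocalSessionFactory(localSessionFactory);
        scriptExecutor.setDialect(dialect);
        scriptExecutor.setGlobalProperties(globalProperties);

        // Bundle executor that runs several scripts from one directory,
        // stopping at the first failure.
        return new ScriptBundleExecutorImpl(scriptExecutor);
    }
}

PropertyValueDAOImpl would then receive the resulting bundle executor via setScriptExecutor(...) so that cleanupUnusedValues() can run the cleanup scripts shown above.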