mirror of
https://github.com/Alfresco/alfresco-community-repo.git
synced 2025-07-07 18:25:23 +00:00
47880: Create branch for Cloud Convergence from the latest state of HEAD (Revision 47874) 47886: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 33052: (RECORD ONLY) Branch for Enterprise 4.0 service pack development 38002: (RECORD ONLY) Create branch for 4.1 Enterprise releases, based on 4.0.2 38003: (RECORD ONLY) Update version to 4.1.0 38079: (RECORD ONLY) Updated schema version to 5100 38536: (RECORD ONLY) Merged V4.1-BUG-FIX to V4.1 38219: ALF-14674: DOS voodoo to make start_deployment.bat work, as installed by Bitrock 38344: ALF-14674: Deployment installer still doesn't work - Use ${installdir.escape_backslashes} instead of ${installdir} 38471: ALF-14674: Deployment installer still doesn't work - Correction to use of ${installdir.escape_backslashes} 39519: (RECORD ONLY) Merged PATCHES/V4.0.2 to V4.1 38899: ALF-15005: Merged V4.0-BUG-FIX to PATCHES/V4.0.2 37920: ALF-13816: Permission Denied on web-client browsing if parent does not inherit permissions - FileFolderService getNamePath() now performs toFileInfo() as SystemUser. 38900: ALF-15005: Merged V4.1-BUG-FIX to PATCHES/V4.0.2 38549: ALF-11861: Maintain the same default root of WebDav for Alfresco 4.0 as was in pre-4.0 Removed overriding protocols.rootPath property from installer and enterprise overlay versions of alfresco-global.properties so that correct setting in repository.properties is used. 
39494: ALF-15213 / ALF-15170: Can't change folder permissions in Private or Public-moderated sites - Fix by Dmitry V 44843: (RECORD ONLY) Created hotfix branch off V4.1 build 372 revision 44743 (candidate 4.1.2 release) 45708: (RECORD ONLY) Merged PATCHES/V4.1.2 to PATCHES/V4.1.3 45570: Merged V3.4-BUG-FIX to PATCHES/V4.1.2 43939: ALF-17197 / ALF-16917: Merged PATCHES/V3.4.11 to V3.4-BUG-FIX 43896: MNT-198: Activity feeds get not generated in private sites for added files if username in LDAP-AD contains uppercase letters - Now we can cope with a runAs where the username is in the wrong case 45714: (RECORD ONLY) Merged BRANCHES/DEV/V4.1-BUG-FIX to PATCHES/DEV/V4.1.3 45513: MNT-279: Use binary search in cached authority search to cut down search time when a group contains an astronomical number of authorities - Experimental fix to cut down on severe profiling hit 45715: (RECORD ONLY) Merged BRANCHES/DEV/V4.1-BUG-FIX to PATCHES/V4.1.3 44848: Fix for ALF-17178 SolrLuceneAnalyser.findAnalyser generating InvalidQNameExceptions where they are easily protected. 46188: (RECORD ONLY) Merged BRANCHES/DEV/V4.1-BUG-FIX to PATCHES/V4.1.3 46014: Fix for ALF-17732 - SWF files are considered insecure content and should not be displayed directly in the browser. 46160: Fix for ALF-17759 - HTML files are stripped from metadata and style information after they are uploaded. 46165: Fix for ALF-17787 - Site Members 'All Members' link should not run query immediately 46169: Fix for ALF-17787 - Site Members 'All Members' link should not run query immediately - missing file 46186: Fix for ALF-17786 - Site dashboard page issues too many requests (Site Members dashlet issues avatar requests when it doesn't need to) 46242: (RECORD ONLY) Merged BRANCHES/DEV/V4.1-BUG-FIX to PATCHES/V4.1.3: 46184: Refactoring a test class to use JUnit Rules - as part of attempt to reproduce ALF-17797. 46192: Enhancement to JUnit Rule TemporaryNodes.java as required by fix for ALF-17797. 46194: Fix for ALF-17797. 
AddFailedThumbnailActionExecuter is failing. 46710: (RECORD ONLY) Create branch for Cloud Convergence from the latest state of 4.1.3 (RC5, Build 85, Revision 46648) 47908: Merged from DEV/CONV_V413 to DEV/CONV_HEAD 46788: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30323: (RECORD ONLY) Merged HEAD to BRANCHES/DEV/THOR1: 30171: ALF-9613: caching content store. Various improvements and bug fixes. Including: 30325: THOR-114: S3 content store 30326: THOR-128: S3 content store 30333: THOR-139 F101: Get account for user e-mail id 30335: Merge from THOR0 to THOR1 r30274: THOR-135 is email address accepted by Alfresco? Part One. 30340: THOR-99: Thor module - enable tests 30341: Removing duplicate account-service-context.xml file. 30343: Merge THOR0 to THOR1 30339: Test email signup in Share complete 30338: New form runtime features: - Yellow background is displayed for mandatory fields without value - Red background is displayed for fields with validation errors - Error message is displayed in a balloon when fields with errors have focus - Using balloons is now the default method of displaying errors - Removed balloon code from create site menu since it's now handled automatically - An alternative to balloons are "error containers" (div with clickable red text labels focusing the field): setErrorContainer(divEl) - It's possible to setMultipleErrors(true) to display all the forms/fields errors in the "error container"/balloon. 
- It's possible to turn off the balloons and error containers completely by setting setErrorContainer(null) - js validation handlers no longer need to handle the messages OR the css classes for mandatory & invalid 30344: Missing value check caused js undefined error 30346: Minor css form fixes 30347: THOR-126: S3 content store - do not swallow exceptions 30348: THOR-66: disable unused services/features 30349: THOR-137 F88: Add existing external user (from another network) checkpoint 30350: THOR-135 Is email address accepted by Alfresco. 46789: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 35594: Fix merge issue 47930: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46762: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46768: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46769: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46778: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46780: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46786: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46791: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46792: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46808: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46809: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46819: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46829: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46839: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46842: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46844: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46846: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46847: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46876: (RECORD ONLY) Merged 
BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46877: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46878: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46879: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46880: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46881: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47947: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46737: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 35288: Alfresco Cloud (from BRANCHES/V4.0) 35389: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30170: Thor branch based on Swift feature complete 30185: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 28973: THOR-1: verify ability to create DB schema programmatically on AWS RDS (for MySQL & Oracle) 28999: THOR-3: Tenant Routing Data Source (dynamic tenant-aware DB connection pools) 29022: THOR-1: verify ability to create DB schema programmatically on AWS RDS (for MySQL & Oracle) 29031: THOR-1: verify ability to create DB schema programmatically on AWS RDS (for MySQL & Oracle) 30186: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: (3 conflicts resolved) 29116: THOR-3: Tenant Routing Data Source (dynamic tenant-aware DB connection pools) 29174: THOR-24 Set up new Alfresco AMP module project. 29186: THOR-25 Copy and refactor Account Service from SambaJAM 29193: ImporterComponent - prep for THOR-7 29198: THOR-7: Tenant Service API - Create Tenant (using separate DB schema) 29204: THOR-29 Account Type Registry 29234: THOR-7: Tenant Service API - Create Tenant (using separate DB schema) 29246: THOR-7: Tenant Service API - Create Tenant (using separate DB schema) 29251: THOR-30 Added AccountDAO interface along with two implementations: AccountDAOImpl (not implemented) which will manage Account data in an RDB via iBatis. AccountDAO_InMemory which manages AccountInfo in simple HashMaps for testing purposes only. 
29258: THOR-28 29259: Addendum to THOR-25. Moved account-service spring config into a subfolder. (trivial) 35393: (RECORD ONLY) Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: - fix up classpath (remove mybatis 1.0.0 -> 1.0.1 and chemistry 0.4.0 -> 0.6.0) 35411: (RECORD ONLY) Merged BRANCHES/DEV/V4.0-BUG-FIX to BRANCHES/DEV/CLOUD1: 35409: Merged HEAD to BRANCHES/DEV/V4.0-BUG-FIX: 35399: ALF-12874: Schema reference files are out of date. 35452: (RECORD ONLY) Merged BRANCHES/DEV/V4.0-BUG-FIX to BRANCHES/DEV/CLOUD1: 34219: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/V4.0-BUG-FIX: 32096: THOR-429: Fix "MT: Thumbnail + Preview are not updated (after uploading new version)" 32125: THOR-429: Fix "MT: Thumbnail + Preview are not updated (after uploading new version)" 34220: Minor: follow-on to r34219 (ALF-11563) 34747: ALF-13262: adding missing indexes for new schema's (activiti-schema create) + schema patch for existing schema 35417: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/V4.0-BUG-FIX: (THOR-6 / ALF-13755) 29356: THOR-6: MT is configured (but not enabled) by default - will be auto-enabled when first tenant is created 29455: THOR-6: build test/fix 29471: THOR-6: build test/fix 35423: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/V4.0-BUG-FIX: (THOR-4 / ALF-13756) 29500: THOR-4: Replace Tenant attributes with Tenant table (alf_tenant) 29501: THOR-4: Replace Tenant attributes with Tenant table (alf_tenant) 29503: THOR-4: Replace Tenant attributes with Tenant table (alf_tenant) 47949: Merged HEAD to BRANCHES/DEV/CONV_HEAD: 47914: Merge fix for org.alfresco.repo.cache.AbstractAsynchronouslyRefreshedCache<T> R 46078, 46079, 46121 47958: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46746: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 35455: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30187: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 29260: THOR: Initial Tenant Admin Service REST API - create, delete, get (list) web scripts 29356: THOR-6: MT is 
configured by default 29366: THOR-59: selectively disable certain test suites (for THOR dev build plan) 29377: THOR-59: selectively disable certain test suites (for THOR dev build plan) 29398: Refactoring of code to remove deprecation warnings. Replaced lots of object.field accesses with object.getField() calls. Trivial changes, but with so many warnings I can't see the wood for the trees. 29400: THOR-59: selectively disable certain test suites (for THOR dev build plan) 35456: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30188: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 29442: THOR-59: selectively disable certain test suites (for THOR dev build plan) 29453: THOR-59: selectively disable certain test suites (for THOR dev build plan) 29455: THOR-76: track THOR build test failures and fix-up 29471: THOR-76: track THOR build test failures and fix-up 35459: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30189: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 29473: Preliminary checkin for THOR-44. Created placeholder interface/impl/spring config for a new UserService. 29497: THOR-76: track THOR build test failures and fix-up (LicenseComponentTest) 29500: THOR-4: Replace Tenant attributes with Tenant table (alf_tenant) 29501: THOR-4: Replace Tenant attributes with Tenant table (alf_tenant) 29503: THOR-4: Replace Tenant attributes with Tenant table (alf_tenant) 29511: THOR-59: selectively disable certain test suites (for THOR dev build plan) 29512: Adding a new JUnit4 test class with an @Ignore'd test in it - to see how Bamboo reports these. 
29514: THOR: Initial Tenant Admin Service REST API - create, delete, list web scripts 29515: THOR-59: selectively disable certain test suites (for THOR dev build plan) 29521: THOR-79 - mark AVM sitestore as unindexed 35461: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30190: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 29533: THOR-59: exclude certain N/A tests 29555: THOR-76: track THOR build test failures 29630: Added ant build targets for Cloud Module and a new executable for the Alfresco devenv. 29664: THOR-76: exclude system test suites 29667: THOR-64: add initial support for tenant routing data source 29676: THOR-76: exclude intermittent ActionTrackingServiceImplTest (pending ALF-9773 & ALF-9774) 29677: THOR-80: MT-aware S3 content store 29678: THOR-80: MT-aware S3 content store 29680: THOR-80: MT-aware S3 content store 29693: THOR-80: MT-aware S3 content store 29694: THOR-80: MT-aware S3 content store 47959: CONV_HEAD: CLOUD-1348 - comment back in MultiTDemoTest.testDeleteAllTenants 47967: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46748: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 35464: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30195: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 29774: Refactor Account DAO and Service. Boost Tests. Add appropriate headers. 29776: THOR-76: exclude intermittent ActionTrackingServiceImplTest (pending ALF-9773 & ALF-9774) 29795: Implemented MyBatis-backed Account DAO: 29817: Move (and rename) user service from repository to thor 30196: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 29827: THOR-69: TenantAdminDAO 29832: THOR-78: fix tenantEntityCache (shared) 29834: THOR-111: experimental config option for S3 content store to support flat root (ie. 
all tenant files in single folder) 29856: THOR updates 29857: THOR-76: exclude build components/projects 46761: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 35478: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30198: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 29869: THOR-92. BPMN2.0 workflow definition for account self-signup. 29871: THOR-93. REST API for self signup (and miscellaneous related items). 29882: THOR-102: Faster CreateTenant 29888: THOR-95. Placeholder email template for self-signup. 29889: Completion of THOR-95. Placeholder emails for self-signup. Added a 'you've already registered' template. 29896: THOR-89 F100: Create User Foundation API… 29912: Fix issue where module believed it was still executed after delete tenant 29940: THOR-96. First cut of a signup email sender delegate. This will be refined later - probably both in this sprint and the next. 29966: Fixing InvitationServiceImplTest failing tests, which are failing because the email templates are not there. 29978: THOR-89: Switch tenant for person creation 29982: THOR-89: Fix multi-domain account creation test after review with Jan 29983: THOR-102: Faster CreateTenant 29985: THOR-90: F99 Is email address already registered foundation API 29991: THOR-99: Thor module build/packaging 29994: Changes for THOR-92, THOR-93 and THOR-96. 30199: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 29995: THOR-93. Use the proper spring config in the test case. 29996: THOR-99: Thor module build/packaging 29997: Consolidated DaveC's EmailAddressService and my EMailUtil into a single feature. 29998: Follow-on to previous check-in (29997). Deletion of now-unused folder. 30000: Blatant attempt to get svn r=30k. Removing some dead config. 30001: THOR-96. Ensure that we get a meaningful exception when attempting to activate an account with no pending workflow for that email. 
30036: Resolve issues with tenant-independent user store - can now login via Share 30041: Package and auto deploy of license with Thor module 30048: Ensure that when a duplicate email prevents a workflow from creating an account, that the workflow still ends gracefully. 30049: Removing a dead class that I'd used to see how our Bamboo handles @Ignore(message=msg) @Test annotations. 30054: THOR-84 F82: List Accounts Foundation API 30067: THOR-87 List Accounts REST API. 30069: THOR-87. Completion of listAccounts REST API. Fixed the problems in the JUnit test case and tweaked the FTL slightly. 30071: Cosmetic changes as part of THOR-93. 30072: Oops. Broke a test case. Follow-on to previous (30071) check-in which cosmetically changed JSON as part of THOR-93. 30073: As part of THOR-93 (REST API signup) I have made the 2 webscripts usable without any authentication. 30074: Trivial fix to an error string. 30076: THOR-93. The account-activation.post webscript now includes the provided workflowInstanceId when identifying the ongoing workflow. 
30077: Fix Email validator to allow for example domains 30202: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 30140: Refactor of account signup workflow 30142: No longer require email address for activation step of sign-up 30143: Remove use of task query in account signup workflow 30146: thor-share project structure 30147: Buildfix (removed modules not used by THOR) 30151: Incorporate already registered use case into account signup workflow 30152: Finally resolve license loading in Eclipse based tests 30203: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 30184: Build box fix as a result of not including certain components 30206: Fix blatant merge issues 47972: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46766: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 35497: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/CLOUD1: 29723: THOR-31: MT-aware shared caches 29749: THOR-5: MT-aware immutable singletons 29762: THOR-31: MT-aware shared cache 46767: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 35507: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30237: Merged BRANCHES/DEV/THOR0 to BRANCHES/DEV/THOR1: 29532: THOR-79 - add ability to disable Lucene indexes (so that IndexInfo / IndexInfoBackup files are not created per store per tenant) 29723: THOR-31: MT-aware shared caches 29749: THOR-5: MT-aware immutable singletons 29762: THOR-31: MT-aware shared cache 47973: CONV_HEAD: CLOUD-1348 - comment back in MultiTDemoTest tests (testNonSharedGroupDeletion & testSharedGroupDeletion) 47975: CONV_HEAD: CLOUD-1348 - comment back in FeedNotifierTest.testFailedNotifications 47988: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46775: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 35531: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30449: F66: add option to configure a common "contentRootContainerPath" 30564: THOR-156: prep - consolidate runAsSystemTenant/runAsPrimaryTenant 35532: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 
30777: THOR-201: temporarily comment-out MultiTDemoTest.testDeleteArchiveAndRestoreContent (pending fix for THOR-201) 48008: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46844: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46895: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46903: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46907: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46922: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46974: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46991: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46992: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46994: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47107: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47265: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47267: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47272: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47277: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47284: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47286: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47289: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47292: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 48009: Merged DEV/CONV_V413 to DEV/CONV_HEAD 46801: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 35602: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30513: Cloud Share module 30515: Fix issue with person replication between tenants. 30516: Slight mod to email validation web script response. 30518: Quick fix for workflow id generation in sign email 30534: THOR-163: Unable to get license file 30535: Fix Thor build process. 
30536: Refine user's home site name and description 30539: THOR-96. When sending the signup email, execute the mail action asynchronously. 30542: Replace placeholder text in sign-up email 30543: Account Activation 46802: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 35643: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30544: Account activation 30545: Account activation 30550: AMP build targets 30554: THOR-94. Cloud site invitation workflow. 30555: AMP build targets - added client side resources 48011: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 47056: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47087: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47228: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47271: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47297: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47299: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47300: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47301: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47304: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47328: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47330: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 47339: (RECORD ONLY) Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 48013: Merged DEV/CONV_V413 to DEV/CONV_HEAD (committing the missing merge info for r48009) 46801: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 35602: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30513: Cloud Share module 30515: Fix issue with person replication between tenants. 30516: Slight mod to email validation web script response. 30518: Quick fix for workflow id generation in sign email 30534: THOR-163: Unable to get license file 30535: Fix Thor build process. 
30536: Refine user's home site name and description 30539: THOR-96. When sending the signup email, execute the mail action asynchronously. 30542: Replace placeholder text in sign-up email 30543: Account Activation 46802: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 35643: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30544: Account activation 30545: Account activation 30550: AMP build targets 30554: THOR-94. Cloud site invitation workflow. 30555: AMP build targets - added client side resources 48015: Merged DEV/CONV_V413 to DEV/CONV_HEAD 46841: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 35684: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30904: (RECORD ONLY) Merged HEAD to BRANCHES/DEV/THOR1: 30270 ALF-9492 Can now update task properties through the Workflow JavaScript API. ALF-10087 Fixed failing Multi-tenancy tests. 30288 ALF-9492 Can now update task properties through the Workflow JavaScript API. 30309 Fixed failing MultiTDemoTest and re-enabled. 30356 ALF-10117: JBPM workflows should be hidden. 30358 Build fix, fallout from ALF-10117 (JBPM workflows should be hidden) 30415 Added parseRootElement() method to Activiti's BPMNParseListener. 30452 ALF-10276: Reject flow didn't set bpm_assignee property properly 30563 Added tests to ensure multi-tenancy works and fixed several multi-tenancy issues in workflow. 30698 ALF-9541: Fixed HistoricTaskEntity update when TaskEntity is loaded from DB 30699 ALF-10084, ALF-10242. Fixed issues and added WorkflowService methods to get workflow instances without filtering by definition id. 30750 ALF-10197, Added the ability to auto-complete Start Tasks in Activiti. If a start task extends the bpm:activitiStartTask type or implements the bpm:endAutomatically aspect then the task will be ended as soon as the workflow instance is started. 30796 ALF-10374 Fixed failing MultiTDemoTest 30908: Add logging for failed email domain lookups: 30922: Rolling back .classpath changes to Data Model. 
30930: Basic version of site invite working 30931: THOR-172: Switch Tenant via public API 30936: Allow for repo web scripts to switch to user's default tenant via -default- tenant id: 30937: Implementation of THOR-214. There is now a new repo webscript to retrieve signup status for a given {id, key} pair. 30938: Allow dev email address to be specified in properties file: 30945: THOR-221: Add (EntityLookup) cache to AccountDAO 30946: Build fix. Renaming a test infrastructure class so that it doesn't get picked up by the ant test targets. 30955: THOR-222. Added inviter first and last name to invitation-status.get webscript. 46843: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 35694: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30997: Firefox scrollbars removed on "invitation" and "signup" pages (now using new helper method Alfresco.util.createYUIOverlay) 31001: Impl of THOR-223. Webscripts for getting pending invitations. 31002: Invite - redirect bug fixed, removed old code matching previous webscript api, email picker style fixes 31003: Addendum for THOR-223. I've added an explicit test to record the fact that pending-invitations.get to a non-existent site returns 200 and an empty collection rather than a 404. 
31004: Adding REST-client .rcq files as part of THOR-223 46848: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 35700: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 31014: (RECORD ONLY) Merged HEAD to BRANCHES/DEV/THOR1: 30999: ALF-9957 - MT: test and fix subscriptions (followers) 31020: Update invite email template to bring in line with wireframe and text 31021: Apply latest sanitized email blacklist: 31030: Fixed THOR-226 "DocLib "Detailed View" (default) does not list items - note: "Simple View" seems to be OK" 31033: THOR-228: Update aws sample file with quota config for cachingcontentstore 31036: Fixed THOR-236 "Webscript URL clash in signup" 31037: THOR-175: set and enforce per-tenant quota 31043: Fixed THOR-174 "F27: User can switch between networks they belong to" 46854: Merged from BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413 35725: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 31124: Fix for THOR-145. This check-in makes the Cloud Signup and Invitation workflows hidden within Share - users can't initiate them via "Start workflow..." 48016: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46793: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46795: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46796: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 48030: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46820: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 35657: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30556: THOR-135F103: Is e-mail address accepted by Alfresco? 30562: Fixing a typo in the email-validation FTL. It was returning invalid JSON - no opening " on a string. 30569: THOR-156: switch to secondary tenant (initially via @@login) 30571: THOR-99: Thor build 48037: Merged BRANCHES/DEV/CONV_V413 to BRANCHES/DEV/CONV_HEAD: 46821: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 35659: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30586: THOR-166. 
I've added an additional check at the start of the signup workflow that checks if the email is blocked. 30587: THOR-163: S3ContentReader fails to getObjectDetails 30592: THOR-156: switch to secondary tenant (initially via @@login) 35660: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30607: (RECORD ONLY) Merged HEAD to BRANCHES/DEV/THOR1: 30208: Remaining commits for ALF-9510 30218: Fix build - add missing files 30254: Encryption related documentation, source code comments 30392: Fix for ALF-10205 30405: Fix for ALF-10189 30406: Fix for ALF-10189: part 2 - minor update 30613: THOR-148. The cloud test target was accidentally excluding *RestTest.java. 30614: Revert some of the additional email checks in registration process 30615: Set ignore patterns for build dir in thor module 30619: Merged HEAD to BRANCHES/DEV/THOR1: 30618: Additional test classes that allow for easier testing of Notifications (emails mostly). 30622: Ensure use of System user, not system user 30624: Removed deep merge info 30625: Switch off creation of missing people, use Admin instead of System 46824: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 46828: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: 48038: Merged DEV/CONV_V413 to DEV/CONV_HEAD (ui-only) 46830: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30737: (RECORD ONLY) Merged HEAD to BRANCHES/DEV/THOR1: 30736: ALF-6706 - MT: activities not generated (for tenants) 30738: Site invite (rough version, not finished) 30741: THOR-175: Set and enforce file space quota for tenant 30752: Site invite - added some padding to user suggestion list 30753: Disabling 2 tests while I fix them. 30758: THOR-172 F63: Switch Tenant via public REST API: 30764: Tweak to Activiti integration code to prevent it from trying to create person nodes for the System user. 30766: Implementation of THOR-196. 
Inviting multiple email addresses in a single call. 30769: Re-enable MultiTDemoTest 30775: Site invite 30776: THOR-172: Switch Tenant via public API 30785: Add tenant id to account info returned in Thor responses 48043: Merged DEV/CONV_V413 to DEV/CONV_HEAD 46831: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30813: Add Eclipse project for Thor-Share module: 30815: THOR-175: Set and enforce file space quota for tenant 30817: Switch network skeleton code and minor fixes 30818: Update Share Node Browser (at least for THOR) to allow option to retrieve "storeroot" via DB query 30826: Add distribute-solr to Thor builds 48045: Merged BRANCHES/DEV/CLOUD2 to BRANCHES/DEV/CONV_V413: Merged BRANCHES/DEV/THOR1 to BRANCHES/DEV/CLOUD1: 30737: (RECORD ONLY) Merged HEAD to BRANCHES/DEV/THOR1: 30736: ALF-6706 - MT: activities not generated (for tenants) 30738: Site invite (rough version, not finished) 30741: THOR-175: Set and enforce file space quota for tenant 30752: Site invite - added some padding to user suggestion list 30753: Disabling 2 tests while I fix them. 30758: THOR-172 F63: Switch Tenant via public REST API: 30764: Tweak to Activiti integration code to prevent it from trying to create person nodes for the System user. 30766: Implementation of THOR-196. Inviting multiple email addresses in a single call. 30769: Re-enable MultiTDemoTest 30775: Site invite 30776: THOR-172: Switch Tenant via public API 30785: Add tenant id to account info returned in Thor responses git-svn-id: https://svn.alfresco.com/repos/alfresco-enterprise/alfresco/HEAD/root@48251 c4b6b30b-aa2e-2d43-bbcb-ca4b014f7261
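The MNT-279 entry in the log above swaps a linear membership scan for a binary search over the cached, sorted authority list. A minimal sketch of that idea, with hypothetical names (the actual Alfresco fix lives in the authority service internals and is not shown here):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class AuthoritySearchSketch
{
    /**
     * Membership test via binary search. Precondition: the list must already
     * be sorted in natural String order, as a cached authority list can be.
     * Collections.binarySearch returns a non-negative index on a hit and a
     * negative insertion point on a miss, so this is O(log n) per lookup
     * instead of an O(n) scan over an "astronomical" number of authorities.
     */
    public static boolean containsAuthority(List<String> sortedAuthorities, String authority)
    {
        return Collections.binarySearch(sortedAuthorities, authority) >= 0;
    }

    public static void main(String[] args)
    {
        // Sorted in natural order: uppercase sorts before lowercase in ASCII
        List<String> authorities = Arrays.asList("GROUP_A", "GROUP_B", "user1", "user2");
        System.out.println(containsAuthority(authorities, "user1"));   // prints true
        System.out.println(containsAuthority(authorities, "user999")); // prints false
    }
}
```

The trade-off is that the cached list must be kept sorted; sorting once on cache population amortizes to a clear win when the same group is probed many times, which is the profiling hit the commit describes.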
2341 lines
86 KiB
Java
/*
 * Copyright (C) 2005-2010 Alfresco Software Limited.
 *
 * This file is part of Alfresco
 *
 * Alfresco is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Alfresco is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
 */
package org.alfresco.repo.domain.schema;

import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.UnsupportedEncodingException;
import java.io.Writer;
import java.sql.Array;
import java.sql.Blob;
import java.sql.CallableStatement;
import java.sql.Clob;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.NClob;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLClientInfoException;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.sql.SQLXML;
import java.sql.Savepoint;
import java.sql.Statement;
import java.sql.Struct;
import java.sql.Types;
import java.text.MessageFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.Executor;

import javax.sql.DataSource;

import org.activiti.engine.ProcessEngine;
import org.activiti.engine.ProcessEngineConfiguration;
import org.alfresco.error.AlfrescoRuntimeException;
import org.alfresco.ibatis.SerializableTypeHandler;
import org.alfresco.repo.admin.patch.Patch;
import org.alfresco.repo.admin.patch.impl.SchemaUpgradeScriptPatch;
import org.alfresco.repo.content.filestore.FileContentWriter;
import org.alfresco.repo.domain.PropertyValue;
import org.alfresco.repo.domain.hibernate.dialect.AlfrescoOracle9Dialect;
import org.alfresco.repo.domain.hibernate.dialect.AlfrescoSQLServerDialect;
import org.alfresco.repo.domain.hibernate.dialect.AlfrescoSybaseAnywhereDialect;
import org.alfresco.service.cmr.repository.ContentWriter;
import org.alfresco.service.descriptor.DescriptorService;
import org.alfresco.util.DatabaseMetaDataHelper;
import org.alfresco.util.LogUtil;
import org.alfresco.util.TempFileProvider;
import org.alfresco.util.schemacomp.ExportDb;
import org.alfresco.util.schemacomp.MultiFileDumper;
import org.alfresco.util.schemacomp.MultiFileDumper.DbToXMLFactory;
import org.alfresco.util.schemacomp.Result;
import org.alfresco.util.schemacomp.Results;
import org.alfresco.util.schemacomp.SchemaComparator;
import org.alfresco.util.schemacomp.XMLToSchema;
import org.alfresco.util.schemacomp.model.Schema;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;
import org.hibernate.cfg.Environment;
import org.hibernate.connection.UserSuppliedConnectionProvider;
import org.hibernate.dialect.DB2Dialect;
import org.hibernate.dialect.DerbyDialect;
import org.hibernate.dialect.Dialect;
import org.hibernate.dialect.HSQLDialect;
import org.hibernate.dialect.MySQL5Dialect;
import org.hibernate.dialect.MySQLDialect;
import org.hibernate.dialect.MySQLInnoDBDialect;
import org.hibernate.dialect.Oracle10gDialect;
import org.hibernate.dialect.Oracle9Dialect;
import org.hibernate.dialect.Oracle9iDialect;
import org.hibernate.dialect.OracleDialect;
import org.hibernate.dialect.PostgreSQLDialect;
import org.hibernate.engine.ActionQueue;
import org.hibernate.tool.hbm2ddl.DatabaseMetadata;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationEvent;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.core.io.support.ResourcePatternResolver;
import org.springframework.extensions.surf.util.AbstractLifecycleBean;
import org.springframework.extensions.surf.util.I18NUtil;
import org.springframework.orm.hibernate3.LocalSessionFactoryBean;

/**
 * Bootstraps the schema and schema update. The schema is considered missing if the applied patch table
 * is not present, and the schema is considered empty if the applied patch table is empty.
 *
 * @author Derek Hulley
 */
public class SchemaBootstrap extends AbstractLifecycleBean
{
    /** The placeholder for the configured <code>Dialect</code> class name: <b>${db.script.dialect}</b> */
    private static final String PLACEHOLDER_DIALECT = "\\$\\{db\\.script\\.dialect\\}";

    /** The global property containing the default batch size used by --FOREACH */
    private static final String PROPERTY_DEFAULT_BATCH_SIZE = "system.upgrade.default.batchsize";

    private static final String MSG_DIALECT_USED = "schema.update.msg.dialect_used";
    private static final String MSG_DATABASE_USED = "schema.update.msg.database_used";
    private static final String MSG_BYPASSING_SCHEMA_UPDATE = "schema.update.msg.bypassing";
    private static final String MSG_NORMALIZED_SCHEMA = "schema.update.msg.normalized_schema";
    private static final String MSG_NO_CHANGES = "schema.update.msg.no_changes";
    private static final String MSG_ALL_STATEMENTS = "schema.update.msg.all_statements";
    private static final String MSG_EXECUTING_GENERATED_SCRIPT = "schema.update.msg.executing_generated_script";
    private static final String MSG_EXECUTING_COPIED_SCRIPT = "schema.update.msg.executing_copied_script";
    private static final String MSG_EXECUTING_STATEMENT = "schema.update.msg.executing_statement";
    private static final String MSG_OPTIONAL_STATEMENT_FAILED = "schema.update.msg.optional_statement_failed";
    private static final String WARN_DIALECT_UNSUPPORTED = "schema.update.warn.dialect_unsupported";
    private static final String WARN_DIALECT_HSQL = "schema.update.warn.dialect_hsql";
    private static final String WARN_DIALECT_DERBY = "schema.update.warn.dialect_derby";
    private static final String ERR_FORCED_STOP = "schema.update.err.forced_stop";
    private static final String ERR_DIALECT_SHOULD_USE = "schema.update.err.dialect_should_use";
    private static final String ERR_MULTIPLE_SCHEMAS = "schema.update.err.found_multiple";
    private static final String ERR_PREVIOUS_FAILED_BOOTSTRAP = "schema.update.err.previous_failed";
    private static final String ERR_STATEMENT_FAILED = "schema.update.err.statement_failed";
    private static final String ERR_UPDATE_FAILED = "schema.update.err.update_failed";
    private static final String ERR_VALIDATION_FAILED = "schema.update.err.validation_failed";
    private static final String ERR_SCRIPT_NOT_RUN = "schema.update.err.update_script_not_run";
    private static final String ERR_SCRIPT_NOT_FOUND = "schema.update.err.script_not_found";
    private static final String ERR_STATEMENT_INCLUDE_BEFORE_SQL = "schema.update.err.statement_include_before_sql";
    private static final String ERR_STATEMENT_VAR_ASSIGNMENT_BEFORE_SQL = "schema.update.err.statement_var_assignment_before_sql";
    private static final String ERR_STATEMENT_VAR_ASSIGNMENT_FORMAT = "schema.update.err.statement_var_assignment_format";
    private static final String ERR_STATEMENT_TERMINATOR = "schema.update.err.statement_terminator";
    private static final String DEBUG_SCHEMA_COMP_NO_REF_FILE = "system.schema_comp.debug.no_ref_file";
    private static final String INFO_SCHEMA_COMP_ALL_OK = "system.schema_comp.info.all_ok";
    private static final String WARN_SCHEMA_COMP_PROBLEMS_FOUND = "system.schema_comp.warn.problems_found";
    private static final String DEBUG_SCHEMA_COMP_TIME_TAKEN = "system.schema_comp.debug.time_taken";

    public static final int DEFAULT_LOCK_RETRY_COUNT = 24;
    public static final int DEFAULT_LOCK_RETRY_WAIT_SECONDS = 5;

    public static final int DEFAULT_MAX_STRING_LENGTH = 1024;

    private static volatile int maxStringLength = DEFAULT_MAX_STRING_LENGTH;
    private Dialect dialect;

    private ResourcePatternResolver rpr = new PathMatchingResourcePatternResolver(this.getClass().getClassLoader());

    /**
     * @see PropertyValue#DEFAULT_MAX_STRING_LENGTH
     */
    private static final void setMaxStringLength(int length)
    {
        if (length < 1024)
        {
            throw new AlfrescoRuntimeException("The maximum string length must be >= 1024 characters.");
        }
        SchemaBootstrap.maxStringLength = length;
    }

    /**
     * @return Returns the maximum number of characters that a string field can be
     */
    public static final int getMaxStringLength()
    {
        return SchemaBootstrap.maxStringLength;
    }

    /**
     * Truncates or returns a string that will fit into the string columns in the schema. Text fields can
     * either cope with arbitrarily long text fields or have the default limit, {@link #DEFAULT_MAX_STRING_LENGTH}.
     *
     * @param value the string to check
     * @return Returns a string that is short enough for {@link SchemaBootstrap#getMaxStringLength()}
     *
     * @since 3.2
     */
    public static final String trimStringForTextFields(String value)
    {
        if (value != null && value.length() > maxStringLength)
        {
            return value.substring(0, maxStringLength);
        }
        else
        {
            return value;
        }
    }

    /**
     * Provide a reference to the DescriptorService, used to provide information
     * about the repository such as the database schema version number.
     *
     * @param descriptorService the descriptorService to set
     */
    public void setDescriptorService(DescriptorService descriptorService)
    {
        this.descriptorService = descriptorService;
    }

    /**
     * Sets the previously auto-detected Hibernate dialect.
     *
     * @param dialect the dialect
     */
    public void setDialect(Dialect dialect)
    {
        this.dialect = dialect;
    }

    private static Log logger = LogFactory.getLog(SchemaBootstrap.class);

    private DescriptorService descriptorService;
    private DataSource dataSource;
    private LocalSessionFactoryBean localSessionFactory;
    private String schemaOuputFilename;
    private boolean updateSchema;
    private boolean stopAfterSchemaBootstrap;
    private List<String> preCreateScriptUrls;
    private List<String> postCreateScriptUrls;
    private List<String> schemaReferenceUrls;
    private List<SchemaUpgradeScriptPatch> validateUpdateScriptPatches;
    private List<SchemaUpgradeScriptPatch> preUpdateScriptPatches;
    private List<SchemaUpgradeScriptPatch> postUpdateScriptPatches;
    private int schemaUpdateLockRetryCount = DEFAULT_LOCK_RETRY_COUNT;
    private int schemaUpdateLockRetryWaitSeconds = DEFAULT_LOCK_RETRY_WAIT_SECONDS;
    private int maximumStringLength;
    private Properties globalProperties;

    private ThreadLocal<StringBuilder> executedStatementsThreadLocal = new ThreadLocal<StringBuilder>();

    public SchemaBootstrap()
    {
        preCreateScriptUrls = new ArrayList<String>(1);
        postCreateScriptUrls = new ArrayList<String>(1);
        validateUpdateScriptPatches = new ArrayList<SchemaUpgradeScriptPatch>(4);
        preUpdateScriptPatches = new ArrayList<SchemaUpgradeScriptPatch>(4);
        postUpdateScriptPatches = new ArrayList<SchemaUpgradeScriptPatch>(4);
        maximumStringLength = -1;
        globalProperties = new Properties();
    }

    public void setDataSource(DataSource dataSource)
    {
        this.dataSource = dataSource;
    }

    public void setLocalSessionFactory(LocalSessionFactoryBean localSessionFactory)
    {
        this.localSessionFactory = localSessionFactory;
    }

    public LocalSessionFactoryBean getLocalSessionFactory()
    {
        return localSessionFactory;
    }

    /**
     * Set this to output the full database creation script.
     *
     * @param schemaOuputFilename the name of a file to dump the schema to, or null to ignore
     */
    public void setSchemaOuputFilename(String schemaOuputFilename)
    {
        this.schemaOuputFilename = schemaOuputFilename;
    }

    /**
     * Set whether to modify the schema or not. Either way, the schema will be validated.
     *
     * @param updateSchema true to update and validate the schema, otherwise false to just
     *      validate the schema. Default is <b>true</b>.
     */
    public void setUpdateSchema(boolean updateSchema)
    {
        this.updateSchema = updateSchema;
    }

    /**
     * Set whether this component should terminate the bootstrap process after running all the
     * usual checks and scripts. This has the additional effect of dumping a final schema
     * structure file just before exiting.
     * <p>
     * <b>WARNING: </b>USE FOR DEBUG AND UPGRADE TESTING ONLY
     *
     * @param stopAfterSchemaBootstrap <tt>true</tt> to terminate (with exception) after
     *      running all the usual schema updates and checks.
     */
    public void setStopAfterSchemaBootstrap(boolean stopAfterSchemaBootstrap)
    {
        this.stopAfterSchemaBootstrap = stopAfterSchemaBootstrap;
    }

    /**
     * Set the scripts that must be executed <b>before</b> the schema has been created.
     *
     * @param preCreateScriptUrls file URLs
     *
     * @see #PLACEHOLDER_DIALECT
     */
    public void setPreCreateScriptUrls(List<String> preCreateScriptUrls)
    {
        this.preCreateScriptUrls = preCreateScriptUrls;
    }

    /**
     * Set the scripts that must be executed <b>after</b> the schema has been created.
     *
     * @param postCreateScriptUrls file URLs
     *
     * @see #PLACEHOLDER_DIALECT
     */
    public void setPostCreateScriptUrls(List<String> postCreateScriptUrls)
    {
        this.postCreateScriptUrls = postCreateScriptUrls;
    }

    /**
     * Specifies the schema reference files that will be used to validate the repository
     * schema whenever changes have been made. The database dialect placeholder will be
     * resolved so that the correct reference files are loaded for the current database
     * type (e.g. PostgreSQL).
     *
     * @param schemaReferenceUrls the schemaReferenceUrls to set
     * @see #PLACEHOLDER_DIALECT
     */
    public void setSchemaReferenceUrls(List<String> schemaReferenceUrls)
    {
        this.schemaReferenceUrls = schemaReferenceUrls;
    }

    /**
     * Set the schema script patches that must have been applied. These will not be
     * applied to the database. These can be used where the script <u>cannot</u> be
     * applied automatically or where a particular upgrade path is no longer supported.
     * For example, at version 3.0, the upgrade scripts for version 1.4 may be considered
     * unsupported - this doesn't prevent the manual application of the scripts, though.
     *
     * @param scriptPatches a list of schema patches to check
     */
    public void setValidateUpdateScriptPatches(List<SchemaUpgradeScriptPatch> scriptPatches)
    {
        this.validateUpdateScriptPatches = scriptPatches;
    }

    /**
     * Set the schema script patches that may be applied prior to the auto-update process.
     *
     * @param scriptPatches a list of schema patches to check
     */
    public void setPreUpdateScriptPatches(List<SchemaUpgradeScriptPatch> scriptPatches)
    {
        this.preUpdateScriptPatches = scriptPatches;
    }

    /**
     * Set the schema script patches that may be applied after the auto-update process.
     *
     * @param scriptPatches a list of schema patches to check
     */
    public void setPostUpdateScriptPatches(List<SchemaUpgradeScriptPatch> scriptPatches)
    {
        this.postUpdateScriptPatches = scriptPatches;
    }

    /**
     * Set the number of times that the DB must be checked for the presence of the table
     * indicating that a schema change is in progress.
     *
     * @param schemaUpdateLockRetryCount the number of times to retry (default 24)
     */
    public void setSchemaUpdateLockRetryCount(int schemaUpdateLockRetryCount)
    {
        this.schemaUpdateLockRetryCount = schemaUpdateLockRetryCount;
    }

    /**
     * Set the wait time (seconds) between checks for the schema update lock.
     *
     * @param schemaUpdateLockRetryWaitSeconds the number of seconds between checks (default 5 seconds)
     */
    public void setSchemaUpdateLockRetryWaitSeconds(int schemaUpdateLockRetryWaitSeconds)
    {
        this.schemaUpdateLockRetryWaitSeconds = schemaUpdateLockRetryWaitSeconds;
    }

    /**
     * Optionally override the system's default maximum string length. Some databases have
     * limitations on how long the <b>string_value</b> columns can be while others do not.
     * Some parts of the persistence have alternatives when the string values exceed this
     * length while others do not. Either way, it is possible to adjust the text column sizes
     * and adjust this value manually to override the default associated with the database
     * being used.
     * <p>
     * The system - as of V2.1.2 - will attempt to adjust the maximum string length size
     * automatically and therefore this method is not normally required. But it is possible
     * to manually override the value if, for example, the system doesn't guess the correct
     * maximum length or if the dialect is not explicitly catered for.
     * <p>
     * All negative or zero values are ignored and the system defaults to its best guess based
     * on the dialect being used.
     *
     * @param maximumStringLength the maximum length of the <b>string_value</b> columns
     */
    public void setMaximumStringLength(int maximumStringLength)
    {
        if (maximumStringLength > 0)
        {
            this.maximumStringLength = maximumStringLength;
        }
    }

    /**
     * Get the limit for the hibernate executions queue.
     */
    public int getHibernateMaxExecutions()
    {
        return ActionQueue.getMAX_EXECUTIONS_SIZE();
    }

    /**
     * Set the limit for the hibernate executions queue.
     * Less than zero always uses event amalgamation.
     */
    public void setHibernateMaxExecutions(int hibernateMaxExecutions)
    {
        ActionQueue.setMAX_EXECUTIONS_SIZE(hibernateMaxExecutions);
    }

    /**
     * Sets the properties map from which we look up some configuration settings.
     *
     * @param globalProperties the global properties
     */
    public void setGlobalProperties(Properties globalProperties)
    {
        this.globalProperties = globalProperties;
    }

    private SessionFactory getSessionFactory()
    {
        return (SessionFactory) localSessionFactory.getObject();
    }

    private static class NoSchemaException extends Exception
    {
        private static final long serialVersionUID = 5574280159910824660L;
    }

    /**
     * Used to indicate a forced stop of the bootstrap.
     *
     * @see SchemaBootstrap#setStopAfterSchemaBootstrap(boolean)
     *
     * @author Derek Hulley
     * @since 3.1.1
     */
    private static class BootstrapStopException extends RuntimeException
    {
        private static final long serialVersionUID = 4250016675538442181L;
        private BootstrapStopException()
        {
            super(I18NUtil.getMessage(ERR_FORCED_STOP));
        }
    }

    /**
     * Count applied patches. This fails if multiple applied patch tables are found,
     * which normally indicates that the schema view needs to be limited.
     *
     * @param cfg           The Hibernate config
     * @param connection    a valid database connection
     * @return Returns the number of applied patches
     * @throws NoSchemaException if the table of applied patches can't be found
     */
    private int countAppliedPatches(Configuration cfg, Connection connection) throws Exception
    {
        String defaultSchema = DatabaseMetaDataHelper.getSchema(connection);
        if (defaultSchema != null && defaultSchema.length() == 0)
        {
            defaultSchema = null;
        }
        String defaultCatalog = cfg.getProperty("hibernate.default_catalog");
        if (defaultCatalog != null && defaultCatalog.length() == 0)
        {
            defaultCatalog = null;
        }
        DatabaseMetaData dbMetadata = connection.getMetaData();

        ResultSet tableRs = dbMetadata.getTables(defaultCatalog, defaultSchema, "%", null);
        boolean newPatchTable = false;
        boolean oldPatchTable = false;
        try
        {
            boolean multipleSchemas = false;
            while (tableRs.next())
            {
                String tableName = tableRs.getString("TABLE_NAME");
                if (tableName.equalsIgnoreCase("applied_patch"))
                {
                    if (oldPatchTable || newPatchTable)
                    {
                        // Found earlier
                        multipleSchemas = true;
                    }
                    oldPatchTable = true;
                }
                else if (tableName.equalsIgnoreCase("alf_applied_patch"))
                {
                    if (oldPatchTable || newPatchTable)
                    {
                        // Found earlier
                        multipleSchemas = true;
                    }
                    newPatchTable = true;
                }
            }
            // We go through all the tables so that multiple visible schemas are detected
            if (multipleSchemas)
            {
                throw new AlfrescoRuntimeException(ERR_MULTIPLE_SCHEMAS);
            }
        }
        finally
        {
            try { tableRs.close(); } catch (Throwable e) { e.printStackTrace(); }
        }

        if (newPatchTable)
        {
            Statement stmt = connection.createStatement();
            try
            {
                ResultSet rs = stmt.executeQuery("select count(id) from alf_applied_patch");
                rs.next();
                int count = rs.getInt(1);
                return count;
            }
            catch (SQLException e)
            {
                // This should work at least and is probably an indication of the user viewing multiple schemas
                throw new AlfrescoRuntimeException(ERR_MULTIPLE_SCHEMAS);
            }
            finally
            {
                try { stmt.close(); } catch (Throwable e) {}
            }
        }
        else if (oldPatchTable)
        {
            // found the old style table name
            Statement stmt = connection.createStatement();
            try
            {
                ResultSet rs = stmt.executeQuery("select count(id) from applied_patch");
                rs.next();
                int count = rs.getInt(1);
                return count;
            }
            finally
            {
                try { stmt.close(); } catch (Throwable e) {}
            }
        }
        else
        {
            // The applied patches table is not present
            throw new NoSchemaException();
        }
    }

    /**
     * @return Returns the name of the applied patch table, or <tt>null</tt> if the table doesn't exist
     */
    private String getAppliedPatchTableName(Connection connection) throws Exception
    {
        Statement stmt = connection.createStatement();
        try
        {
            stmt.executeQuery("select * from alf_applied_patch");
            return "alf_applied_patch";
        }
        catch (Throwable e)
        {
            // we'll try another table name
        }
        finally
        {
            try { stmt.close(); } catch (Throwable e) {}
        }
        // for pre-1.4 databases, the table was named differently
        stmt = connection.createStatement();
        try
        {
            stmt.executeQuery("select * from applied_patch");
            return "applied_patch";
        }
        catch (Throwable e)
        {
            // It is not there
            return null;
        }
        finally
        {
            try { stmt.close(); } catch (Throwable e) {}
        }
    }

    /**
     * @return Returns <tt>true</tt> if the given patch was successfully applied
     */
    private boolean didPatchSucceed(Connection connection, String patchId, boolean alternative) throws Exception
    {
        String patchTableName = getAppliedPatchTableName(connection);
        if (patchTableName == null)
        {
            // Table doesn't exist, yet
            return false;
        }
        Statement stmt = connection.createStatement();
        try
        {
            ResultSet rs = stmt.executeQuery("select succeeded, was_executed from " + patchTableName + " where id = '" + patchId + "'");
            if (!rs.next())
            {
                return false;
            }
            boolean succeeded = rs.getBoolean(1);
            boolean wasExecuted = rs.getBoolean(2);

            if (alternative)
            {
                return succeeded && wasExecuted;
            }
            else
            {
                return succeeded;
            }
        }
        finally
        {
            try { stmt.close(); } catch (Throwable e) {}
        }
    }

    /**
     * Finds the <b>version.properties</b> file and determines the installed <b>version.schema</b>.<br>
     * The only way to determine the original installed schema number is by querying for the minimum value in
     * <b>alf_applied_patch.applied_to_schema</b>. This might not work if an upgrade is attempted straight from
     * Alfresco v1.0!
     *
     * @return the installed schema number or <tt>-1</tt> if the installation is new.
     */
    private int getInstalledSchemaNumber(Connection connection) throws Exception
    {
        Statement stmt = connection.createStatement();
        try
        {
            ResultSet rs = stmt.executeQuery(
                    "select min(applied_to_schema) from alf_applied_patch where applied_to_schema > -1");
            if (!rs.next())
            {
                // Nothing in the table
                return -1;
            }
            if (rs.getObject(1) == null)
            {
                // Nothing in the table
                return -1;
            }
            int installedSchema = rs.getInt(1);
            return installedSchema;
        }
        finally
        {
            try { stmt.close(); } catch (Throwable e) {}
        }
    }

    private static class LockFailedException extends Exception
    {
        private static final long serialVersionUID = -6676398230191205456L;
    }

    /**
     * Records that the bootstrap process has started.
     */
    private synchronized void setBootstrapStarted(Connection connection) throws Exception
    {
        // Create the marker table
        Statement stmt = connection.createStatement();
        try
        {
            stmt.executeUpdate("create table alf_bootstrap_lock (charval CHAR(1) NOT NULL)");
            // Success
            return;
        }
        catch (Throwable e)
        {
            // We throw a well-known exception to be handled by retrying code if required
            throw new LockFailedException();
        }
        finally
        {
            try { stmt.close(); } catch (Throwable e) {}
        }
    }

    /**
     * Records that the bootstrap process has finished.
     */
    private void setBootstrapCompleted(Connection connection) throws Exception
    {
        // Drop the marker table
        Statement stmt = connection.createStatement();
        try
        {
            stmt.executeUpdate("drop table alf_bootstrap_lock");

            // from Thor
            executedStatementsThreadLocal.set(null);
        }
        catch (Throwable e)
        {
            // Table exists
            throw AlfrescoRuntimeException.create(ERR_PREVIOUS_FAILED_BOOTSTRAP);
        }
        finally
        {
            try { stmt.close(); } catch (Throwable e) {}
        }
    }

    /**
     * Builds the schema from scratch or applies the necessary patches to the schema.
     */
    private boolean updateSchema(Configuration cfg, Session session, Connection connection) throws Exception
    {
        boolean create = false;
        try
        {
            countAppliedPatches(cfg, connection);
        }
        catch (NoSchemaException e)
        {
            create = true;
        }
        // Get the dialect
        final Dialect dialect = Dialect.getDialect(cfg.getProperties());
        String dialectStr = dialect.getClass().getSimpleName();

        // Initialise Activiti DB, using an unclosable connection
        if (create)
        {
            // Activiti DB updates are performed as patches in alfresco; only give
            // control to activiti when creating a new schema.
            initialiseActivitiDBSchema(new UnclosableConnection(connection));
        }

        if (create)
        {
            long start = System.currentTimeMillis();

            // execute pre-create scripts (not patches)
            for (String scriptUrl : this.preCreateScriptUrls)
            {
                executeScriptUrl(cfg, connection, scriptUrl);
            }
            // Build and execute changes generated by Hibernate
            File tempFile = null;
            Writer writer = null;
            try
            {
                DatabaseMetadata metadata = new DatabaseMetadata(connection, dialect);
                String[] sqls = cfg.generateSchemaUpdateScript(dialect, metadata);
                if (sqls.length > 0)
                {
                    tempFile = TempFileProvider.createTempFile("AlfrescoSchema-" + dialectStr + "-Update-", ".sql");
                    writer = new BufferedWriter(new FileWriter(tempFile));
                    for (String sql : sqls)
                    {
                        writer.append(sql);
                        writer.append(";\n");
                    }
                    try { writer.close(); } catch (Throwable e) {}
                    executeScriptFile(cfg, connection, tempFile, null);
                }
            }
            finally
            {
                if (writer != null)
                {
                    try { writer.close(); } catch (Throwable e) {}
                }
            }
            // execute post-create scripts (not patches)
            for (String scriptUrl : this.postCreateScriptUrls)
            {
                executeScriptUrl(cfg, connection, scriptUrl);
            }

            if (logger.isInfoEnabled())
            {
                logger.info("Create scripts executed in " + (System.currentTimeMillis() - start) + " ms");
            }
        }
        else
        {
            // Check for scripts that must have been run
            checkSchemaPatchScripts(cfg, connection, validateUpdateScriptPatches, false);
            // Execute any pre-auto-update scripts
            checkSchemaPatchScripts(cfg, connection, preUpdateScriptPatches, true);

            // Build and execute changes generated by Hibernate
            File tempFile = null;
            Writer writer = null;
            try
            {
                DatabaseMetadata metadata = new DatabaseMetadata(connection, dialect);
                String[] sqls = cfg.generateSchemaUpdateScript(dialect, metadata);
                if (sqls.length > 0)
                {
                    tempFile = TempFileProvider.createTempFile("AlfrescoSchema-" + dialectStr + "-Update-", ".sql");
                    writer = new BufferedWriter(new FileWriter(tempFile));
                    for (String sql : sqls)
                    {
                        writer.append(sql);
                        writer.append(";\n");
                    }
                }
            }
            finally
            {
                if (writer != null)
                {
                    try { writer.close(); } catch (Throwable e) {}
                }
            }
            // execute if there were changes raised by Hibernate
            if (tempFile != null)
            {
                executeScriptFile(cfg, connection, tempFile, null);
            }

            // Execute any post-auto-update scripts
            checkSchemaPatchScripts(cfg, connection, postUpdateScriptPatches, true);
        }

        return create;
    }

    /**
     * Initialises the Activiti DB schema; if it is not present, it is created.
     *
     * @param connection the connection to use to initialise the DB schema
     */
    private void initialiseActivitiDBSchema(Connection connection)
    {
        // create instance of activiti engine to initialise schema
        ProcessEngine engine = null;
        ProcessEngineConfiguration engineConfig = ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();
        try
        {
            // build the engine
            engine = engineConfig.setDataSource(dataSource).
                setDatabaseSchemaUpdate("none").
                setProcessEngineName("activitiBootstrapEngine").
                setHistory("full").
                setJobExecutorActivate(false).
                buildProcessEngine();

            // create or upgrade the DB schema
            engine.getManagementService().databaseSchemaUpgrade(connection, null, DatabaseMetaDataHelper.getSchema(connection));
        }
        finally
        {
            if (engine != null)
            {
                // close the process engine
                engine.close();
            }
        }
    }

    /**
     * Check that the necessary scripts have been executed against the database.
     */
    private void checkSchemaPatchScripts(
            Configuration cfg,
            Connection connection,
            List<SchemaUpgradeScriptPatch> scriptPatches,
            boolean apply) throws Exception
    {
        // first check if there have been any applied patches
        int appliedPatchCount = countAppliedPatches(cfg, connection);
        if (appliedPatchCount == 0)
        {
            // This is a new schema, so upgrade scripts are irrelevant
            // and patches will not have been applied yet
            return;
        }
        // Retrieve the first installed schema number
        int installedSchema = getInstalledSchemaNumber(connection);

        nextPatch:
        for (SchemaUpgradeScriptPatch patch : scriptPatches)
        {
            final String patchId = patch.getId();
            final String scriptUrl = patch.getScriptUrl();

            // Check if any of the alternative patches were executed
            List<Patch> alternatives = patch.getAlternatives();
            for (Patch alternativePatch : alternatives)
            {
                String alternativePatchId = alternativePatch.getId();
                boolean alternativeSucceeded = didPatchSucceed(connection, alternativePatchId, true);
                if (alternativeSucceeded)
                {
                    continue nextPatch;
                }
            }

            // check if the script was successfully executed
            boolean wasSuccessfullyApplied = didPatchSucceed(connection, patchId, false);
            if (wasSuccessfullyApplied)
            {
                // Either the patch was executed before or the system was bootstrapped
                // with the patch bean present.
                continue;
            }
            else if (!patch.applies(installedSchema))
            {
                // Patch does not apply to the installed schema number
                continue;
            }
            else if (!apply)
            {
                // the script was not run and may not be run automatically
                throw AlfrescoRuntimeException.create(ERR_SCRIPT_NOT_RUN, scriptUrl);
            }
            // it wasn't run and it can be run now
            executeScriptUrl(cfg, connection, scriptUrl);
        }
    }

    private void executeScriptUrl(Configuration cfg, Connection connection, String scriptUrl) throws Exception
    {
        Dialect dialect = Dialect.getDialect(cfg.getProperties());
        String dialectStr = dialect.getClass().getSimpleName();
        InputStream scriptInputStream = getScriptInputStream(dialect.getClass(), scriptUrl);
        // check that it exists
        if (scriptInputStream == null)
        {
            throw AlfrescoRuntimeException.create(ERR_SCRIPT_NOT_FOUND, scriptUrl);
        }
        // write the script to a temp location for future and failure reference
        File tempFile = null;
        try
        {
            tempFile = TempFileProvider.createTempFile("AlfrescoSchema-" + dialectStr + "-Update-", ".sql");
            ContentWriter writer = new FileContentWriter(tempFile);
            writer.putContent(scriptInputStream);
        }
        finally
        {
            try { scriptInputStream.close(); } catch (Throwable e) {} // usually a duplicate close
        }
        // now execute it
        String dialectScriptUrl = scriptUrl.replaceAll(PLACEHOLDER_DIALECT, dialect.getClass().getName());
        // Replace the script placeholders
        executeScriptFile(cfg, connection, tempFile, dialectScriptUrl);
    }
    /**
     * Replaces the dialect placeholder in the resource URL and attempts to find a file for
     * it. If not found, the dialect hierarchy will be walked until a compatible resource is
     * found. This makes it possible to have resources that are generic to all dialects.
     *
     * @return the Resource if found, otherwise null
     */
    private Resource getDialectResource(Class<?> dialectClass, String resourceUrl)
    {
        // replace the dialect placeholder
        String dialectResourceUrl = resolveDialectUrl(dialectClass, resourceUrl);
        // get a handle on the resource
        Resource resource = rpr.getResource(dialectResourceUrl);
        if (!resource.exists())
        {
            // it wasn't found. Get the superclass of the dialect and try again
            Class<?> superClass = dialectClass.getSuperclass();
            if (Dialect.class.isAssignableFrom(superClass))
            {
                // we still have a Dialect - try again
                return getDialectResource(superClass, resourceUrl);
            }
            else
            {
                // we have exhausted all options
                return null;
            }
        }
        else
        {
            // we have a handle to it
            return resource;
        }
    }
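    /*
     * Illustrative (hypothetical) lookup walk for getDialectResource above, assuming the
     * standard Hibernate hierarchy in which MySQLInnoDBDialect extends MySQLDialect; the
     * path and file name below are examples only:
     *
     *   resourceUrl = "classpath:alfresco/dbscripts/create/${db.script.dialect}/mySchema.sql"
     *
     *   1. classpath:alfresco/dbscripts/create/org.hibernate.dialect.MySQLInnoDBDialect/mySchema.sql
     *   2. classpath:alfresco/dbscripts/create/org.hibernate.dialect.MySQLDialect/mySchema.sql
     *   ... and so on up the superclass chain, returning null once the superclass is no
     *   longer a Dialect and no resource has been found.
     */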
    /**
     * Takes a resource URL containing the {@link SchemaBootstrap#PLACEHOLDER_DIALECT dialect placeholder text}
     * and substitutes the placeholder with the name of the given dialect's class.
     * <p/>
     * For example:
     * <pre>
     *   resolveDialectUrl(MySQLInnoDBDialect.class, "classpath:alfresco/db/${db.script.dialect}/myfile.xml")
     * </pre>
     * would give the following String:
     * <pre>
     *   classpath:alfresco/db/org.hibernate.dialect.MySQLInnoDBDialect/myfile.xml
     * </pre>
     */
    private String resolveDialectUrl(Class<?> dialectClass, String resourceUrl)
    {
        return resourceUrl.replaceAll(PLACEHOLDER_DIALECT, dialectClass.getName());
    }
    /**
     * Replaces the dialect placeholder in the script URL and attempts to find a file for
     * it. If not found, the dialect hierarchy will be walked until a compatible script is
     * found. This makes it possible to have scripts that are generic to all dialects.
     *
     * @return Returns an input stream onto the script, otherwise null
     */
    private InputStream getScriptInputStream(Class<?> dialectClazz, String scriptUrl) throws Exception
    {
        Resource resource = getDialectResource(dialectClazz, scriptUrl);
        // guard against a missing resource so that callers get the documented null
        // rather than a NullPointerException
        return resource == null ? null : resource.getInputStream();
    }
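    /*
     * For reference, a sketch of a (hypothetical) upgrade script using the directives
     * understood by executeScriptFile/executeScriptUrl below; the table, index and
     * property names are illustrative only:
     *
     *   --ASSIGN:maxNodeId=max_id
     *   SELECT MAX(id) AS max_id FROM alf_node;
     *
     *   --FOREACH alf_node.id system.upgrade.myfix.batchsize
     *   --BEGIN TXN
     *   UPDATE alf_node SET version = version
     *     WHERE id >= ${LOWERBOUND} AND id <= ${UPPERBOUND};
     *   --END TXN
     *
     *   -- A statement suffixed with "(optional)" may fail without aborting the script:
     *   DROP INDEX idx_obsolete; (optional)
     *
     *   --INCLUDE:classpath:alfresco/dbscripts/upgrade/${db.script.dialect}/common.sql
     */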
    /**
     * @param cfg        the Hibernate configuration
     * @param connection the DB connection to use
     * @param scriptFile the file containing the statements
     * @param scriptUrl  the URL of the script to report. If this is null, the script
     *                   is assumed to have been auto-generated.
     */
    private void executeScriptFile(
            Configuration cfg,
            Connection connection,
            File scriptFile,
            String scriptUrl) throws Exception
    {
        final Dialect dialect = Dialect.getDialect(cfg.getProperties());

        StringBuilder executedStatements = executedStatementsThreadLocal.get();
        if (executedStatements == null)
        {
            // Validate the schema, pre-upgrade
            validateSchema("Alfresco-{0}-Validation-Pre-Upgrade-{1}-");

            dumpSchema("pre-upgrade");

            // There is no lock at this stage. This process can fall out if the lock can't be applied.
            setBootstrapStarted(connection);
            executedStatements = new StringBuilder(8094);
            executedStatementsThreadLocal.set(executedStatements);
        }
        if (scriptUrl == null)
        {
            LogUtil.info(logger, MSG_EXECUTING_GENERATED_SCRIPT, scriptFile);
        }
        else
        {
            LogUtil.info(logger, MSG_EXECUTING_COPIED_SCRIPT, scriptFile, scriptUrl);
        }

        InputStream scriptInputStream = new FileInputStream(scriptFile);
        BufferedReader reader = new BufferedReader(new InputStreamReader(scriptInputStream, "UTF-8"));
        try
        {
            int line = 0;
            // loop through all statements
            StringBuilder sb = new StringBuilder(1024);
            String fetchVarName = null;
            String fetchColumnName = null;
            boolean doBatch = false;
            int batchUpperLimit = 0;
            int batchSize = 1;
            Map<String, Object> varAssignments = new HashMap<String, Object>(13);
            // Special variable assignments:
            if (dialect instanceof PostgreSQLDialect)
            {
                // Needs the true/false literals
                varAssignments.put("true", "true");
                varAssignments.put("false", "false");
                varAssignments.put("TRUE", "TRUE");
                varAssignments.put("FALSE", "FALSE");
            }
            else
            {
                // Needs 1/0 for true/false
                varAssignments.put("true", "1");
                varAssignments.put("false", "0");
                varAssignments.put("TRUE", "1");
                varAssignments.put("FALSE", "0");
            }
            long now = System.currentTimeMillis();
            varAssignments.put("now", Long.toString(now));
            varAssignments.put("NOW", Long.toString(now));
            while (true)
            {
                String sqlOriginal = reader.readLine();
                line++;

                if (sqlOriginal == null)
                {
                    // nothing left in the file
                    break;
                }

                // trim it
                String sql = sqlOriginal.trim();
                // Check for includes
                if (sql.startsWith("--INCLUDE:"))
                {
                    if (sb.length() > 0)
                    {
                        // This can only be set before a new SQL statement
                        throw AlfrescoRuntimeException.create(ERR_STATEMENT_INCLUDE_BEFORE_SQL, (line - 1), scriptUrl);
                    }
                    String includedScriptUrl = sql.substring(10, sql.length());
                    // Execute the script in line
                    executeScriptUrl(cfg, connection, includedScriptUrl);
                }
                // Check for variable assignment
                else if (sql.startsWith("--ASSIGN:"))
                {
                    if (sb.length() > 0)
                    {
                        // This can only be set before a new SQL statement
                        throw AlfrescoRuntimeException.create(ERR_STATEMENT_VAR_ASSIGNMENT_BEFORE_SQL, (line - 1), scriptUrl);
                    }
                    String assignStr = sql.substring(9, sql.length());
                    String[] assigns = assignStr.split("=");
                    if (assigns.length != 2 || assigns[0].length() == 0 || assigns[1].length() == 0)
                    {
                        throw AlfrescoRuntimeException.create(ERR_STATEMENT_VAR_ASSIGNMENT_FORMAT, (line - 1), scriptUrl);
                    }
                    fetchVarName = assigns[0];
                    fetchColumnName = assigns[1];
                    continue;
                }
                // Handle looping control
                else if (sql.startsWith("--FOREACH"))
                {
                    // --FOREACH table.column batch.size.property
                    String[] args = sql.split("[ \\t]+");
                    int sepIndex;
                    if (args.length == 3 && (sepIndex = args[1].indexOf('.')) != -1)
                    {
                        doBatch = true;
                        // Select the upper bound of the table column
                        String stmt = "SELECT MAX(" + args[1].substring(sepIndex + 1) + ") AS upper_limit FROM " + args[1].substring(0, sepIndex);
                        Object fetchedVal = executeStatement(connection, stmt, "upper_limit", false, line, scriptFile);
                        if (fetchedVal instanceof Number)
                        {
                            batchUpperLimit = ((Number) fetchedVal).intValue();
                            // Read the batch size from the named property
                            String batchSizeString = globalProperties.getProperty(args[2]);
                            // Fall back to the default property
                            if (batchSizeString == null)
                            {
                                batchSizeString = globalProperties.getProperty(PROPERTY_DEFAULT_BATCH_SIZE);
                            }
                            batchSize = batchSizeString == null ? 10000 : Integer.parseInt(batchSizeString);
                        }
                    }
                    continue;
                }
                // Allow transaction delineation
                else if (sql.startsWith("--BEGIN TXN"))
                {
                    connection.setAutoCommit(false);
                    continue;
                }
                else if (sql.startsWith("--END TXN"))
                {
                    connection.commit();
                    connection.setAutoCommit(true);
                    continue;
                }

                // Check for comments
                if (sql.length() == 0 ||
                    sql.startsWith("--") ||
                    sql.startsWith("//") ||
                    sql.startsWith("/*"))
                {
                    if (sb.length() > 0)
                    {
                        // we have an unterminated statement
                        throw AlfrescoRuntimeException.create(ERR_STATEMENT_TERMINATOR, (line - 1), scriptUrl);
                    }
                    // there has not been anything to execute - it's just a comment line
                    continue;
                }
                // have we reached the end of a statement?
                boolean execute = false;
                boolean optional = false;
                if (sql.endsWith(";"))
                {
                    sql = sql.substring(0, sql.length() - 1);
                    execute = true;
                    optional = false;
                }
                else if (sql.endsWith("(optional)") || sql.endsWith("(OPTIONAL)"))
                {
                    // Get the end of statement
                    int endIndex = sql.lastIndexOf(';');
                    if (endIndex > -1)
                    {
                        sql = sql.substring(0, endIndex);
                        execute = true;
                        optional = true;
                    }
                    else
                    {
                        // Ends with "(optional)" but there is no semi-colon.
                        // Just take it at face value and probably fail.
                    }
                }
                // Add newline
                if (sb.length() > 0)
                {
                    sb.append("\n");
                }
                // Add leading whitespace for formatting
                int whitespaceCount = sqlOriginal.indexOf(sql);
                for (int i = 0; i < whitespaceCount; i++)
                {
                    sb.append(" ");
                }
                // append to the statement being built up
                sb.append(sql);
                // execute, if required
                if (execute)
                {
                    // Now substitute and execute the statement the appropriate number of times
                    String unsubstituted = sb.toString();
                    for (int lowerBound = 0; lowerBound <= batchUpperLimit; lowerBound += batchSize)
                    {
                        sql = unsubstituted;

                        // Substitute in the next pair of range parameters
                        if (doBatch)
                        {
                            varAssignments.put("LOWERBOUND", String.valueOf(lowerBound));
                            varAssignments.put("UPPERBOUND", String.valueOf(lowerBound + batchSize - 1));
                        }

                        // Perform variable replacement using the ${var} format
                        for (Map.Entry<String, Object> entry : varAssignments.entrySet())
                        {
                            String var = entry.getKey();
                            Object val = entry.getValue();
                            sql = sql.replaceAll("\\$\\{" + var + "\\}", val.toString());
                        }

                        // Handle the 0/1 values that PostgreSQL doesn't translate to TRUE
                        if (this.dialect != null && this.dialect instanceof PostgreSQLDialect)
                        {
                            sql = sql.replaceAll("\\$\\{TRUE\\}", "TRUE");
                        }
                        else
                        {
                            sql = sql.replaceAll("\\$\\{TRUE\\}", "1");
                        }

                        if (this.dialect != null && this.dialect instanceof MySQLInnoDBDialect)
                        {
                            // note: enable bootstrap on MySQL 5.5 (eg. for auto-generated SQL, such as JBPM)
                            sql = sql.replaceAll("(?i)TYPE=InnoDB", "ENGINE=InnoDB");
                        }

                        Object fetchedVal = executeStatement(connection, sql, fetchColumnName, optional, line, scriptFile);
                        if (fetchVarName != null && fetchColumnName != null)
                        {
                            varAssignments.put(fetchVarName, fetchedVal);
                        }
                    }
                    sb.setLength(0);
                    fetchVarName = null;
                    fetchColumnName = null;
                    doBatch = false;
                    batchUpperLimit = 0;
                    batchSize = 1;
                }
            }
        }
        finally
        {
            try { reader.close(); } catch (Throwable e) {}
            try { scriptInputStream.close(); } catch (Throwable e) {}
        }
    }
    /**
     * Execute the given SQL statement, absorbing exceptions that we expect during
     * schema creation or upgrade.
     *
     * @param fetchColumnName the name of the column value to return
     */
    private Object executeStatement(
            Connection connection,
            String sql,
            String fetchColumnName,
            boolean optional,
            int line,
            File file) throws Exception
    {
        StringBuilder executedStatements = executedStatementsThreadLocal.get();
        if (executedStatements == null)
        {
            throw new IllegalArgumentException("The executedStatementsThreadLocal must be populated");
        }

        Statement stmt = connection.createStatement();
        Object ret = null;
        try
        {
            if (logger.isDebugEnabled())
            {
                LogUtil.debug(logger, MSG_EXECUTING_STATEMENT, sql);
            }
            boolean haveResults = stmt.execute(sql);
            // Record the statement
            executedStatements.append(sql).append(";\n\n");
            if (haveResults && fetchColumnName != null)
            {
                ResultSet rs = stmt.getResultSet();
                if (rs.next())
                {
                    // Get the result value
                    ret = rs.getObject(fetchColumnName);
                }
            }
        }
        catch (SQLException e)
        {
            if (optional)
            {
                // it was marked as optional, so we just ignore it
                LogUtil.debug(logger, MSG_OPTIONAL_STATEMENT_FAILED, sql, e.getMessage(), file.getAbsolutePath(), line);
            }
            else
            {
                LogUtil.error(logger, ERR_STATEMENT_FAILED, sql, e.getMessage(), file.getAbsolutePath(), line);
                throw e;
            }
        }
        finally
        {
            try { stmt.close(); } catch (Throwable e) {}
        }
        return ret;
    }
    /**
     * Performs dialect-specific checking: this includes checking for InnoDB, dumping the
     * dialect being used, and setting any runtime, dialect-specific properties.
     */
    private void checkDialect(Dialect dialect)
    {
        Class<?> dialectClazz = dialect.getClass();
        LogUtil.info(logger, MSG_DIALECT_USED, dialectClazz.getName());
        if (dialectClazz.equals(MySQLDialect.class) || dialectClazz.equals(MySQL5Dialect.class))
        {
            LogUtil.error(logger, ERR_DIALECT_SHOULD_USE, dialectClazz.getName(), MySQLInnoDBDialect.class.getName());
            throw AlfrescoRuntimeException.create(WARN_DIALECT_UNSUPPORTED, dialectClazz.getName());
        }
        else if (dialectClazz.equals(HSQLDialect.class))
        {
            LogUtil.info(logger, WARN_DIALECT_HSQL);
        }
        else if (dialectClazz.equals(DerbyDialect.class))
        {
            LogUtil.info(logger, WARN_DIALECT_DERBY);
        }
        else if (dialectClazz.equals(Oracle9iDialect.class) || dialectClazz.equals(Oracle10gDialect.class))
        {
            LogUtil.error(logger, ERR_DIALECT_SHOULD_USE, dialectClazz.getName(), AlfrescoOracle9Dialect.class.getName());
            throw AlfrescoRuntimeException.create(WARN_DIALECT_UNSUPPORTED, dialectClazz.getName());
        }
        else if (dialectClazz.equals(OracleDialect.class) || dialectClazz.equals(Oracle9Dialect.class))
        {
            LogUtil.error(logger, ERR_DIALECT_SHOULD_USE, dialectClazz.getName(), AlfrescoOracle9Dialect.class.getName());
            throw AlfrescoRuntimeException.create(WARN_DIALECT_UNSUPPORTED, dialectClazz.getName());
        }

        int maxStringLength = SchemaBootstrap.DEFAULT_MAX_STRING_LENGTH;
        int serializableType = SerializableTypeHandler.getSerializableType();
        // Adjust the maximum allowable String length according to the dialect
        if (dialect instanceof AlfrescoSQLServerDialect)
        {
            // string_value nvarchar(1024) null,
            // serializable_value image null,
            maxStringLength = SchemaBootstrap.DEFAULT_MAX_STRING_LENGTH;
        }
        else if (dialect instanceof AlfrescoSybaseAnywhereDialect)
        {
            // string_value text null,
            // serializable_value varbinary(8192) null,
            maxStringLength = Integer.MAX_VALUE;
        }
        else if (dialect instanceof DB2Dialect)
        {
            // string_value varchar(1024),
            // serializable_value varchar(8192) for bit data,
            maxStringLength = SchemaBootstrap.DEFAULT_MAX_STRING_LENGTH;
            serializableType = Types.BLOB;
        }
        else if (dialect instanceof HSQLDialect)
        {
            // string_value varchar(1024),
            // serializable_value varbinary(8192),
            maxStringLength = SchemaBootstrap.DEFAULT_MAX_STRING_LENGTH;
        }
        else if (dialect instanceof MySQLInnoDBDialect)
        {
            // string_value text,
            // serializable_value blob,
            maxStringLength = Integer.MAX_VALUE;
        }
        else if (dialect instanceof AlfrescoOracle9Dialect)
        {
            // string_value varchar2(1024 char),
            // serializable_value blob,
            maxStringLength = SchemaBootstrap.DEFAULT_MAX_STRING_LENGTH;
        }
        else if (dialect instanceof PostgreSQLDialect)
        {
            // string_value varchar(1024),
            // serializable_value bytea,
            maxStringLength = SchemaBootstrap.DEFAULT_MAX_STRING_LENGTH;
        }
        SchemaBootstrap.setMaxStringLength(maxStringLength);
        SerializableTypeHandler.setSerializableType(serializableType);

        // Now override the maximum string length if it was set directly
        if (maximumStringLength > 0)
        {
            SchemaBootstrap.setMaxStringLength(maximumStringLength);
        }
    }
    @Override
    public synchronized void onBootstrap(ApplicationEvent event)
    {
        // from Thor
        if (event != null)
        {
            // Use the application context to load resources
            rpr = (ApplicationContext) event.getSource();
        }

        // do everything in a transaction
        Session session = getSessionFactory().openSession();
        Connection connection = null;
        try
        {
            // make sure that we AUTO-COMMIT
            connection = dataSource.getConnection();
            connection.setAutoCommit(true);
            LogUtil.info(logger, MSG_DATABASE_USED, connection);

            Configuration cfg = localSessionFactory.getConfiguration();

            // Check and dump the dialect being used
            checkDialect(this.dialect);

            // Ensure that our static connection provider is used
            String defaultConnectionProviderFactoryClass = cfg.getProperty(Environment.CONNECTION_PROVIDER);
            cfg.setProperty(Environment.CONNECTION_PROVIDER, SchemaBootstrapConnectionProvider.class.getName());
            SchemaBootstrapConnectionProvider.setBootstrapConnection(connection);

            // Update the schema, if required.
            if (updateSchema)
            {
                // Retries are required here as the DB lock will be applied lazily upon first statement execution.
                // So if the schema is up to date (no statements executed) then the LockFailedException cannot be
                // thrown. If it is thrown, the update needs to be rerun as it will probably generate no SQL
                // statements the second time around.
                boolean updatedSchema = false;
                boolean createdSchema = false;
                for (int i = 0; i < schemaUpdateLockRetryCount; i++)
                {
                    try
                    {
                        createdSchema = updateSchema(cfg, session, connection);
                        updatedSchema = true;
                        break;
                    }
                    catch (LockFailedException e)
                    {
                        try { this.wait(schemaUpdateLockRetryWaitSeconds * 1000L); } catch (InterruptedException ee) {}
                    }
                }

                if (!updatedSchema)
                {
                    // The retries were exceeded
                    throw new AlfrescoRuntimeException(ERR_PREVIOUS_FAILED_BOOTSTRAP);
                }

                // Copy the executed statements to the output file
                File schemaOutputFile = null;
                if (schemaOuputFilename != null)
                {
                    schemaOutputFile = new File(schemaOuputFilename);
                }
                else
                {
                    schemaOutputFile = TempFileProvider.createTempFile(
                            "AlfrescoSchema-" + this.dialect.getClass().getSimpleName() + "-All_Statements-",
                            ".sql");
                }

                StringBuilder executedStatements = executedStatementsThreadLocal.get();
                if (executedStatements == null)
                {
                    LogUtil.info(logger, MSG_NO_CHANGES);
                }
                else
                {
                    FileContentWriter writer = new FileContentWriter(schemaOutputFile);
                    writer.setEncoding("UTF-8");
                    String executedStatementsStr = executedStatements.toString();
                    writer.putContent(executedStatementsStr);
                    LogUtil.info(logger, MSG_ALL_STATEMENTS, schemaOutputFile.getPath());
                }

                if (!createdSchema)
                {
                    // verify that all patches have been applied correctly
                    checkSchemaPatchScripts(cfg, connection, validateUpdateScriptPatches, false); // check scripts
                    checkSchemaPatchScripts(cfg, connection, preUpdateScriptPatches, false);      // check scripts
                    checkSchemaPatchScripts(cfg, connection, postUpdateScriptPatches, false);     // check scripts
                }

                if (executedStatements != null)
                {
                    // Remove the flag indicating a running bootstrap
                    setBootstrapCompleted(connection);
                }
                // Report normalized dumps
                if (executedStatements != null)
                {
                    // Validate the schema, post-upgrade
                    validateSchema("Alfresco-{0}-Validation-Post-Upgrade-{1}-");

                    // 4.0+ schema dump
                    dumpSchema("post-upgrade");
                }
            }
            else
            {
                LogUtil.info(logger, MSG_BYPASSING_SCHEMA_UPDATE);
            }

            if (stopAfterSchemaBootstrap)
            {
                // 4.0+ schema dump
                dumpSchema("forced-exit");
                LogUtil.error(logger, ERR_FORCED_STOP);
                throw new BootstrapStopException();
            }

            // Reset the configuration
            cfg.setProperty(Environment.CONNECTION_PROVIDER, defaultConnectionProviderFactoryClass);

            if (event != null)
            {
                // all done successfully
                ((ApplicationContext) event.getSource()).publishEvent(new SchemaAvailableEvent(this));
            }
        }
        catch (BootstrapStopException e)
        {
            // We just let this out
            throw e;
        }
        catch (Throwable e)
        {
            LogUtil.error(logger, e, ERR_UPDATE_FAILED);
            if (updateSchema)
            {
                throw new AlfrescoRuntimeException(ERR_UPDATE_FAILED, e);
            }
            else
            {
                throw new AlfrescoRuntimeException(ERR_VALIDATION_FAILED, e);
            }
        }
        finally
        {
            try
            {
                if (connection != null)
                {
                    connection.close();
                }
            }
            catch (Throwable e)
            {
                logger.warn("Error closing DB connection: " + e.getMessage());
            }
            // Remove the connection reference from the thread-local bootstrap provider
            SchemaBootstrapConnectionProvider.setBootstrapConnection(null);
        }
    }
    /**
     * Collate differences and validation problems with the schema with respect to an appropriate
     * reference schema.
     *
     * @param outputFileNameTemplate MessageFormat template for the report file name, where
     *                               {0} is the dialect name and {1} is the schema prefix
     * @return the number of potential problems found.
     */
    public synchronized int validateSchema(String outputFileNameTemplate)
    {
        int totalProblems = 0;

        // Discover available reference files (e.g. for prefixes alf_, avm_ etc.)
        // and process each in turn.
        for (String schemaReferenceUrl : schemaReferenceUrls)
        {
            Resource referenceResource = getDialectResource(dialect.getClass(), schemaReferenceUrl);

            if (referenceResource == null || !referenceResource.exists())
            {
                String resourceUrl = resolveDialectUrl(dialect.getClass(), schemaReferenceUrl);
                LogUtil.debug(logger, DEBUG_SCHEMA_COMP_NO_REF_FILE, resourceUrl);
            }
            else
            {
                // Validate the schema against each reference file
                int problems = validateSchema(referenceResource, outputFileNameTemplate);
                totalProblems += problems;
            }
        }

        // Return the number of problems found across all reference files.
        return totalProblems;
    }
    private int validateSchema(Resource referenceResource, String outputFileNameTemplate)
    {
        try
        {
            return attemptValidateSchema(referenceResource, outputFileNameTemplate);
        }
        catch (Throwable e)
        {
            if (logger.isErrorEnabled())
            {
                logger.error("Unable to validate database schema.", e);
            }
            return 0;
        }
    }
    private int attemptValidateSchema(Resource referenceResource, String outputFileNameTemplate)
    {
        Date startTime = new Date();

        InputStream is = null;
        try
        {
            is = new BufferedInputStream(referenceResource.getInputStream());
        }
        catch (IOException e)
        {
            throw new RuntimeException("Unable to open schema reference file: " + referenceResource, e);
        }

        XMLToSchema xmlToSchema = new XMLToSchema(is);
        xmlToSchema.parse();
        Schema reference = xmlToSchema.getSchema();

        ExportDb exporter = new ExportDb(dataSource, dialect, descriptorService);
        // Ensure that the database objects we're validating are filtered
        // by the same prefix as the reference file.
        exporter.setNamePrefix(reference.getDbPrefix());
        exporter.execute();
        Schema target = exporter.getSchema();

        SchemaComparator schemaComparator = new SchemaComparator(reference, target, dialect);

        schemaComparator.validateAndCompare();

        Results results = schemaComparator.getComparisonResults();

        Object[] outputFileNameParams = new Object[]
        {
            dialect.getClass().getSimpleName(),
            reference.getDbPrefix()
        };
        String outputFileName = MessageFormat.format(outputFileNameTemplate, outputFileNameParams);

        File outputFile = TempFileProvider.createTempFile(outputFileName, ".txt");

        PrintWriter pw = null;
        try
        {
            pw = new PrintWriter(outputFile, SchemaComparator.CHAR_SET);
        }
        catch (FileNotFoundException error)
        {
            throw new RuntimeException("Unable to open file for writing: " + outputFile, error);
        }
        catch (UnsupportedEncodingException error)
        {
            throw new RuntimeException("Unsupported char set: " + SchemaComparator.CHAR_SET, error);
        }

        // Populate the file with details of the comparison's results.
        for (Result result : results)
        {
            pw.print(result.describe());
            pw.print(SchemaComparator.LINE_SEPARATOR);
        }

        pw.close();

        if (results.size() == 0)
        {
            LogUtil.info(logger, INFO_SCHEMA_COMP_ALL_OK, referenceResource);
        }
        else
        {
            int numProblems = results.size();
            LogUtil.warn(logger, WARN_SCHEMA_COMP_PROBLEMS_FOUND, numProblems, outputFile);
        }
        Date endTime = new Date();
        long durationMillis = endTime.getTime() - startTime.getTime();
        LogUtil.debug(logger, DEBUG_SCHEMA_COMP_TIME_TAKEN, durationMillis);

        return results.size();
    }
    /**
     * Produces a schema dump in XML format: this is performed pre- and post-upgrade (i.e. if
     * changes are made to the schema) and can be made on demand via JMX.
     *
     * @return List of output files.
     */
    public List<File> dumpSchema()
    {
        return dumpSchema("", null);
    }

    /**
     * Produces a schema dump in XML format: this is performed pre- and post-upgrade (i.e. if
     * changes are made to the schema) and can be made on demand via JMX.
     *
     * @param dbPrefixes Array of database object prefixes to produce the dump for, e.g. "alf_".
     * @return List of output files.
     */
    public List<File> dumpSchema(String[] dbPrefixes)
    {
        return dumpSchema("", dbPrefixes);
    }
    /**
     * Dumps the DB schema to temporary file(s), named similarly to:
     * <pre>
     *   Alfresco-schema-DialectName-whenDumped-dbPrefix-23498732.xml
     * </pre>
     * where the digits serve to create a unique temp file name. If whenDumped is empty or null,
     * then the output is similar to:
     * <pre>
     *   Alfresco-schema-DialectName-dbPrefix-23498732.xml
     * </pre>
     * If dbPrefixes is null, then the default list is used (see {@link MultiFileDumper#DEFAULT_PREFIXES}).
     * The dump files' paths are logged at info level.
     *
     * @param whenDumped label describing when the dump was made, e.g. "pre-upgrade"
     * @param dbPrefixes Array of database object prefixes to filter by, e.g. "alf_"
     * @return List of output files.
     */
    private List<File> dumpSchema(String whenDumped, String[] dbPrefixes)
    {
        // Build a string to use as the file name template,
        // e.g. "Alfresco-schema-MySQLDialect-pre-upgrade-{0}-"
        StringBuilder sb = new StringBuilder(64);
        sb.append("Alfresco-schema-").
            append(dialect.getClass().getSimpleName());
        if (whenDumped != null && whenDumped.length() > 0)
        {
            sb.append("-");
            sb.append(whenDumped);
        }
        sb.append("-{0}-");

        File outputDir = TempFileProvider.getTempDir();
        String fileNameTemplate = sb.toString();
        return dumpSchema(outputDir, fileNameTemplate, dbPrefixes);
    }

    /**
     * Same as {@link #dumpSchema(String, String[])}, except that the default list
     * of database object prefixes is used for filtering.
     *
     * @see #dumpSchema(String, String[])
     * @param whenDumped label describing when the dump was made
     * @return List of output files.
     */
    private List<File> dumpSchema(String whenDumped)
    {
        return dumpSchema(whenDumped, null);
    }
    private List<File> dumpSchema(File outputDir, String fileNameTemplate, String[] dbPrefixes)
    {
        try
        {
            return attemptDumpSchema(outputDir, fileNameTemplate, dbPrefixes);
        }
        catch (Throwable e)
        {
            if (logger.isErrorEnabled())
            {
                logger.error("Unable to dump schema to directory " + outputDir, e);
            }
            return null;
        }
    }
    private List<File> attemptDumpSchema(File outputDir, String fileNameTemplate, String[] dbPrefixes)
    {
        DbToXMLFactory dbToXMLFactory = new MultiFileDumper.DbToXMLFactoryImpl(getApplicationContext());

        MultiFileDumper dumper;

        if (dbPrefixes == null)
        {
            dumper = new MultiFileDumper(outputDir, fileNameTemplate, dbToXMLFactory);
        }
        else
        {
            dumper = new MultiFileDumper(dbPrefixes, outputDir, fileNameTemplate, dbToXMLFactory);
        }
        List<File> files = dumper.dumpFiles();

        for (File file : files)
        {
            if (logger.isInfoEnabled())
            {
                LogUtil.info(logger, MSG_NORMALIZED_SCHEMA, file.getAbsolutePath());
            }
        }

        return files;
    }
    @Override
    protected void onShutdown(ApplicationEvent event)
    {
        // Shut down the DB, if required
        Class<?> dialectClazz = this.dialect.getClass();
        if (dialectClazz.equals(DerbyDialect.class))
        {
            try
            {
                DriverManager.getConnection("jdbc:derby:;shutdown=true");
            }
            // Derby shutdown always triggers an exception, even when clean
            catch (Throwable e)
            {
            }
        }
    }
    /**
     * This is a workaround for the awkward Spring-Hibernate interaction during configuration.
     * The Hibernate code assumes that schema scripts will be generated during context
     * initialization; we want to do it afterwards, with a little more control. Hence this class.
     * <p>
     * The connection that is used will not be closed or manipulated in any way. This class
     * merely serves to hand the connection over to Hibernate.
     *
     * @author Derek Hulley
     */
    public static class SchemaBootstrapConnectionProvider extends UserSuppliedConnectionProvider
    {
        private static ThreadLocal<Connection> threadLocalConnection = new ThreadLocal<Connection>();

        public SchemaBootstrapConnectionProvider()
        {
        }

        /**
         * Set the connection for Hibernate to use for schema generation.
         */
        public static void setBootstrapConnection(Connection connection)
        {
            threadLocalConnection.set(connection);
        }

        /**
         * Unsets the connection.
         */
        @Override
        public void close()
        {
            // Leave the connection well alone; just stop referencing it.
            // remove() rather than set(null) so the ThreadLocal entry itself is released.
            threadLocalConnection.remove();
        }

        /**
         * Does nothing.  The connection was supplied by a third party, who is
         * responsible for closing it.
         */
        @Override
        public void closeConnection(Connection conn)
        {
        }

        /**
         * Does nothing.
         */
        @Override
        public void configure(Properties props) throws HibernateException
        {
        }

        /**
         * @see #setBootstrapConnection(Connection)
         */
        @Override
        public Connection getConnection()
        {
            return threadLocalConnection.get();
        }

        @Override
        public boolean supportsAggressiveRelease()
        {
            return false;
        }
    }

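    /*
     * Usage sketch (hypothetical, for illustration only): the bootstrap code is assumed
     * to lend its own connection to Hibernate for the duration of schema generation and
     * to clear it again afterwards.
     *
     *     Connection connection = dataSource.getConnection();
     *     try
     *     {
     *         SchemaBootstrapConnectionProvider.setBootstrapConnection(connection);
     *         // ... trigger Hibernate schema script generation here ...
     *     }
     *     finally
     *     {
     *         SchemaBootstrapConnectionProvider.setBootstrapConnection(null);
     *         connection.close();  // the provider never closes the connection itself
     *     }
     */
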
    /**
     * A {@link Connection} wrapper that delegates all calls to the wrapped connection
     * except for {@link #close()}, which is ignored.
     *
     * @author Frederik Heremans
     */
    public class UnclosableConnection implements Connection
    {
        private final Connection wrapped;

        public UnclosableConnection(Connection wrappedConnection)
        {
            this.wrapped = wrappedConnection;
        }

        @Override
        public boolean isWrapperFor(Class<?> iface) throws SQLException
        {
            return wrapped.isWrapperFor(iface);
        }

        @Override
        public <T> T unwrap(Class<T> iface) throws SQLException
        {
            return wrapped.unwrap(iface);
        }

        @Override
        public void clearWarnings() throws SQLException
        {
            wrapped.clearWarnings();
        }

        @Override
        public void close() throws SQLException
        {
            // Ignore this call; the owner of the wrapped connection closes it
        }

        @Override
        public void commit() throws SQLException
        {
            wrapped.commit();
        }

        @Override
        public Array createArrayOf(String typeName, Object[] elements) throws SQLException
        {
            return wrapped.createArrayOf(typeName, elements);
        }

        @Override
        public Blob createBlob() throws SQLException
        {
            return wrapped.createBlob();
        }

        @Override
        public Clob createClob() throws SQLException
        {
            return wrapped.createClob();
        }

        @Override
        public NClob createNClob() throws SQLException
        {
            return wrapped.createNClob();
        }

        @Override
        public SQLXML createSQLXML() throws SQLException
        {
            return wrapped.createSQLXML();
        }

        @Override
        public Statement createStatement() throws SQLException
        {
            return wrapped.createStatement();
        }

        @Override
        public Statement createStatement(int resultSetType, int resultSetConcurrency)
                throws SQLException
        {
            return wrapped.createStatement(resultSetType, resultSetConcurrency);
        }

        @Override
        public Statement createStatement(int resultSetType, int resultSetConcurrency,
                int resultSetHoldability) throws SQLException
        {
            return wrapped.createStatement(resultSetType, resultSetConcurrency, resultSetHoldability);
        }

        @Override
        public Struct createStruct(String typeName, Object[] attributes) throws SQLException
        {
            return wrapped.createStruct(typeName, attributes);
        }

        @Override
        public boolean getAutoCommit() throws SQLException
        {
            return wrapped.getAutoCommit();
        }

        @Override
        public String getCatalog() throws SQLException
        {
            return wrapped.getCatalog();
        }

        @Override
        public Properties getClientInfo() throws SQLException
        {
            return wrapped.getClientInfo();
        }

        @Override
        public String getClientInfo(String name) throws SQLException
        {
            return wrapped.getClientInfo(name);
        }

        @Override
        public int getHoldability() throws SQLException
        {
            return wrapped.getHoldability();
        }

        @Override
        public DatabaseMetaData getMetaData() throws SQLException
        {
            return wrapped.getMetaData();
        }

        @Override
        public int getTransactionIsolation() throws SQLException
        {
            return wrapped.getTransactionIsolation();
        }

        @Override
        public Map<String, Class<?>> getTypeMap() throws SQLException
        {
            return wrapped.getTypeMap();
        }

        @Override
        public SQLWarning getWarnings() throws SQLException
        {
            return wrapped.getWarnings();
        }

        @Override
        public boolean isClosed() throws SQLException
        {
            return wrapped.isClosed();
        }

        @Override
        public boolean isReadOnly() throws SQLException
        {
            return wrapped.isReadOnly();
        }

        @Override
        public boolean isValid(int timeout) throws SQLException
        {
            return wrapped.isValid(timeout);
        }

        @Override
        public String nativeSQL(String sql) throws SQLException
        {
            return wrapped.nativeSQL(sql);
        }

        @Override
        public CallableStatement prepareCall(String sql) throws SQLException
        {
            return wrapped.prepareCall(sql);
        }

        @Override
        public CallableStatement prepareCall(String sql, int resultSetType, int resultSetConcurrency)
                throws SQLException
        {
            return wrapped.prepareCall(sql, resultSetType, resultSetConcurrency);
        }

        @Override
        public CallableStatement prepareCall(String sql, int resultSetType,
                int resultSetConcurrency, int resultSetHoldability) throws SQLException
        {
            return wrapped.prepareCall(sql, resultSetType, resultSetConcurrency, resultSetHoldability);
        }

        @Override
        public PreparedStatement prepareStatement(String sql) throws SQLException
        {
            return wrapped.prepareStatement(sql);
        }

        @Override
        public PreparedStatement prepareStatement(String sql, int autoGeneratedKeys)
                throws SQLException
        {
            return wrapped.prepareStatement(sql, autoGeneratedKeys);
        }

        @Override
        public PreparedStatement prepareStatement(String sql, int[] columnIndexes)
                throws SQLException
        {
            return wrapped.prepareStatement(sql, columnIndexes);
        }

        @Override
        public PreparedStatement prepareStatement(String sql, String[] columnNames)
                throws SQLException
        {
            return wrapped.prepareStatement(sql, columnNames);
        }

        @Override
        public PreparedStatement prepareStatement(String sql, int resultSetType,
                int resultSetConcurrency) throws SQLException
        {
            return wrapped.prepareStatement(sql, resultSetType, resultSetConcurrency);
        }

        @Override
        public PreparedStatement prepareStatement(String sql, int resultSetType,
                int resultSetConcurrency, int resultSetHoldability) throws SQLException
        {
            return wrapped.prepareStatement(sql, resultSetType, resultSetConcurrency, resultSetHoldability);
        }

        @Override
        public void releaseSavepoint(Savepoint savepoint) throws SQLException
        {
            wrapped.releaseSavepoint(savepoint);
        }

        @Override
        public void rollback() throws SQLException
        {
            wrapped.rollback();
        }

        @Override
        public void rollback(Savepoint savepoint) throws SQLException
        {
            wrapped.rollback(savepoint);
        }

        @Override
        public void setAutoCommit(boolean autoCommit) throws SQLException
        {
            wrapped.setAutoCommit(autoCommit);
        }

        @Override
        public void setCatalog(String catalog) throws SQLException
        {
            wrapped.setCatalog(catalog);
        }

        @Override
        public void setClientInfo(Properties properties) throws SQLClientInfoException
        {
            wrapped.setClientInfo(properties);
        }

        @Override
        public void setClientInfo(String name, String value) throws SQLClientInfoException
        {
            wrapped.setClientInfo(name, value);
        }

        @Override
        public void setHoldability(int holdability) throws SQLException
        {
            wrapped.setHoldability(holdability);
        }

        @Override
        public void setReadOnly(boolean readOnly) throws SQLException
        {
            wrapped.setReadOnly(readOnly);
        }

        @Override
        public Savepoint setSavepoint() throws SQLException
        {
            return wrapped.setSavepoint();
        }

        @Override
        public Savepoint setSavepoint(String name) throws SQLException
        {
            return wrapped.setSavepoint(name);
        }

        @Override
        public void setTransactionIsolation(int level) throws SQLException
        {
            wrapped.setTransactionIsolation(level);
        }

        @Override
        public void setTypeMap(Map<String, Class<?>> map) throws SQLException
        {
            wrapped.setTypeMap(map);
        }

        @Override
        public void setSchema(String schema) throws SQLException
        {
            wrapped.setSchema(schema);
        }

        @Override
        public String getSchema() throws SQLException
        {
            return wrapped.getSchema();
        }

        @Override
        public void abort(Executor executor) throws SQLException
        {
            wrapped.abort(executor);
        }

        @Override
        public void setNetworkTimeout(Executor executor, int milliseconds) throws SQLException
        {
            wrapped.setNetworkTimeout(executor, milliseconds);
        }

        @Override
        public int getNetworkTimeout() throws SQLException
        {
            return wrapped.getNetworkTimeout();
        }
    }
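
    /*
     * Usage sketch (hypothetical, for illustration only): wrap a connection before
     * handing it to code that closes connections eagerly, and close the real
     * connection yourself afterwards.
     *
     *     Connection real = dataSource.getConnection();
     *     Connection safe = new UnclosableConnection(real);
     *     try
     *     {
     *         runSchemaScripts(safe);  // hypothetical helper; safe.close() is a no-op
     *     }
     *     finally
     *     {
     *         real.close();            // the caller remains responsible for the real close
     *     }
     *
     * A java.lang.reflect.Proxy-based wrapper could express the same "ignore close()"
     * idea without hand-writing every Connection delegate method, at the cost of
     * reflective dispatch.
     */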
}