Compare commits

...

91 Commits

Author SHA1 Message Date
alfresco-build
eda782382c [maven-release-plugin] prepare for next development iteration 2020-08-13 17:28:41 +00:00
alfresco-build
1467482c62 [maven-release-plugin] prepare release 2.0.0-RC2 2020-08-13 17:28:32 +00:00
Angel Borroy
30d12ce917 Merge pull request #896 from Alfresco/fix/SEARCH_2126_RemoveCountZeroQueryFacets
SEARCH-2126: Remove facet query results having a count equal to zero.
2020-08-13 17:05:44 +01:00
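The behaviour this merge describes — dropping facet query buckets whose count is zero — can be sketched in isolation. The helper below is hypothetical (it is not the Search Services code); it only illustrates the filtering rule the commit message states:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class FacetQueryFilter {
    // Hypothetical helper (not the Search Services code): keep only facet
    // query buckets whose count is greater than zero, as SEARCH-2126 describes.
    static Map<String, Integer> omitZeroCounts(Map<String, Integer> facetCounts) {
        return facetCounts.entrySet().stream()
                .filter(e -> e.getValue() > 0)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,
                        (a, b) -> a, LinkedHashMap::new));
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("small", 5);
        counts.put("medium", 0);
        counts.put("large", 0);
        System.out.println(omitZeroCounts(counts)); // prints {small=5}
    }
}
```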
alfresco-build
48140004fe [maven-release-plugin] prepare for next development iteration 2020-08-10 13:39:00 +00:00
alfresco-build
3856ef5138 [maven-release-plugin] prepare release 2.0.0-RC1 2020-08-10 13:38:50 +00:00
Alex Mukha
2c6dd4e414 SEARCH-2256 Update pom version according to the new branch 2020-08-10 12:08:17 +01:00
dependabot[bot]
db5b422e02 Bump alfresco-data-model from 8.135 to 8.145 (#880)
Bumps [alfresco-data-model](https://github.com/Alfresco/alfresco-data-model) from 8.135 to 8.145.
- [Release notes](https://github.com/Alfresco/alfresco-data-model/releases)
- [Commits](https://github.com/Alfresco/alfresco-data-model/compare/8.135...8.145)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2020-08-07 16:11:44 +01:00
dependabot[bot]
24757549d6 Bump jackson-databind from 2.7.7 to 2.9.10.5 in /e2e-test (#854)
* Bump jackson-databind from 2.7.7 to 2.9.10.5 in /e2e-test

Bumps [jackson-databind](https://github.com/FasterXML/jackson) from 2.7.7 to 2.9.10.5.
- [Release notes](https://github.com/FasterXML/jackson/releases)
- [Commits](https://github.com/FasterXML/jackson/commits)

Signed-off-by: dependabot[bot] <support@github.com>

* Correct tests according to new jackson parsing

* Correct tests according to new jackson parsing

* Fix StatsSearchTest (jackson config in framework)

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Alex Mukha <alex.mukha@alfresco.com>
Co-authored-by: Alex Mukha <killerboot@users.noreply.github.com>
2020-08-06 15:15:57 +01:00
dependabot[bot]
204e9e2cf1 Bump cxf.version from 3.2.13 to 3.2.14 (#891) 2020-08-05 16:55:20 +00:00
dependabot[bot]
bffed89a74 Bump jaxb-xjc from 2.3.2 to 2.3.3 (#888) 2020-08-05 15:39:41 +00:00
dependabot[bot]
80943e704e Bump commons-lang3 from 3.10 to 3.11 (#887) 2020-08-05 15:24:47 +00:00
Elia Porciani
3edf9c3196 [SEARCH-2354] (#876)
* [SEARCH-2354]
- Added support for DAYOFWEEK and DAYOFYEAR SQL functions.
- Added unit tests
- Added e2e tests

* [SEARCH-2354]
updated alfresco-data-model dependency

* [SEARCH-2354] dep update

* [SEARCH-2354] review comments

* [SEARCH-2354] removed syntax not supported for EXTRACT

* [SEARCH-2354] removed syntax not supported for EXTRACT

Co-authored-by: Alessandro Benedetti <a.benedetti@sease.io>
2020-08-05 16:11:13 +02:00
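For readers unfamiliar with the two functions named in SEARCH-2354, DAYOFWEEK and DAYOFYEAR compute the weekday and the ordinal day within the year. The snippet below is a conceptual illustration using java.time, not the InsightEngine SQL implementation:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public class DayFunctions {
    public static void main(String[] args) {
        // Conceptual java.time equivalents of the SQL DAYOFWEEK / DAYOFYEAR
        // functions; this is not the InsightEngine implementation.
        LocalDate d = LocalDate.of(2020, 8, 5);
        DayOfWeek dayOfWeek = d.getDayOfWeek(); // WEDNESDAY
        int dayOfYear = d.getDayOfYear();       // 218 (2020 is a leap year)
        System.out.println(dayOfWeek + " " + dayOfYear);
    }
}
```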
Alex Mukha
8bb939b706 SEARCH-2372 Update base docker image (#882) 2020-08-05 14:57:16 +01:00
dependabot[bot]
5ba8772110 Bump dependency.jackson.version from 2.11.1 to 2.11.2 (#889) 2020-08-05 13:53:14 +00:00
dependabot[bot]
1bf444f41f Bump alfresco-super-pom from 10 to 12 (#886) 2020-08-05 12:38:48 +00:00
dependabot[bot]
7b2f653944 Bump utility from 3.0.26 to 3.0.27 (#885) 2020-08-05 12:38:06 +00:00
Alex Mukha
d8888e303d Limit Servlet API updates in dependabot 2020-08-05 12:36:26 +01:00
dependabot[bot]
e990c2e565 Bump restapi from 1.42 to 1.46 (#884) 2020-08-05 11:29:38 +00:00
Alex Mukha
6dbcbfe2ca Limit cxf updates in dependabot 2020-08-05 12:25:44 +01:00
Alex Mukha
ae473429cb Revert "Limit cxf updates in dependabot"
This reverts commit 1f6d07c7865618995e3136d4f11055ee03b93841.
2020-08-05 12:24:50 +01:00
Alex Mukha
ed67ffc279 Limit cxf updates in dependabot 2020-08-05 12:21:05 +01:00
Alex Mukha
ebf8af3dd7 Increase the limit of dependabot PRs to 15 2020-08-05 12:07:02 +01:00
dependabot[bot]
119bf96852 Bump mockito-core from 3.4.0 to 3.4.6 (#883) 2020-08-05 10:56:25 +00:00
dependabot[bot]
6edae0a22b Bump dependency.jackson.version from 2.10.3 to 2.11.1 (#847) 2020-08-05 10:36:25 +00:00
dependabot[bot]
a6c09b8bfc Merge pull request #855 from Alfresco/dependabot/npm_and_yarn/e2e-test/generator-alfresco-docker-compose/lodash-4.17.19 2020-08-05 10:05:08 +00:00
Alex Mukha
91f23886c4 Exclude calcite from dependabot 2020-08-05 10:41:11 +01:00
Alex Mukha
680e91ee3c Exclude zeppelin from dependabot 2020-08-05 10:38:21 +01:00
Alex Mukha
b1a44816b9 Revert "SEARCH-2318: Add Travis configuration to Insight Engine" (#872) 2020-07-31 16:10:39 +01:00
Travis CI User
395dae1788 [maven-release-plugin][skip ci]prepare for next development iteration 2020-07-31 14:17:24 +00:00
Travis CI User
ac3f3694ca [maven-release-plugin][skip ci]prepare release alfresco-search-and-insight-parent-2.0.0-A3 2020-07-31 14:17:17 +00:00
Alex Mukha
9b67ef3dcb Merge 9e26681dcde8cba891e1b7f83b850ba82f07fceb into 0aba9bcf92d6ae1e6222c2dcb248d0f0d6e76921 2020-07-31 12:51:46 +00:00
Alex Mukha
09ba8918e1 Fix a typo in the scm url 2020-07-31 13:51:37 +01:00
Alex Mukha
e54325dd17 Merge branch 'master' into feature/search-2318-travis-build 2020-07-31 12:10:37 +01:00
Alex Mukha
7d1a85dac4 Change scm tags to https 2020-07-31 12:04:21 +01:00
Tom Page
ac5ff73ba7 Merge pull request #868 from Alfresco/feature/SEARCH-2213_UseTLS
SEARCH-2213 Replace SSL with TLS to use TLS 1.2 by default when accessing Repo.
2020-07-31 11:30:22 +01:00
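The substance of SEARCH-2213 — requesting "TLSv1.2" explicitly instead of the generic legacy "SSL" context name — can be illustrated with the standard JSSE API. This is a sketch of the general idea, not the code changed in the PR:

```java
import javax.net.ssl.SSLContext;

public class TlsContextExample {
    // Requesting "TLSv1.2" explicitly instead of the legacy "SSL" name ensures
    // a modern protocol is used when accessing the repository; a sketch of the
    // idea behind SEARCH-2213, not the actual changed code.
    static String negotiatedProtocolName() {
        try {
            SSLContext ctx = SSLContext.getInstance("TLSv1.2");
            ctx.init(null, null, null);
            return ctx.getProtocol();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(negotiatedProtocolName()); // prints TLSv1.2
    }
}
```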
Elia Porciani
d274cf3197 [SEARCH-2330] (#866)
changed log messages when tracking is enabled and disabled
2020-07-30 15:39:41 +02:00
elia
5882bbb773 [SEARCH-2330]
fix formatting
2020-07-30 15:38:34 +02:00
elia
91707f947d [SEARCH-2330]
changed log messages when tracking is enabled and disabled
2020-07-30 14:17:06 +02:00
Alex Mukha
deef10f1ee Modify release command 2020-07-30 11:57:40 +01:00
Elia Porciani
7084459818 Fix/search 2330 state persistent across reloads (#865)
* [SEARCH-2330]
managed persistence across reloads with a static variable

* [SEARCH-2330]
fix unit test
2020-07-30 11:41:40 +02:00
Alex Mukha
8e405dbb30 Modify release command 2020-07-30 10:07:09 +01:00
Alex Mukha
d3bc0b4d8c Add additional checks for the release stage 2020-07-29 17:09:21 +01:00
Alex Mukha
b7a21cfb73 Merge branch 'master' into feature/search-2318-travis-build 2020-07-29 16:40:56 +01:00
Tom Page
59c45b6638 Merge pull request #864 from Alfresco/feature/SEARCH-2324_UpdateBaseDockerImage
SEARCH-2324 Update base Docker image.
2020-07-29 15:38:21 +01:00
Tom Page
8e2327b1d1 SEARCH-2324 Update base Docker image. 2020-07-29 14:05:59 +01:00
Elia Porciani
2f16e5f0b9 SEARCH-2317: Add support of nested queries (#833)
* [SEARCH-2317]
- date field substitutions in nested and join queries
- optimized code with the use of streams

* [SEARCH-2317]
code refactoring

* [SEARCH-2317]
Added unit tests

* [SEARCH-2317]
fixed broken tests

* [SEARCH-2317]
removed duplicated test

* SEARCH-2317 Adding e2e-test coverage for date filter fields

* [SEARCH-2317]
initial work on translation of timestamp in having clause

* [SEARCH-2317]
update calcite to 1.15

* [SEARCH-2317]
- timestamp translation into long value
- fix aliasing in nested queries

* [SEARCH-2317]
add test for timestamp translation

* [SEARCH-2317]
downgrade calcite to 1.13

* [SEARCH-2317]
fix error in complex where clauses

* [SEARCH-2317]
Code refactoring.
Updated notice.txt with the new version of calcite and avatica

* [SEARCH-2317]
fix E2E tests

* [SEARCH-2317]
Managed group by buckets with no fields defined.

* [SEARCH-2317]
Added e2e tests

* [SEARCH-2317]
modified tuple values from NaN to 0 at retrieval time

* [SEARCH-2317]
- Fix tests according to new aggregation results (0 instead of NaN)
- Added e2e tests

* [SEARCH-2317]
fixed test

* [SEARCH-2317]
translated date into epoch in comparison with aggregate functions

* [SEARCH-2317]
fix wrong number of results in test

* [SEARCH-2317]
added sql date to epoch translation in having clause

* [SEARCH-2317]
Fix indentations

* [SEARCH-2317]
dates are converted from UTC into epoch

Co-authored-by: Keerat <keerat.lalia@alfresco.com>
2020-07-29 12:44:27 +02:00
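Several of the SEARCH-2317 steps above concern translating SQL date literals, interpreted as UTC, into epoch values before comparison and aggregation. A minimal sketch of that conversion with java.time (illustrative only; the helper name is hypothetical):

```java
import java.time.LocalDate;
import java.time.ZoneOffset;

public class DateToEpoch {
    // Illustrative only (hypothetical helper, not InsightEngine code):
    // interpret a date literal as midnight UTC and convert it to the
    // epoch-millis long value used in comparisons.
    static long toEpochMillis(String isoDate) {
        return LocalDate.parse(isoDate).atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        System.out.println(toEpochMillis("2020-07-29")); // prints 1595980800000
    }
}
```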
Alex Mukha
e4b6502103 Add (internal) release stage 2020-07-27 17:45:06 +01:00
Alex Mukha
b1e4342be8 Merge branch 'master' into feature/search-2318-travis-build 2020-07-27 16:01:51 +01:00
Alessandro Benedetti
604ac1a8f0 Fix/search 2304 (#853)
* [SEARCH-2304]
using correct parser for solr dates

* [SEARCH-2304]
fix bug when fieldList is empty

* [SEARCH-2304] minor

* [SEARCH-2304] select, grouping and where support + tests

* [SEARCH-2304] code clean-up + tests

* [SEARCH-2304] SqlDateTransformer code conventions + review feedback

* [SEARCH-2304] DistributedCastTests code conventions + review feedback

* [SEARCH-2304] Distributed Grouping tests code conventions + review feedback

* [SEARCH-2304] e2e test

* [SEARCH-2304] e2e test

* [SEARCH-2304] e2e test

Co-authored-by: elia <e.porciani@sease.io>
2020-07-27 14:52:52 +02:00
Andrea Gazzarini
a3d0986125 SEARCH-2330: enable/disable indexing (#856)
* [ SEARCH-2330 ] AlfrescoCoreAdminHandler START/STOP (no logic) + Unit tests

* [ SEARCH-2330 ] Activatable + ActivatableTracker

* [ SEARCH-2330 ] AbstractShardInformationPublisher and NodeStatePublisher removed (replaced by ShardStatePublisher)

* [ SEARCH-2330 ] Test fixes

* [ SEARCH-2330 ] Fix ClassCastException on ShardStatePublisher

* [ SEARCH-2330 ] Fix Unnecessary Stubbing

* [ SEARCH-2330 ] ActivatableTrackerTest

* [ SEARCH-2330 ] Fix action changes (it depends on the indexing state)

* [ SEARCH-2330 ] Purge on stop/start

* [ SEARCH-2330 ] Rollback on scheduled maintenance + Unit tests

* [ SEARCH-2330 ] Persist indexing state across core reloads

* [ SEARCH-2330 ] Minor renaming on tests

* [ SEARCH-2330 ] Fix Unit test failures

* [ SEARCH-2330 ] Fix E2E

* [ SEARCH-2330 ] Fix Retry expectation

* [ SEARCH-2330 ] Fix Json Path in retry expectation

* [ SEARCH-2330 ] Fix ClassCastException

* [ SEARCH-2330 ] Review comments addressed

* [ SEARCH-2330 ] Review comments addressed (additional comments in the code)
2020-07-24 11:41:40 +02:00
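The Activatable/ActivatableTracker idea in this commit — trackers whose maintenance work can be switched off, with the flag persisting across core reloads — can be reduced to a small sketch. All names below are hypothetical simplifications of the classes listed in the commit bullets, not the real implementation:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class ActivatableSketch {
    // Hypothetical reduction of the ActivatableTracker idea: a tracker whose
    // maintenance work is skipped while indexing is disabled.
    static class Tracker {
        // static, so the flag survives a "core reload" that recreates instances
        private static final AtomicBoolean ENABLED = new AtomicBoolean(true);

        void disable() { ENABLED.set(false); }
        void enable()  { ENABLED.set(true); }
        boolean isEnabled() { return ENABLED.get(); }

        String doMaintenance() {
            return isEnabled() ? "indexed" : "skipped";
        }
    }

    public static void main(String[] args) {
        Tracker t = new Tracker();
        t.disable();
        System.out.println(t.doMaintenance());    // prints skipped
        Tracker reloaded = new Tracker();         // state persists across "reloads"
        System.out.println(reloaded.isEnabled()); // prints false
    }
}
```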
dependabot[bot]
0f427a32ec Bump lodash in /e2e-test/generator-alfresco-docker-compose
Bumps [lodash](https://github.com/lodash/lodash) from 4.17.15 to 4.17.19.
- [Release notes](https://github.com/lodash/lodash/releases)
- [Commits](https://github.com/lodash/lodash/compare/4.17.15...4.17.19)

Signed-off-by: dependabot[bot] <support@github.com>
2020-07-23 15:18:29 +00:00
dependabot[bot]
169da685da Merge pull request #848 from Alfresco/dependabot/maven/com.googlecode.maven-download-plugin-download-maven-plugin-1.6.0 2020-07-23 15:17:46 +00:00
Alex Mukha
32b77d7577 SEARCH-2198 Exclude solr dependencies 2020-07-23 14:29:31 +01:00
dependabot[bot]
823b306f2e Bump download-maven-plugin from 1.5.0 to 1.6.0
Bumps [download-maven-plugin](https://github.com/maven-download-plugin/maven-download-plugin) from 1.5.0 to 1.6.0.
- [Release notes](https://github.com/maven-download-plugin/maven-download-plugin/releases)
- [Commits](https://github.com/maven-download-plugin/maven-download-plugin/compare/1.5.0...1.6.0)

Signed-off-by: dependabot[bot] <support@github.com>
2020-07-23 12:35:26 +00:00
Alex Mukha
888fb690e1 SEARCH-2198 Move dependabot config in .github 2020-07-23 13:34:40 +01:00
Alex Mukha
66c89ec5ae Merge branch 'master' into feature/search-2318-travis-build 2020-07-21 09:35:44 +01:00
Alex Mukha
5b5fab47ad Update name of a job 2020-07-20 08:59:35 +01:00
Alex Mukha
ec12b702a0 Allow configuring the search log level in the Python generator. 2020-07-17 16:41:56 +01:00
Alex Mukha
6be78159b1 Merge branch 'master' into feature/search-2318-travis-build 2020-07-17 16:40:10 +01:00
Alex Mukha
1d0762d49c Add retry to docker push 2020-07-17 16:32:59 +01:00
Alex Mukha
8e6f33767c Rename deploy job 2020-07-17 16:16:30 +01:00
Alex Mukha
4d2152ad51 Change alpha version 2020-07-17 16:15:24 +01:00
Alex Mukha
3f0b95e965 Add deploy stage 2020-07-13 17:28:57 +01:00
Alex Mukha
9bdf0b4e56 Change the location of zeppelin zip 2020-07-10 20:02:59 +01:00
Alex Mukha
5788dc6561 Rename whitesource config to be picked up 2020-07-10 19:42:54 +01:00
Alex Mukha
c5e827afc8 Add default config file for whitesource 2020-07-10 19:41:51 +01:00
Alex Mukha
f47f8e85fb Correct zeppelin folder location 2020-07-10 18:13:41 +01:00
Alex Mukha
6266e7018e Correct whitesource cli params 2020-07-10 18:11:24 +01:00
Alex Mukha
9e5d5347e4 Add whitesource scans 2020-07-10 17:35:33 +01:00
Alex Mukha
ccdda07ec1 Cleanup unnecessary docker image creation 2020-07-10 16:23:15 +01:00
Alex Mukha
b79554f3b9 Add docker images verification 2020-07-09 14:04:53 +01:00
Alex Mukha
992357c90e Fix unit test mvn cleanup 2020-07-09 13:33:52 +01:00
Alex Mukha
a2414f37ee Cleanup maven commands 2020-07-09 13:23:02 +01:00
Alex Mukha
0e8568f20c Add Integration tests 2020-07-09 13:21:39 +01:00
Alex Mukha
45930b9b5e Add SQL API tests 2020-07-09 13:14:31 +01:00
Alex Mukha
f592ce4c65 Enable branch restrictions 2020-07-08 20:40:06 +01:00
Alex Mukha
30a2cb46e6 Correct maven command for Search API tests 2020-07-08 20:38:49 +01:00
Alex Mukha
77c7d6eeb1 Add Search API tests 2020-07-08 20:11:05 +01:00
Alex Mukha
89721afe93 Add quay.io login 2020-07-08 19:44:43 +01:00
Alex Mukha
1027da8be5 Add HF repository 2020-07-08 17:52:02 +01:00
Alex Mukha
fcba92611a Rename test stage 2020-07-08 17:05:14 +01:00
Alex Mukha
121a045e78 Increase timeouts for install scripts 2020-07-08 16:27:08 +01:00
Alex Mukha
d191bc85ca Add internal releases repository 2020-07-08 15:44:04 +01:00
Alex Mukha
5ad53074f1 Change distribution management and repository URLs 2020-07-08 14:35:47 +01:00
Alex Mukha
7c5d22b39f Rename stages 2020-07-08 12:37:26 +01:00
Alex Mukha
071baea49e Add quiet flag to install commands 2020-07-08 12:13:47 +01:00
Alex Mukha
3bf3677779 Add config for CMIS API tests 2020-07-08 12:11:35 +01:00
Alex Mukha
6c717e91d5 Add settings.xml 2020-07-08 11:24:48 +01:00
Alex Mukha
1aeb1bc426 Comment out settings.xml 2020-07-08 10:09:34 +01:00
Alex Mukha
c5c02bfbb5 Remove deprecated sudo 2020-07-08 10:08:39 +01:00
Alex Mukha
cb32feb1fd Add simple unit test job 2020-07-08 10:02:28 +01:00
41 changed files with 2172 additions and 587 deletions

.github/dependabot.yml

@@ -0,0 +1,54 @@
# see https://docs.github.com/en/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "maven"
directory: "/"
schedule:
interval: "daily"
time: "22:00"
timezone: "Europe/London"
open-pull-requests-limit: 15
ignore:
# Solr dependencies
- dependency-name: "org.apache.lucene:lucene-analyzers-common"
- dependency-name: "org.apache.solr:solr-core"
- dependency-name: "org.apache.solr:solr-analysis-extras"
- dependency-name: "org.apache.solr:solr-langid"
- dependency-name: "org.apache.solr:solr-clustering"
- dependency-name: "org.apache.solr:solr-test-framework"
- dependency-name: "org.apache.solr:solr-solrj"
# Zeppelin
- dependency-name: "org.apache.zeppelin:zeppelin-web"
# Calcite
- dependency-name: "org.apache.calcite:calcite-core"
- dependency-name: "org.apache.calcite:calcite-linq4j"
- dependency-name: "org.apache.calcite.avatica:avatica-core"
# cxf lib updates should not be higher than 3.2
- dependency-name: "org.apache.cxf:*"
versions: "[3.3,)"
# Servlet API
- dependency-name: "javax.servlet:javax.servlet-api"
- package-ecosystem: "docker"
directory: "search-services/packaging/src/docker/"
schedule:
interval: "weekly"
day: "saturday"
time: "22:00"
timezone: "Europe/London"
- package-ecosystem: "docker"
directory: "insight-engine/packaging/src/docker/"
schedule:
interval: "weekly"
day: "saturday"
time: "22:00"
timezone: "Europe/London"
- package-ecosystem: "docker"
directory: "insight-engine/alfresco-insight-zeppelin/src/docker/"
schedule:
interval: "weekly"
day: "saturday"
time: "22:00"
timezone: "Europe/London"


@@ -1,33 +0,0 @@
# see https://docs.github.com/en/github/administering-a-repository/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "maven"
directory: "/"
schedule:
interval: "daily"
time: "22:00"
timezone: "Europe/London"
- package-ecosystem: "docker"
directory: "search-services/packaging/src/docker/"
schedule:
interval: "weekly"
day: "saturday"
time: "22:00"
timezone: "Europe/London"
- package-ecosystem: "docker"
directory: "insight-engine/packaging/src/docker/"
schedule:
interval: "weekly"
day: "saturday"
time: "22:00"
timezone: "Europe/London"
- package-ecosystem: "docker"
directory: "insight-engine/alfresco-insight-zeppelin/src/docker/"
schedule:
interval: "weekly"
day: "saturday"
time: "22:00"
timezone: "Europe/London"


@@ -2093,9 +2093,9 @@
}
},
"lodash": {
-"version": "4.17.15",
-"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz",
-"integrity": "sha512-8xOcRHvCjnocdS5cpwXQXVzmmh5e5+saE2QGoeQmbKmRS6J3VQppPOIt0MnmE+4xlZoumy0GPG0D0MVIQbNA1A=="
+"version": "4.17.19",
+"resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.19.tgz",
+"integrity": "sha512-JNvd8XER9GQX0v2qJgsaN/mzFCNA5BRe/j8JN9d+tWyGLSodKQHKFicdwNYzWwI3wjRnaKPsGj1XkBjx/F96DQ=="
},
"lodash.debounce": {
"version": "4.0.8",


@@ -3,21 +3,21 @@
<parent>
<groupId>org.alfresco</groupId>
<artifactId>alfresco-search-and-insight-parent</artifactId>
-<version>2.0.0-SNAPSHOT</version>
+<version>2.0.0.0-SNAPSHOT</version>
</parent>
<groupId>search-analytics-e2e-test</groupId>
<artifactId>search-analytics-e2e-test</artifactId>
<name>Search Analytics E2E Tests</name>
<description>Test Project to test Search Service and Analytics Features on a complete setup of Alfresco, Share</description>
<properties>
-<tas.rest.api.version>1.42</tas.rest.api.version>
+<tas.rest.api.version>1.47</tas.rest.api.version>
<tas.cmis.api.version>1.13</tas.cmis.api.version>
-<tas.utility.version>3.0.26</tas.utility.version>
+<tas.utility.version>3.0.27</tas.utility.version>
<rm.version>3.3.1</rm.version>
<suiteXmlFile>src/test/resources/SearchSuite.xml</suiteXmlFile>
<test.exclude />
<test.include />
-<jackson.databind.version>2.7.7</jackson.databind.version>
+<jackson.databind.version>2.9.10.5</jackson.databind.version>
<licenseName>community</licenseName>
</properties>
<build>


@@ -150,7 +150,7 @@ public class FacetIntervalSearchTest extends AbstractSearchServicesE2ETest
bucket = facetResponseModel.getBuckets().get(1);
bucket.assertThat().field("label").is("theRest");
-bucket.assertThat().field("filterQuery").is("creator:<\"user\" TO \"z\"]");
+bucket.assertThat().field("filterQuery").is("creator:[\"user\" TO \"z\"]");
bucket.getMetrics().get(0).assertThat().field("type").is("count");
bucket.getMetrics().get(0).assertThat().field("value").is("{count=0}");
}
@@ -196,7 +196,7 @@ public class FacetIntervalSearchTest extends AbstractSearchServicesE2ETest
bucket = facetResponseModel.getBuckets().get(1);
bucket.assertThat().field("label").is("Before2016");
-bucket.assertThat().field("filterQuery").is("cm:modified:[\"*\" TO \"2016\">");
+bucket.assertThat().field("filterQuery").is("cm:modified:[\"*\" TO \"2016\"]");
bucket.getMetrics().get(0).assertThat().field("type").is("count");
bucket.getMetrics().get(0).assertThat().field("value").is("{count=0}");
}


@@ -139,16 +139,23 @@ public class FacetedSearchTest extends AbstractSearchServicesE2ETest
FacetFieldBucket facet = response.getContext().getFacetQueries().get(0);
facet.assertThat().field("label").contains("small").and().field("count").isGreaterThan(0);
facet.assertThat().field("label").contains("small").and().field("filterQuery").is("content.size:[0 TO 102400]");
-response.getContext().getFacetQueries().get(1).assertThat().field("label").contains("large")
-.and().field("count").isLessThan(1)
-.and().field("filterQuery").is("content.size:[1048576 TO 16777216]");
-response.getContext().getFacetQueries().get(2).assertThat().field("label").contains("medium")
-.and().field("count").isLessThan(1)
-.and().field("filterQuery").is("content.size:[102400 TO 1048576]");
-//We don't expect to see the FacetFields if group is being used.
+Assert.assertEquals(response.getContext().getFacetQueries().size(), 1, "Results with count=0 must be omitted");
+// We don't expect to see the FacetFields if group is being used.
Assert.assertNull(response.getContext().getFacetsFields());
Assert.assertNull(response.getContext().getFacets());
}
+/**
+* Verify this query is returning the same results for both single server and shard environments.
+* @throws Exception
+*/
+@Test(groups={TestGroup.CONFIG_SHARDING})
+@TestRail(section = { TestGroup.REST_API, TestGroup.SEARCH}, executionType = ExecutionType.ACCEPTANCE, description = "Checks facet queries for the Search api in Shard environments")
+public void searchWithQueryFacetingCluster() throws Exception
+{
+searchWithQueryFaceting();
+}
/**
* * Perform a group by faceting, below test groups the facet by group name foo.


@@ -28,6 +28,7 @@ package org.alfresco.test.search.functional.searchServices.solr.admin;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
@@ -38,6 +39,8 @@ import org.springframework.context.annotation.Configuration;
import org.testng.Assert;
import org.testng.annotations.Test;
import static java.util.Collections.emptyList;
/**
* End to end tests for SOLR Admin actions REST API, available from:
*
@@ -556,9 +559,11 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
RestResponse response = restClient.withParams("txid=" + txid).withSolrAdminAPI().getAction("purge");
checkResponseStatusOk(response);
-String actionStatus = response.getResponse().body().jsonPath().get("action.status");
-Assert.assertEquals(actionStatus, "scheduled");
+DEFAULT_CORE_NAMES.forEach(core -> {
+String actionStatus = response.getResponse().body().jsonPath().get("action." + core + ".status");
+Assert.assertEquals(actionStatus, "scheduled");
+});
}
/**
@@ -566,7 +571,7 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
* @throws Exception
*/
@Test(priority = 25)
-public void testPurgeCore() throws Exception
+public void testPurgeCore()
{
final Integer txid = 1;
@@ -578,7 +583,7 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
checkResponseStatusOk(response);
-String actionStatus = response.getResponse().body().jsonPath().get("action.status");
+String actionStatus = response.getResponse().body().jsonPath().get("action." + core + ".status");
Assert.assertEquals(actionStatus, "scheduled");
}
catch (Exception e)
@@ -599,9 +604,11 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
RestResponse response = restClient.withSolrAdminAPI().getAction("purge");
checkResponseStatusOk(response);
-String actionStatus = response.getResponse().body().jsonPath().get("action.status");
-Assert.assertEquals(actionStatus, "scheduled");
+DEFAULT_CORE_NAMES.forEach(core -> {
+String actionStatus = response.getResponse().body().jsonPath().get("action." + core + ".status");
+Assert.assertEquals(actionStatus, "scheduled");
+});
}
/**
@@ -661,9 +668,11 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
RestResponse response = restClient.withParams("txid=" + txid).withSolrAdminAPI().getAction("reindex");
checkResponseStatusOk(response);
-String actionStatus = response.getResponse().body().jsonPath().get("action.status");
-Assert.assertEquals(actionStatus, "scheduled");
+DEFAULT_CORE_NAMES.forEach(core -> {
+String actionStatus = response.getResponse().body().jsonPath().get("action." + core + ".status");
+Assert.assertEquals(actionStatus, "scheduled");
+});
}
/**
@@ -671,7 +680,7 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
* @throws Exception
*/
@Test(priority = 30)
-public void testReindexCore() throws Exception
+public void testReindexCore()
{
Integer txid = 1;
@@ -683,7 +692,7 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
checkResponseStatusOk(response);
-String actionStatus = response.getResponse().body().jsonPath().get("action.status");
+String actionStatus = response.getResponse().body().jsonPath().get("action." + core + ".status");
Assert.assertEquals(actionStatus, "scheduled");
}
catch (Exception e)
@@ -705,12 +714,12 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
checkResponseStatusOk(response);
-String actionStatus = response.getResponse().body().jsonPath().get("action.status");
-Assert.assertEquals(actionStatus, "scheduled");
DEFAULT_CORE_NAMES.forEach(core -> {
-List<String> errorNodeList = response.getResponse().body().jsonPath().get("action." + core);
-Assert.assertEquals(errorNodeList, Arrays.asList(), "Expected no error nodes,");
+String actionStatus = response.getResponse().body().jsonPath().get("action." + core + ".status");
+Assert.assertEquals(actionStatus, "scheduled");
+List<String> errorNodeList = response.getResponse().body().jsonPath().get("action." + core + "['Error Nodes']");
+Assert.assertEquals(errorNodeList, emptyList(), "Expected no error nodes,");
});
}
@@ -719,7 +728,7 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
* @throws Exception
*/
@Test(priority = 32)
-public void testRetryCore() throws Exception
+public void testRetryCore()
{
DEFAULT_CORE_NAMES.forEach(core -> {
@@ -729,11 +738,11 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
checkResponseStatusOk(response);
-String actionStatus = response.getResponse().body().jsonPath().get("action.status");
+String actionStatus = response.getResponse().body().jsonPath().get("action." + core + ".status");
Assert.assertEquals(actionStatus, "scheduled");
-List<String> errorNodeList = response.getResponse().body().jsonPath().get("action." + core);
-Assert.assertEquals(errorNodeList, Arrays.asList(), "Expected no error nodes,");
+List<String> errorNodeList = response.getResponse().body().jsonPath().get("action." + core + "['Error Nodes']");
+Assert.assertEquals(errorNodeList, emptyList(), "Expected no error nodes,");
}
catch (Exception e)
{
@@ -755,9 +764,10 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
RestResponse response = restClient.withParams("txid=" + txid).withSolrAdminAPI().getAction("index");
checkResponseStatusOk(response);
-String actionStatus = response.getResponse().body().jsonPath().get("action.status");
-Assert.assertEquals(actionStatus, "scheduled");
+DEFAULT_CORE_NAMES.forEach(core -> {
+String actionStatus = response.getResponse().body().jsonPath().get("action." + core + ".status");
+Assert.assertEquals(actionStatus, "scheduled");
+});
}
/**
@@ -777,7 +787,7 @@ public class SolrE2eAdminTest extends AbstractE2EFunctionalTest
checkResponseStatusOk(response);
-String actionStatus = response.getResponse().body().jsonPath().get("action.status");
+String actionStatus = response.getResponse().body().jsonPath().get("action." + core + ".status");
Assert.assertEquals(actionStatus, "scheduled");
}
catch (Exception e)

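The updated assertions in this test read the action status per core, i.e. they assume a response shape of `action.<coreName>.status` rather than a flat `action.status`. A small sketch of that assumed layout (the core names used here are illustrative, not taken from DEFAULT_CORE_NAMES):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AdminResponseShape {
    // Assumed JSON layout after the change: one status object per core under
    // "action.<coreName>" instead of a single flat "action.status" entry.
    static String statusFor(Map<String, Object> action, String core) {
        return (String) ((Map<?, ?>) action.get(core)).get("status");
    }

    public static void main(String[] args) {
        Map<String, Object> action = new LinkedHashMap<>();
        for (String core : new String[] {"alfresco", "archive"}) {
            Map<String, Object> perCore = new LinkedHashMap<>();
            perCore.put("status", "scheduled");
            action.put(core, perCore);
        }
        System.out.println(statusFor(action, "alfresco")); // prints scheduled
    }
}
```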

@@ -4,10 +4,10 @@
<parent>
<groupId>org.alfresco</groupId>
<artifactId>alfresco-super-pom</artifactId>
-<version>10</version>
+<version>12</version>
</parent>
<artifactId>alfresco-search-and-insight-parent</artifactId>
-<version>2.0.0-SNAPSHOT</version>
+<version>2.0.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>Alfresco Search And Insight Parent</name>
<distributionManagement>


@@ -6,7 +6,7 @@
<parent>
<groupId>org.alfresco</groupId>
<artifactId>alfresco-search-parent</artifactId>
-<version>2.0.0-SNAPSHOT</version>
+<version>2.0.0.0-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
@@ -15,7 +15,7 @@
<dependency>
<groupId>org.alfresco</groupId>
<artifactId>alfresco-solrclient-lib</artifactId>
-<version>2.0.0-SNAPSHOT</version>
+<version>2.0.0.0-SNAPSHOT</version>
<exclusions>
<exclusion>
<artifactId>servlet-api</artifactId>
@@ -93,13 +93,13 @@
<dependency>
<groupId>com.sun.xml.bind</groupId>
<artifactId>jaxb-xjc</artifactId>
-<version>2.3.2</version>
+<version>2.3.3</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
-<version>3.10</version>
+<version>3.11</version>
</dependency>
<dependency>
<groupId>org.apache.cxf</groupId>
@@ -158,18 +158,24 @@
</dependency>
<!-- DATE Functions (YEAR, MONTH, ...) are broken in Calcite 1.11.0 (default
-version provided by SOLR 6.6.x) Upgrading manually Calcite version to 1.12.0
+version provided by SOLR 6.6.x)
+Upgrading manually Calcite version to 1.15.0
to support this kind of functions -->
<dependency>
<groupId>org.apache.calcite</groupId>
<artifactId>calcite-core</artifactId>
-<version>1.12.0</version>
+<version>1.13.0</version>
</dependency>
<dependency>
<groupId>org.apache.calcite</groupId>
<artifactId>calcite-linq4j</artifactId>
-<version>1.12.0</version>
+<version>1.13.0</version>
</dependency>
+<dependency>
+<groupId>org.apache.calcite.avatica</groupId>
+<artifactId>avatica-core</artifactId>
+<version>1.13.0</version>
+</dependency>
<!-- Test dependencies -->
<dependency>
@@ -182,7 +188,7 @@
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
-<version>3.4.0</version>
+<version>3.4.6</version>
<scope>test</scope>
</dependency>


@@ -33,12 +33,12 @@ import org.alfresco.solr.adapters.IOpenBitSet;
import org.alfresco.solr.client.SOLRAPIClientFactory;
import org.alfresco.solr.config.ConfigUtil;
import org.alfresco.solr.tracker.AclTracker;
-import org.alfresco.solr.tracker.AbstractShardInformationPublisher;
+import org.alfresco.solr.tracker.ActivatableTracker;
+import org.alfresco.solr.tracker.ShardStatePublisher;
import org.alfresco.solr.tracker.DBIDRangeRouter;
import org.alfresco.solr.tracker.DocRouter;
import org.alfresco.solr.tracker.IndexHealthReport;
import org.alfresco.solr.tracker.MetadataTracker;
-import org.alfresco.solr.tracker.NodeStatePublisher;
import org.alfresco.solr.tracker.SolrTrackerScheduler;
import org.alfresco.solr.tracker.Tracker;
import org.alfresco.solr.tracker.TrackerRegistry;
@@ -81,6 +81,7 @@ import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.LongToIntFunction;
import java.util.stream.Collectors;
@@ -103,6 +104,7 @@ import static org.alfresco.solr.HandlerReportHelper.buildAclTxReport;
import static org.alfresco.solr.HandlerReportHelper.buildNodeReport;
import static org.alfresco.solr.HandlerReportHelper.buildTrackerReport;
import static org.alfresco.solr.HandlerReportHelper.buildTxReport;
+import static org.alfresco.solr.utils.Utils.isNotNullAndNotEmpty;
import static org.alfresco.solr.utils.Utils.isNullOrEmpty;
import static org.alfresco.solr.utils.Utils.notNullOrEmpty;
@@ -138,10 +140,10 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
private static final String REPORT = "report";
private static final String SUMMARY = "Summary";
-private static final String ARG_ACLTXID = "acltxid";
+static final String ARG_ACLTXID = "acltxid";
+static final String ARG_TXID = "txid";
-private static final String ARG_ACLID = "aclid";
-private static final String ARG_NODEID = "nodeid";
+static final String ARG_ACLID = "aclid";
+static final String ARG_NODEID = "nodeid";
private static final String ARG_QUERY = "query";
private static final String DATA_DIR_ROOT = "data.dir.root";
public static final String ALFRESCO_DEFAULTS = "create.alfresco.defaults";
@@ -167,7 +169,8 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
private static final String ACTION_STATUS_ERROR = "error";
static final String ACTION_STATUS_SCHEDULED = "scheduled";
static final String ACTION_STATUS_NOT_SCHEDULED = "notScheduled";
static final String ADDITIONAL_INFO = "additionalInfo";
static final String WARNING = "WARNING";
static final String DRY_RUN_PARAMETER_NAME = "dryRun";
static final String FROM_TX_COMMIT_TIME_PARAMETER_NAME = "fromTxCommitTime";
static final String TO_TX_COMMIT_TIME_PARAMETER_NAME = "toTxCommitTime";
@@ -197,7 +200,7 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
TrackerRegistry trackerRegistry;
ConcurrentHashMap<String, InformationServer> informationServers;
private final static List<String> CORE_PARAMETER_NAMES = asList(CoreAdminParams.CORE, "coreName", "index");
final static List<String> CORE_PARAMETER_NAMES = asList(CoreAdminParams.CORE, "coreName", "index");
public AlfrescoCoreAdminHandler()
{
@@ -495,6 +498,14 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
ofNullable(params.get("resource"))
.orElse("log4j.properties")));
break;
case "ENABLE-INDEXING":
case "ENABLEINDEXING":
rsp.add(ACTION_LABEL, actionEnableIndexing(params));
break;
case "DISABLE-INDEXING":
case "DISABLEINDEXING":
rsp.add(ACTION_LABEL, actionDisableIndexing(params));
break;
default:
super.handleCustomAction(req, rsp);
break;
@@ -1388,8 +1399,6 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
* - toCalTx, optional: to ACL transaction Id to filter report results
*
* - report.core: multiple Objects with the details of the report ("core" is the name of the Core)
*
* @throws JSONException
*/
private NamedList<Object> actionREPORT(SolrParams params) throws JSONException
{
@@ -1444,16 +1453,30 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
* @return Response including the action result:
* - status: scheduled, as it will be executed by Trackers on the next maintenance operation
*/
private NamedList<Object> actionPURGE(SolrParams params)
NamedList<Object> actionPURGE(SolrParams params)
{
final NamedList<Object> response = new SimpleOrderedMap<>();
Consumer<String> purgeOnSpecificCore = coreName -> {
final MetadataTracker metadataTracker = trackerRegistry.getTrackerForCore(coreName, MetadataTracker.class);
final AclTracker aclTracker = trackerRegistry.getTrackerForCore(coreName, AclTracker.class);
apply(params, ARG_TXID, metadataTracker::addTransactionToPurge);
apply(params, ARG_ACLTXID, aclTracker::addAclChangeSetToPurge);
apply(params, ARG_NODEID, metadataTracker::addNodeToPurge);
apply(params, ARG_ACLID, aclTracker::addAclToPurge);
final NamedList<Object> coreResponse = new SimpleOrderedMap<>();
if (metadataTracker.isEnabled() && aclTracker.isEnabled())
{
apply(params, ARG_TXID, metadataTracker::addTransactionToPurge);
apply(params, ARG_ACLTXID, aclTracker::addAclChangeSetToPurge);
apply(params, ARG_NODEID, metadataTracker::addNodeToPurge);
apply(params, ARG_ACLID, aclTracker::addAclToPurge);
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_SCHEDULED);
}
else
{
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_NOT_SCHEDULED);
coreResponse.add(ADDITIONAL_INFO, "Trackers have been disabled: the purge request cannot be executed; please enable indexing and then resubmit this command.");
}
response.add(coreName, coreResponse);
};
String requestedCoreName = coreName(params);
@@ -1463,8 +1486,11 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
.filter(this::isMasterOrStandalone)
.forEach(purgeOnSpecificCore);
NamedList<Object> response = new SimpleOrderedMap<>();
response.add(ACTION_STATUS_LABEL, ACTION_STATUS_SCHEDULED);
if (response.size() == 0)
{
addAlertMessage(response);
}
return response;
}
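The guard introduced above (schedule maintenance work only while both trackers are enabled, otherwise report `notScheduled`) can be sketched in isolation. The `Tracker` class and the string map below are simplified, hypothetical stand-ins for the Alfresco tracker types and `NamedList`, not the real API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for the Alfresco tracker types (hypothetical, for illustration only).
class Tracker {
    private final boolean enabled;
    Tracker(boolean enabled) { this.enabled = enabled; }
    boolean isEnabled() { return enabled; }
}

public class PurgeGuardDemo {
    // Mirrors the per-core branch: schedule only when BOTH trackers are enabled.
    static Map<String, String> purgeResponse(Tracker metadataTracker, Tracker aclTracker) {
        Map<String, String> coreResponse = new LinkedHashMap<>();
        if (metadataTracker.isEnabled() && aclTracker.isEnabled()) {
            // ...enqueue txid / acltxid / nodeid / aclid maintenance work here...
            coreResponse.put("status", "scheduled");
        } else {
            coreResponse.put("status", "notScheduled");
            coreResponse.put("additionalInfo", "Trackers have been disabled");
        }
        return coreResponse;
    }

    public static void main(String[] args) {
        System.out.println(purgeResponse(new Tracker(true), new Tracker(true)).get("status"));
        System.out.println(purgeResponse(new Tracker(true), new Tracker(false)).get("status"));
    }
}
```

The same branch structure is reused by the REINDEX, RETRY and INDEX actions below.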
@@ -1484,18 +1510,31 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
* @return Response including the action result:
* - action.status: scheduled, as it will be executed by Trackers on the next maintenance operation
*/
private NamedList<Object> actionREINDEX(SolrParams params)
NamedList<Object> actionREINDEX(SolrParams params)
{
final NamedList<Object> response = new SimpleOrderedMap<>();
Consumer<String> reindexOnSpecificCore = coreName -> {
final MetadataTracker metadataTracker = trackerRegistry.getTrackerForCore(coreName, MetadataTracker.class);
final AclTracker aclTracker = trackerRegistry.getTrackerForCore(coreName, AclTracker.class);
final NamedList<Object> coreResponse = new SimpleOrderedMap<>();
apply(params, ARG_TXID, metadataTracker::addTransactionToReindex);
apply(params, ARG_ACLTXID, aclTracker::addAclChangeSetToReindex);
apply(params, ARG_NODEID, metadataTracker::addNodeToReindex);
apply(params, ARG_ACLID, aclTracker::addAclToReindex);
if (metadataTracker.isEnabled() && aclTracker.isEnabled())
{
apply(params, ARG_TXID, metadataTracker::addTransactionToReindex);
apply(params, ARG_ACLTXID, aclTracker::addAclChangeSetToReindex);
apply(params, ARG_NODEID, metadataTracker::addNodeToReindex);
apply(params, ARG_ACLID, aclTracker::addAclToReindex);
ofNullable(params.get(ARG_QUERY)).ifPresent(metadataTracker::addQueryToReindex);
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_SCHEDULED);
ofNullable(params.get(ARG_QUERY)).ifPresent(metadataTracker::addQueryToReindex);
}
else
{
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_NOT_SCHEDULED);
coreResponse.add(ADDITIONAL_INFO, "Trackers have been disabled: the REINDEX request cannot be executed; please enable indexing and then resubmit this command.");
}
response.add(coreName, coreResponse);
};
String requestedCoreName = coreName(params);
@@ -1505,8 +1544,11 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
.filter(this::isMasterOrStandalone)
.forEach(reindexOnSpecificCore);
NamedList<Object> response = new SimpleOrderedMap<>();
response.add(ACTION_STATUS_LABEL, ACTION_STATUS_SCHEDULED);
if (response.size() == 0)
{
addAlertMessage(response);
}
return response;
}
@@ -1520,35 +1562,41 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
* - action.status: scheduled, as it will be executed by Trackers on the next maintenance operation
* - core: list of Document Ids with errors that are going to be reindexed
*/
private NamedList<Object> actionRETRY(SolrParams params)
NamedList<Object> actionRETRY(SolrParams params)
{
NamedList<Object> response = new SimpleOrderedMap<>();
final Consumer<String> retryOnSpecificCore = coreName -> {
MetadataTracker tracker = trackerRegistry.getTrackerForCore(coreName, MetadataTracker.class);
InformationServer srv = informationServers.get(coreName);
final NamedList<Object> coreResponse = new SimpleOrderedMap<>();
try
if (tracker.isEnabled())
{
for (Long nodeid : srv.getErrorDocIds())
try
{
tracker.addNodeToReindex(nodeid);
for (Long nodeid : srv.getErrorDocIds())
{
tracker.addNodeToReindex(nodeid);
}
coreResponse.add("Error Nodes", srv.getErrorDocIds());
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_SCHEDULED);
} catch (Exception exception)
{
LOGGER.error("I/O Exception while adding Node to reindex.", exception);
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_ERROR);
coreResponse.add(ACTION_ERROR_MESSAGE_LABEL, exception.getMessage());
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_NOT_SCHEDULED);
}
response.add(coreName, srv.getErrorDocIds());
}
catch (Exception exception)
else
{
LOGGER.error("I/O Exception while adding Node to reindex.", exception);
response.add(ACTION_STATUS_LABEL, ACTION_STATUS_ERROR);
response.add(ACTION_ERROR_MESSAGE_LABEL, exception.getMessage());
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_NOT_SCHEDULED);
coreResponse.add(ADDITIONAL_INFO, "Trackers have been disabled: the RETRY request cannot be executed; please enable indexing and then resubmit this command.");
}
};
if (Objects.equals(response.get(ACTION_STATUS_LABEL), ACTION_STATUS_ERROR))
{
return response;
}
response.add(coreName, coreResponse);
};
String requestedCoreName = coreName(params);
@@ -1557,7 +1605,11 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
.filter(this::isMasterOrStandalone)
.forEach(retryOnSpecificCore);
response.add(ACTION_STATUS_LABEL, ACTION_STATUS_SCHEDULED);
if (response.size() == 0)
{
addAlertMessage(response);
}
return response;
}
@@ -1576,16 +1628,29 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
* @return Response including the action result:
* - action.status: scheduled, as it will be executed by Trackers on the next maintenance operation
*/
private NamedList<Object> actionINDEX(SolrParams params)
NamedList<Object> actionINDEX(SolrParams params)
{
final NamedList<Object> response = new SimpleOrderedMap<>();
Consumer<String> indexOnSpecificCore = coreName -> {
final MetadataTracker metadataTracker = trackerRegistry.getTrackerForCore(coreName, MetadataTracker.class);
final AclTracker aclTracker = trackerRegistry.getTrackerForCore(coreName, AclTracker.class);
final NamedList<Object> coreResponse = new SimpleOrderedMap<>();
apply(params, ARG_TXID, metadataTracker::addTransactionToIndex);
apply(params, ARG_ACLTXID, aclTracker::addAclChangeSetToIndex);
apply(params, ARG_NODEID, metadataTracker::addNodeToIndex);
apply(params, ARG_ACLID, aclTracker::addAclToIndex);
if (metadataTracker.isEnabled() && aclTracker.isEnabled())
{
apply(params, ARG_TXID, metadataTracker::addTransactionToIndex);
apply(params, ARG_ACLTXID, aclTracker::addAclChangeSetToIndex);
apply(params, ARG_NODEID, metadataTracker::addNodeToIndex);
apply(params, ARG_ACLID, aclTracker::addAclToIndex);
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_SCHEDULED);
}
else
{
coreResponse.add(ACTION_STATUS_LABEL, ACTION_STATUS_NOT_SCHEDULED);
coreResponse.add(ADDITIONAL_INFO, "Trackers have been disabled: the INDEX request cannot be executed; please enable indexing and then resubmit this command.");
}
response.add(coreName, coreResponse);
};
String requestedCoreName = coreName(params);
@@ -1595,11 +1660,24 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
.filter(this::isMasterOrStandalone)
.forEach(indexOnSpecificCore);
NamedList<Object> response = new SimpleOrderedMap<>();
response.add(ACTION_STATUS_LABEL, ACTION_STATUS_SCHEDULED);
if (response.size() == 0)
{
addAlertMessage(response);
}
return response;
}
NamedList<Object> actionDisableIndexing(SolrParams params) throws JSONException
{
return executeTrackerSubsystemLifecycleAction(params, this::disableIndexingOnSpecificCore);
}
NamedList<Object> actionEnableIndexing(SolrParams params) throws JSONException
{
return executeTrackerSubsystemLifecycleAction(params, this::enableIndexingOnSpecificCore);
}
/**
* Find transactions and acls missing or duplicated in the cores and
* add them to be reindexed on the next maintenance operation
@@ -1644,24 +1722,30 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
boolean dryRun = params.getBool(DRY_RUN_PARAMETER_NAME, true);
int maxTransactionsToSchedule = getMaxTransactionToSchedule(params);
MetadataTracker metadataTracker = trackerRegistry.getTrackerForCore(requestedCoreName, MetadataTracker.class);
AclTracker aclTracker = trackerRegistry.getTrackerForCore(requestedCoreName, AclTracker.class);
final boolean actualDryRun = dryRun || (metadataTracker == null || metadataTracker.isDisabled()) || (aclTracker == null || aclTracker.isDisabled());
LOGGER.debug("FIX Admin request on core {}, parameters: " +
FROM_TX_COMMIT_TIME_PARAMETER_NAME + " = {}, " +
TO_TX_COMMIT_TIME_PARAMETER_NAME + " = {}, " +
DRY_RUN_PARAMETER_NAME + " = {}, " +
MAX_TRANSACTIONS_TO_SCHEDULE_PARAMETER_NAME + " = {}",
"actualDryRun = {} " +
MAX_TRANSACTIONS_TO_SCHEDULE_PARAMETER_NAME + " = {}",
requestedCoreName,
ofNullable(fromTxCommitTime).map(Object::toString).orElse("N.A."),
ofNullable(toTxCommitTime).map(Object::toString).orElse("N.A."),
dryRun,
actualDryRun,
maxTransactionsToSchedule);
coreNames().stream()
.filter(coreName -> requestedCoreName == null || coreName.equals(requestedCoreName))
.filter(coreName -> coreName.equals(requestedCoreName))
.filter(this::isMasterOrStandalone)
.forEach(coreName ->
wrapper.response.add(
coreName,
fixOnSpecificCore(coreName, fromTxCommitTime, toTxCommitTime, dryRun, maxTransactionsToSchedule)));
fixOnSpecificCore(coreName, fromTxCommitTime, toTxCommitTime, actualDryRun, maxTransactionsToSchedule)));
if (wrapper.response.size() > 0)
{
@@ -1671,7 +1755,14 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
ofNullable(toTxCommitTime).ifPresent(value -> wrapper.response.add(TO_TX_COMMIT_TIME_PARAMETER_NAME, value));
wrapper.response.add(MAX_TRANSACTIONS_TO_SCHEDULE_PARAMETER_NAME, maxTransactionsToSchedule);
wrapper.response.add(ACTION_STATUS_LABEL, dryRun ? ACTION_STATUS_NOT_SCHEDULED : ACTION_STATUS_SCHEDULED);
wrapper.response.add(ACTION_STATUS_LABEL, actualDryRun ? ACTION_STATUS_NOT_SCHEDULED : ACTION_STATUS_SCHEDULED);
// The user wanted a real execution (dryRun = false) but the trackers are disabled:
// add a message to the response to inform the user that nothing was scheduled (i.e. a dryRun was forced).
if (!dryRun && actualDryRun)
{
wrapper.response.add(ADDITIONAL_INFO, "Trackers are disabled: a dry run (dryRun = true) has been forced; as a consequence, nothing has been scheduled.");
}
}
return wrapper.response;
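The `actualDryRun` computation reduces to a small boolean rule; a minimal sketch (the method and parameter names below are illustrative, not the handler's API):

```java
public class DryRunDemo {
    // A real (non-dry) execution is only allowed when the user asked for one
    // AND both the metadata and the ACL tracker are enabled; otherwise a dry run is forced.
    static boolean actualDryRun(boolean requestedDryRun, boolean metadataEnabled, boolean aclEnabled) {
        return requestedDryRun || !metadataEnabled || !aclEnabled;
    }

    public static void main(String[] args) {
        System.out.println(actualDryRun(false, true, true));   // false: real execution proceeds
        System.out.println(actualDryRun(false, false, true));  // true: dry run forced, info message added
        System.out.println(actualDryRun(true, true, true));    // true: user-requested dry run
    }
}
```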
@@ -1701,6 +1792,8 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
try
{
MetadataTracker metadataTracker = trackerRegistry.getTrackerForCore(coreName, MetadataTracker.class);
AclTracker aclTracker = trackerRegistry.getTrackerForCore(coreName, AclTracker.class);
final IndexHealthReport metadataTrackerIndexHealthReport =
metadataTracker.checkIndex(null, fromTxCommitTime, toTxCommitTime);
@@ -1715,7 +1808,6 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
dryRun,
maxTransactionsToSchedule);
AclTracker aclTracker = trackerRegistry.getTrackerForCore(coreName, AclTracker.class);
final IndexHealthReport aclTrackerIndexHealthReport =
aclTracker.checkIndex(null, fromTxCommitTime, toTxCommitTime);
@@ -2009,11 +2101,9 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
* @param coreName the owning core name.
* @return the component which is in charge to publish the core state.
*/
AbstractShardInformationPublisher coreStatePublisher(String coreName)
ShardStatePublisher coreStatePublisher(String coreName)
{
return ofNullable(trackerRegistry.getTrackerForCore(coreName, MetadataTracker.class))
.map(AbstractShardInformationPublisher.class::cast)
.orElse(trackerRegistry.getTrackerForCore(coreName, NodeStatePublisher.class));
return trackerRegistry.getTrackerForCore(coreName, ShardStatePublisher.class);
}
/**
@@ -2036,7 +2126,7 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
private void addAlertMessage(NamedList<Object> report)
{
report.add(
"WARNING",
WARNING,
"The requested endpoint is not available on the slave. " +
"Please re-submit the same request to the corresponding Master");
}
@@ -2084,4 +2174,62 @@ public class AlfrescoCoreAdminHandler extends CoreAdminHandler
.map(Integer::parseInt)
.orElse(Integer.MAX_VALUE)); // Last fallback if we don't have a request param and a value in configuration
}
NamedList<Object> disableIndexingOnSpecificCore(String coreName) {
final NamedList<Object> coreResponse = new SimpleOrderedMap<>();
trackerRegistry.getTrackersForCore(coreName)
.stream()
.filter(tracker -> tracker instanceof ActivatableTracker)
.map(ActivatableTracker.class::cast)
.peek(ActivatableTracker::disable)
.forEach(tracker -> coreResponse.add(tracker.getType().toString(), tracker.isEnabled()));
return coreResponse;
}
NamedList<Object> enableIndexingOnSpecificCore(String coreName) {
final NamedList<Object> coreResponse = new SimpleOrderedMap<>();
trackerRegistry.getTrackersForCore(coreName)
.stream()
.filter(tracker -> tracker instanceof ActivatableTracker)
.map(ActivatableTracker.class::cast)
.peek(ActivatableTracker::enable)
.forEach(tracker -> coreResponse.add(tracker.getType().toString(), tracker.isEnabled()));
return coreResponse;
}
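The two methods above share the same stream pipeline: filter the activatable trackers, flip their state as a side effect, then report the resulting enabled flag per tracker type. A self-contained sketch under simplified, hypothetical types (`CoreTracker` and `ActivatableTrackerStub` are stand-ins, not the Alfresco classes):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-ins for the tracker hierarchy, for illustration only.
interface CoreTracker { String type(); }

class ActivatableTrackerStub implements CoreTracker {
    private final String type;
    private boolean enabled = true;
    ActivatableTrackerStub(String type) { this.type = type; }
    public String type() { return type; }
    void disable() { enabled = false; }
    boolean isEnabled() { return enabled; }
}

public class DisableDemo {
    // Filter the activatable trackers, disable each one, and report its new state.
    static Map<String, Boolean> disableAll(List<CoreTracker> trackers) {
        Map<String, Boolean> response = new LinkedHashMap<>();
        trackers.stream()
                .filter(tracker -> tracker instanceof ActivatableTrackerStub)
                .map(ActivatableTrackerStub.class::cast)
                .peek(ActivatableTrackerStub::disable)   // side effect, then report the new state
                .forEach(tracker -> response.put(tracker.type(), tracker.isEnabled()));
        return response;
    }

    public static void main(String[] args) {
        List<CoreTracker> trackers = List.<CoreTracker>of(
                new ActivatableTrackerStub("MetadataTracker"),
                new ActivatableTrackerStub("AclTracker"));
        System.out.println(disableAll(trackers));
    }
}
```

Note the use of `peek` purely for its side effect: it works here because the terminal `forEach` guarantees every element is consumed, but moving the `disable` call into the `forEach` would be an equally valid design.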
/**
* Internal method used for executing the enable/disable indexing/tracking action.
*
* @param params the input request parameters. The only mandatory parameter is the core name
* @param action this can be the "enable" or the "disable" action: it is an "impure" function which takes a core name,
*               executes the enable/disable logic as a side effect, and returns the action response.
* @return the action response indicating the result of the enable/disable command on a specific core.
* @see #CORE_PARAMETER_NAMES
*/
private NamedList<Object> executeTrackerSubsystemLifecycleAction(SolrParams params, Function<String, NamedList<Object>> action) throws JSONException
{
String requestedCoreName = coreName(params);
final NamedList<Object> response = new SimpleOrderedMap<>();
if (isNotNullAndNotEmpty(requestedCoreName))
{
if (!coreNames().contains(requestedCoreName))
{
response.add(ACTION_ERROR_MESSAGE_LABEL, UNKNOWN_CORE_MESSAGE + requestedCoreName);
return response;
}
if (!isMasterOrStandalone(requestedCoreName)) {
response.add(ACTION_ERROR_MESSAGE_LABEL, UNPROCESSABLE_REQUEST_ON_SLAVE_NODES);
return response;
}
}
coreNames().stream()
.filter(coreName -> requestedCoreName == null || coreName.equals(requestedCoreName))
.filter(this::isMasterOrStandalone)
.forEach(coreName -> response.add(coreName, action.apply(coreName)));
return response;
}
}


@@ -117,6 +117,8 @@ import org.springframework.context.support.FileSystemXmlApplicationContext;
import static java.util.Optional.ofNullable;
import static org.alfresco.solr.SolrInformationServer.UNIT_OF_TIME_DAY_FIELD_SUFFIX;
import static org.alfresco.solr.SolrInformationServer.UNIT_OF_TIME_DAY_OF_WEEK_FIELD_SUFFIX;
import static org.alfresco.solr.SolrInformationServer.UNIT_OF_TIME_DAY_OF_YEAR_FIELD_SUFFIX;
import static org.alfresco.solr.SolrInformationServer.UNIT_OF_TIME_HOUR_FIELD_SUFFIX;
import static org.alfresco.solr.SolrInformationServer.UNIT_OF_TIME_MINUTE_FIELD_SUFFIX;
import static org.alfresco.solr.SolrInformationServer.UNIT_OF_TIME_MONTH_FIELD_SUFFIX;
@@ -171,6 +173,8 @@ public class AlfrescoSolrDataModel implements QueryConstants
UNIT_OF_TIME_MINUTE,
UNIT_OF_TIME_HOUR,
UNIT_OF_TIME_DAY,
UNIT_OF_TIME_DAY_OF_WEEK,
UNIT_OF_TIME_DAY_OF_YEAR,
UNIT_OF_TIME_MONTH,
UNIT_OF_TIME_QUARTER,
UNIT_OF_TIME_YEAR
@@ -184,6 +188,8 @@ public class AlfrescoSolrDataModel implements QueryConstants
UNIT_OF_TIME_QUARTER_FIELD_SUFFIX,
UNIT_OF_TIME_MONTH_FIELD_SUFFIX,
UNIT_OF_TIME_DAY_FIELD_SUFFIX,
UNIT_OF_TIME_DAY_OF_WEEK_FIELD_SUFFIX,
UNIT_OF_TIME_DAY_OF_YEAR_FIELD_SUFFIX,
UNIT_OF_TIME_HOUR_FIELD_SUFFIX,
UNIT_OF_TIME_MINUTE_FIELD_SUFFIX,
UNIT_OF_TIME_SECOND_FIELD_SUFFIX);
@@ -1133,6 +1139,10 @@ public class AlfrescoSolrDataModel implements QueryConstants
return UNIT_OF_TIME_HOUR_FIELD_SUFFIX;
case UNIT_OF_TIME_DAY:
return UNIT_OF_TIME_DAY_FIELD_SUFFIX;
case UNIT_OF_TIME_DAY_OF_WEEK:
return UNIT_OF_TIME_DAY_OF_WEEK_FIELD_SUFFIX;
case UNIT_OF_TIME_DAY_OF_YEAR:
return UNIT_OF_TIME_DAY_OF_YEAR_FIELD_SUFFIX;
case UNIT_OF_TIME_MONTH:
return UNIT_OF_TIME_MONTH_FIELD_SUFFIX;
case UNIT_OF_TIME_QUARTER:
@@ -1795,6 +1805,10 @@ public class AlfrescoSolrDataModel implements QueryConstants
return SpecializedFieldType.UNIT_OF_TIME_HOUR;
case UNIT_OF_TIME_DAY_FIELD_SUFFIX:
return SpecializedFieldType.UNIT_OF_TIME_DAY;
case UNIT_OF_TIME_DAY_OF_WEEK_FIELD_SUFFIX:
return SpecializedFieldType.UNIT_OF_TIME_DAY_OF_WEEK;
case UNIT_OF_TIME_DAY_OF_YEAR_FIELD_SUFFIX:
return SpecializedFieldType.UNIT_OF_TIME_DAY_OF_YEAR;
case UNIT_OF_TIME_MONTH_FIELD_SUFFIX:
return SpecializedFieldType.UNIT_OF_TIME_MONTH;
case UNIT_OF_TIME_QUARTER_FIELD_SUFFIX:


@@ -125,7 +125,7 @@ class HandlerReportHelper
return nr;
}
static NamedList<Object> buildNodeReport(AbstractShardInformationPublisher publisher, Long dbid) throws JSONException
static NamedList<Object> buildNodeReport(ShardStatePublisher publisher, Long dbid) throws JSONException
{
NodeReport nodeReport = publisher.checkNode(dbid);
@@ -159,6 +159,7 @@ class HandlerReportHelper
AclTracker aclTracker = trackerRegistry.getTrackerForCore(coreName, AclTracker.class);
IndexHealthReport aclReport = aclTracker.checkIndex(toAclTx, fromTime, toTime);
NamedList<Object> ihr = new SimpleOrderedMap<>();
ihr.add("ACL Tracker", (aclTracker.isEnabled() ? "enabled" : "disabled"));
ihr.add("DB acl transaction count", aclReport.getDbAclTransactionCount());
ihr.add("Count of duplicated acl transactions in the index", aclReport.getDuplicatedAclTxInIndex()
.cardinality());
@@ -188,6 +189,7 @@ class HandlerReportHelper
// Metadata
MetadataTracker metadataTracker = trackerRegistry.getTrackerForCore(coreName, MetadataTracker.class);
IndexHealthReport metaReport = metadataTracker.checkIndex(toTx, fromTime, toTime);
ihr.add("Metadata Tracker", (metadataTracker.isEnabled() ? "enabled" : "disabled"));
ihr.add("DB transaction count", metaReport.getDbTransactionCount());
ihr.add("Count of duplicated transactions in the index", metaReport.getDuplicatedTxInIndex()
.cardinality());
@@ -248,7 +250,7 @@ class HandlerReportHelper
NamedList<Object> coreSummary = new SimpleOrderedMap<>();
coreSummary.addAll((SimpleOrderedMap<Object>) srv.getCoreStats());
NodeStatePublisher statePublisher = trackerRegistry.getTrackerForCore(cname, NodeStatePublisher.class);
ShardStatePublisher statePublisher = trackerRegistry.getTrackerForCore(cname, ShardStatePublisher.class);
TrackerState trackerState = statePublisher.getTrackerState();
long lastIndexTxCommitTime = trackerState.getLastIndexedTxCommitTime();
@@ -429,17 +431,12 @@ class HandlerReportHelper
long remainingContentTimeMillis = 0;
srv.addContentOutdatedAndUpdatedCounts(ftsSummary);
long cleanCount =
ofNullable(ftsSummary.get("Node count with FTSStatus Clean"))
ofNullable(ftsSummary.get("Node count whose content is in sync"))
.map(Number.class::cast)
.map(Number::longValue)
.orElse(0L);
long dirtyCount =
ofNullable(ftsSummary.get("Node count with FTSStatus Dirty"))
.map(Number.class::cast)
.map(Number::longValue)
.orElse(0L);
long newCount =
ofNullable(ftsSummary.get("Node count with FTSStatus New"))
ofNullable(ftsSummary.get("Node count whose content needs to be updated"))
.map(Number.class::cast)
.map(Number::longValue)
.orElse(0L);
@@ -450,12 +447,14 @@ class HandlerReportHelper
.map(Number::longValue)
.orElse(0L);
long contentYetToSee = nodesInIndex > 0 ? nodesToDo * (cleanCount + dirtyCount + newCount)/nodesInIndex : 0;
if (dirtyCount + newCount + contentYetToSee > 0)
long contentYetToSee = nodesInIndex > 0 ? nodesToDo * (cleanCount + dirtyCount)/nodesInIndex : 0;
if (dirtyCount + contentYetToSee > 0)
{
// We now use the elapsed time as seen by the single thread farming out content indexing
double meanContentElapsedIndexTime = srv.getTrackerStats().getMeanContentElapsedIndexTime();
remainingContentTimeMillis = (long) ((dirtyCount + newCount + contentYetToSee) * meanContentElapsedIndexTime);
remainingContentTimeMillis = (long) ((dirtyCount + contentYetToSee) * meanContentElapsedIndexTime);
}
now = new Date();
end = new Date(now.getTime() + remainingContentTimeMillis);
@@ -485,6 +484,8 @@ class HandlerReportHelper
}
ContentTracker contentTrkr = trackerRegistry.getTrackerForCore(cname, ContentTracker.class);
CascadeTracker cascadeTracker = trackerRegistry.getTrackerForCore(cname, CascadeTracker.class);
TrackerState contentTrkrState = contentTrkr.getTrackerState();
// Leave ModelTracker out of this check, because it is common
boolean aTrackerIsRunning = aclTrkrState.isRunning() || metadataTrkrState.isRunning()
@@ -498,6 +499,11 @@ class HandlerReportHelper
coreSummary.add("MetadataTracker Active", metadataTrkrState.isRunning());
coreSummary.add("AclTracker Active", aclTrkrState.isRunning());
coreSummary.add("ContentTracker Enabled", contentTrkr.isEnabled());
coreSummary.add("MetadataTracker Enabled", metaTrkr.isEnabled());
coreSummary.add("AclTracker Enabled", aclTrkr.isEnabled());
coreSummary.add("CascadeTracker Enabled", cascadeTracker.isEnabled());
// TX
coreSummary.add("Last Index TX Commit Time", lastIndexTxCommitTime);


@@ -384,7 +384,9 @@ public class SolrInformationServer implements InformationServer
public static final String UNIT_OF_TIME_YEAR_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_year";
public static final String UNIT_OF_TIME_QUARTER_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_quarter";
public static final String UNIT_OF_TIME_MONTH_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_month";
public static final String UNIT_OF_TIME_DAY_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_day";
public static final String UNIT_OF_TIME_DAY_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_day_of_month";
public static final String UNIT_OF_TIME_DAY_OF_WEEK_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_day_of_week";
public static final String UNIT_OF_TIME_DAY_OF_YEAR_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_day_of_year";
public static final String UNIT_OF_TIME_HOUR_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_hour";
public static final String UNIT_OF_TIME_MINUTE_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_minute";
public static final String UNIT_OF_TIME_SECOND_FIELD_SUFFIX = UNIT_OF_TIME_FIELD_INFIX + "_second";
@@ -3832,11 +3834,13 @@ public class SolrInformationServer implements InformationServer
{
String fieldNamePrefix = dataModel.destructuredDateTimePartFieldNamePrefix(sourceFieldName);
ZonedDateTime dateTime = ZonedDateTime.parse(value, DateTimeFormatter.ISO_ZONED_DATE_TIME);
consumer.accept(fieldNamePrefix + UNIT_OF_TIME_YEAR_FIELD_SUFFIX, dateTime.getYear());
consumer.accept(fieldNamePrefix + UNIT_OF_TIME_QUARTER_FIELD_SUFFIX, dateTime.get(IsoFields.QUARTER_OF_YEAR));
consumer.accept(fieldNamePrefix + UNIT_OF_TIME_MONTH_FIELD_SUFFIX, dateTime.getMonth().getValue());
consumer.accept(fieldNamePrefix + UNIT_OF_TIME_DAY_FIELD_SUFFIX, dateTime.getDayOfMonth());
consumer.accept(fieldNamePrefix + UNIT_OF_TIME_DAY_OF_WEEK_FIELD_SUFFIX, dateTime.getDayOfWeek().getValue());
consumer.accept(fieldNamePrefix + UNIT_OF_TIME_DAY_OF_YEAR_FIELD_SUFFIX, dateTime.getDayOfYear());
if (DataTypeDefinition.DATETIME.equals(dataType.getName()) && isTimeComponentDefined(value))
{
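The destructuring above maps directly onto `java.time` accessors, including the two new day-of-week and day-of-year parts; a standalone sketch (the sample timestamp is arbitrary):

```java
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.IsoFields;

public class DatePartsDemo {
    public static void main(String[] args) {
        ZonedDateTime dateTime =
                ZonedDateTime.parse("2020-08-13T17:28:41Z", DateTimeFormatter.ISO_ZONED_DATE_TIME);
        System.out.println(dateTime.getYear());                      // 2020
        System.out.println(dateTime.get(IsoFields.QUARTER_OF_YEAR)); // 3
        System.out.println(dateTime.getMonth().getValue());          // 8
        System.out.println(dateTime.getDayOfMonth());                // 13 -> "_day_of_month" suffix
        System.out.println(dateTime.getDayOfWeek().getValue());      // 4 (Thursday) -> "_day_of_week" suffix
        System.out.println(dateTime.getDayOfYear());                 // 226 -> "_day_of_year" suffix
    }
}
```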


@@ -102,7 +102,7 @@ public class RewriteFieldListComponent extends SearchComponent {
{
fieldListSet.add("*");
}
else
else if (solrReturnFields.getLuceneFieldNames() != null)
{
fieldListSet.addAll(solrReturnFields.getLuceneFieldNames().stream()
.map( field -> AlfrescoSolrDataModel.getInstance()


@@ -51,7 +51,7 @@ import org.alfresco.solr.tracker.CommitTracker;
import org.alfresco.solr.tracker.ContentTracker;
import org.alfresco.solr.tracker.MetadataTracker;
import org.alfresco.solr.tracker.ModelTracker;
import org.alfresco.solr.tracker.NodeStatePublisher;
import org.alfresco.solr.tracker.ShardStatePublisher;
import org.alfresco.solr.tracker.SolrTrackerScheduler;
import org.alfresco.solr.tracker.Tracker;
import org.alfresco.solr.tracker.TrackerRegistry;
@@ -190,7 +190,7 @@ public class SolrCoreLoadListener extends AbstractSolrEventListener
{
LOGGER.info("SearchServices Core Trackers have been explicitly disabled on core \"{}\" through \"enable.alfresco.tracking\" configuration property.", core.getName());
NodeStatePublisher statePublisher = new NodeStatePublisher(false, coreProperties, repositoryClient, core.getName(), informationServer);
ShardStatePublisher statePublisher = new ShardStatePublisher(false, coreProperties, repositoryClient, core.getName(), informationServer);
trackerRegistry.register(core.getName(), statePublisher);
scheduler.schedule(statePublisher, core.getName(), coreProperties);
trackers.add(statePublisher);
@@ -205,7 +205,7 @@ public class SolrCoreLoadListener extends AbstractSolrEventListener
{
LOGGER.info("SearchServices Core Trackers have been disabled on core \"{}\" because it is a slave core.", core.getName());
NodeStatePublisher statePublisher = new NodeStatePublisher(false, coreProperties, repositoryClient, core.getName(), informationServer);
ShardStatePublisher statePublisher = new ShardStatePublisher(false, coreProperties, repositoryClient, core.getName(), informationServer);
trackerRegistry.register(core.getName(), statePublisher);
scheduler.schedule(statePublisher, core.getName(), coreProperties);
trackers.add(statePublisher);
@@ -264,9 +264,9 @@ public class SolrCoreLoadListener extends AbstractSolrEventListener
trackerRegistry,
scheduler);
NodeStatePublisher coreStateTracker =
ShardStatePublisher coreStateTracker =
registerAndSchedule(
new NodeStatePublisher(true, props, repositoryClient, core.getName(), srv),
new ShardStatePublisher(true, props, repositoryClient, core.getName(), srv),
core,
props,
trackerRegistry,
@@ -288,6 +288,7 @@ public class SolrCoreLoadListener extends AbstractSolrEventListener
trackers.add(cascadeTracker);
}
//The CommitTracker will acquire these locks in order
//The ContentTracker will likely have the longest runs so put it first to ensure the MetadataTracker is not paused while
//waiting for the ContentTracker to release its lock.


@@ -29,19 +29,29 @@ package org.alfresco.solr.tracker;
import static java.util.Optional.ofNullable;
import static org.alfresco.repo.index.shard.ShardMethodEnum.DB_ID;
import static org.alfresco.solr.tracker.DocRouterFactory.SHARD_KEY_KEY;
import java.net.ConnectException;
import java.net.SocketTimeoutException;
import java.util.Optional;
import java.util.Properties;
import java.util.concurrent.Semaphore;
import java.util.function.Consumer;
import org.alfresco.opencmis.dictionary.CMISStrictDictionaryService;
import org.alfresco.repo.dictionary.NamespaceDAO;
import org.alfresco.repo.index.shard.ShardMethodEnum;
import org.alfresco.repo.search.impl.QueryParserUtils;
import org.alfresco.service.cmr.dictionary.DictionaryService;
import org.alfresco.service.cmr.dictionary.PropertyDefinition;
import org.alfresco.service.cmr.repository.StoreRef;
import org.alfresco.service.namespace.QName;
import org.alfresco.solr.AlfrescoSolrDataModel;
import org.alfresco.solr.IndexTrackingShutdownException;
import org.alfresco.solr.InformationServer;
import org.alfresco.solr.NodeReport;
import org.alfresco.solr.TrackerState;
import org.alfresco.solr.client.SOLRAPIClient;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -81,6 +91,24 @@ public abstract class AbstractTracker implements Tracker
protected final Type type;
protected final String trackerId;
DocRouter docRouter;
/**
* The property to use for determining the shard.
* Note that this property is not used by all trackers; it is actually managed by the {@link ShardStatePublisher} and
* {@link MetadataTracker}. It lives here because otherwise we would need to introduce another supertype layer
* for those two trackers.
*/
protected Optional<QName> shardProperty = Optional.empty();
/**
* The string representation of the shard key.
* Note that this property is not used by all trackers; it is actually managed by the {@link ShardStatePublisher} and
* {@link MetadataTracker}. It lives here because otherwise we would need to introduce another supertype layer
* for those two trackers.
*/
protected Optional<String> shardKey;
/**
* Default constructor, strictly for testing.
*/
@@ -114,9 +142,14 @@ public abstract class AbstractTracker implements Tracker
this.type = type;
this.trackerId = type + "@" + hashCode();
shardKey = ofNullable(p.getProperty(SHARD_KEY_KEY));
firstUpdateShardProperty();
docRouter = DocRouterFactory.getRouter(p, shardMethod);
}
/**
* Subclasses must implement behaviour that completes the following steps, in order:
*
@@ -352,4 +385,88 @@ public abstract class AbstractTracker implements Tracker
{
return type;
}
/**
* Set the shard property using the shard key.
*/
void updateShardProperty()
{
shardKey.ifPresent(shardKeyName -> {
Optional<QName> updatedShardProperty = getShardProperty(shardKeyName);
if (!shardProperty.equals(updatedShardProperty))
{
if (updatedShardProperty.isEmpty())
{
LOGGER.warn("The model defining the {} property has been disabled", shardKeyName);
}
else
{
LOGGER.info("New {} property found for {}", SHARD_KEY_KEY, shardKeyName);
}
}
shardProperty = updatedShardProperty;
});
}
/**
* Given the field name, returns the name of the property definition.
* If the property definition is not found, an empty Optional is returned.
*
* @param field the field name.
* @return the name of the associated property definition if present, Optional.empty() otherwise
*/
static Optional<QName> getShardProperty(String field)
{
if (StringUtils.isBlank(field))
{
throw new IllegalArgumentException("Sharding property " + SHARD_KEY_KEY + " has not been set.");
}
AlfrescoSolrDataModel dataModel = AlfrescoSolrDataModel.getInstance();
NamespaceDAO namespaceDAO = dataModel.getNamespaceDAO();
DictionaryService dictionaryService = dataModel.getDictionaryService(CMISStrictDictionaryService.DEFAULT);
PropertyDefinition propertyDef = QueryParserUtils.matchPropertyDefinition("http://www.alfresco.org/model/content/1.0",
namespaceDAO,
dictionaryService,
field);
return ofNullable(propertyDef).map(PropertyDefinition::getName);
}
/**
* Returns information about the {@link org.alfresco.solr.client.Node} associated with the given dbid.
*
* @param dbid the node identifier.
* @return the {@link org.alfresco.solr.client.Node} associated with the given dbid.
*/
public NodeReport checkNode(Long dbid)
{
NodeReport nodeReport = new NodeReport();
nodeReport.setDbid(dbid);
this.infoSrv.addCommonNodeReportInfo(nodeReport);
return nodeReport;
}
/**
* Returns the {@link DocRouter} instance in use on this node.
*
* @return the {@link DocRouter} instance in use on this node.
*/
public DocRouter getDocRouter()
{
return this.docRouter;
}
private void firstUpdateShardProperty()
{
shardKey.ifPresent( shardKeyName -> {
updateShardProperty();
if (shardProperty.isEmpty())
{
LOGGER.warn("Sharding property {} was set to {}, but no such property was found.", SHARD_KEY_KEY, shardKeyName);
}
});
}
}
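The shardKey-to-shardProperty resolution above can be sketched in isolation. The following is a minimal, hypothetical stand-in: the Alfresco dictionary lookup (`AlfrescoSolrDataModel` / `QueryParserUtils`) is replaced by a plain `Map`, and the returned `QName` by a `String`.

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical stand-in for the dictionary-backed getShardProperty(..) above:
// a plain Map plays the role of the Alfresco property dictionary.
public class ShardPropertySketch
{
    static final Map<String, String> DICTIONARY =
            Map.of("cm:created", "{http://www.alfresco.org/model/content/1.0}created");

    static Optional<String> getShardProperty(String field)
    {
        if (field == null || field.isBlank())
        {
            throw new IllegalArgumentException("Sharding property has not been set.");
        }
        // As in the real method: an unknown field yields an empty Optional,
        // which the caller logs as a warning rather than failing hard.
        return Optional.ofNullable(DICTIONARY.get(field));
    }

    public static void main(String[] args)
    {
        System.out.println(getShardProperty("cm:created").orElse("<no such property>"));
        System.out.println(getShardProperty("cm:unknown").orElse("<no such property>"));
    }
}
```

The empty-Optional return (instead of an exception) is what lets `updateShardProperty` detect a model being disabled at runtime and merely log the change.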


@@ -67,7 +67,7 @@ import org.slf4j.LoggerFactory;
* @author Matt Ward
**/
public class AclTracker extends AbstractTracker
public class AclTracker extends ActivatableTracker
{
protected final static Logger LOGGER = LoggerFactory.getLogger(AclTracker.class);
@@ -363,6 +363,19 @@ public class AclTracker extends AbstractTracker
aclsToPurge.offer(aclToPurge);
}
@Override
protected void clearScheduledMaintenanceWork()
{
logAndClear(aclChangeSetsToIndex, "ACL ChangeSets to be indexed");
logAndClear(aclsToIndex, "ACLs to be indexed");
logAndClear(aclChangeSetsToReindex, "ACL ChangeSets to be re-indexed");
logAndClear(aclsToReindex, "ACLs to be re-indexed");
logAndClear(aclChangeSetsToPurge, "ACL ChangeSets to be purged");
logAndClear(aclsToPurge, "ACLs to be purged");
}
protected void trackRepository() throws IOException, AuthenticationException, JSONException
{
checkShutdown();


@@ -0,0 +1,143 @@
/*
* #%L
* Alfresco Search Services
* %%
* Copyright (C) 2005 - 2020 Alfresco Software Limited
* %%
* This file is part of the Alfresco software.
* If the software was purchased under a paid Alfresco license, the terms of
* the paid license agreement will prevail. Otherwise, the software is
* provided under the following open source license terms:
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
* #L%
*/
package org.alfresco.solr.tracker;
import org.alfresco.solr.InformationServer;
import org.alfresco.solr.client.SOLRAPIClient;
import org.apache.solr.core.CoreDescriptorDecorator;
import org.apache.solr.core.SolrCore;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;
/**
* Supertype layer for trackers that can be enabled/disabled.
*/
public abstract class ActivatableTracker extends AbstractTracker
{
private static final Logger LOGGER = LoggerFactory.getLogger(ActivatableTracker.class);
protected static AtomicBoolean isEnabled = new AtomicBoolean(true);
protected ActivatableTracker(Type type)
{
super(type);
}
protected ActivatableTracker(Properties properties, SOLRAPIClient client, String coreName, InformationServer informationServer, Type type)
{
super(properties, client, coreName, informationServer, type);
if (isEnabled.get())
{
LOGGER.info("[{} / {} / {}] {} Tracker set to enabled at startup.", coreName, trackerId, state, type);
}
else
{
LOGGER.info("[{} / {} / {}] {} Tracker set to disabled at startup.", coreName, trackerId, state, type);
}
}
/**
* Disables this tracker instance.
*/
public final void disable()
{
clearScheduledMaintenanceWork();
if (isEnabled.compareAndSet(true, false))
{
if (state != null && state.isRunning())
{
setRollback(true, null);
}
}
LOGGER.info("[{} / {} / {}] {} Tracker has been disabled.", coreName, trackerId, state, type);
}
/**
* Enables this tracker instance.
*/
public final void enable()
{
isEnabled.set(true);
LOGGER.info("[{} / {} / {}] {} Tracker has been enabled", coreName, trackerId, state, type);
}
@Override
public void track()
{
if (isEnabled())
{
super.track();
}
else
{
LOGGER.trace("[{} / {} / {}] {} Tracker is disabled; tracking has been intentionally switched off on this core.", coreName, trackerId, state, type);
}
}
public boolean isEnabled()
{
return isEnabled.get();
}
public boolean isDisabled()
{
return !isEnabled();
}
/**
* Cleans up the scheduled maintenance work collected by this tracker.
*/
protected void clearScheduledMaintenanceWork()
{
// Default behaviour is: do nothing
}
/**
* Logs the content of the input collection and then clears it.
*
* @param values the collection (if not empty) containing the identifiers (e.g. txid, aclid) the system
* is going to clear.
* @param kind the kind of identifier (e.g. Transaction, Node ID, ACL ID) in the input collection.
*/
protected void logAndClear(Collection<Long> values, String kind)
{
if (values == null || values.isEmpty())
{
    return;
}
final List<Long> snapshot = new ArrayList<>(values);
values.clear();
LOGGER.info("[CORE {}] Scheduled work ({}) that will be cleaned: {}", coreName, kind, snapshot);
}
}
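The enable/disable gate above boils down to an `AtomicBoolean` shared across trackers. Here is a minimal sketch with simplified names and no Solr types (an illustration, not the real class):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the ActivatableTracker gate: a shared flag that makes track()
// a no-op while indexing is disabled.
public class ActivatableSketch
{
    private static final AtomicBoolean ENABLED = new AtomicBoolean(true);

    static boolean disable()
    {
        // compareAndSet returns true only for the call that actually flips
        // the flag, so one-shot work (e.g. requesting a rollback) runs once.
        return ENABLED.compareAndSet(true, false);
    }

    static void enable()
    {
        ENABLED.set(true);
    }

    static String track()
    {
        return ENABLED.get() ? "tracked" : "skipped";
    }

    public static void main(String[] args)
    {
        System.out.println(track());   // tracked
        disable();
        System.out.println(track());   // skipped
        enable();
        System.out.println(track());   // tracked
    }
}
```

Note that in the real class the flag is `static`, so disabling indexing affects every activatable tracker in the JVM, not just one instance.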


@@ -52,14 +52,11 @@ import org.json.JSONException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static java.util.stream.Collectors.joining;
import static org.alfresco.solr.utils.Utils.notNullOrEmpty;
/*
* This tracks Cascading Updates
* @author Joel Bernstein
*/
public class CascadeTracker extends AbstractTracker implements Tracker
public class CascadeTracker extends ActivatableTracker
{
protected final static Logger LOGGER = LoggerFactory.getLogger(CascadeTracker.class);


@@ -112,12 +112,13 @@ public class CommitTracker extends AbstractTracker
WRITE_LOCK_BY_CORE.put(coreName, new Semaphore(1, true));
}
public boolean hasMaintenance() throws Exception
public boolean hasMaintenance()
{
return (metadataTracker.hasMaintenance() || aclTracker.hasMaintenance());
}
public int getRollbackCount() {
public int getRollbackCount()
{
return rollbackCount.get();
}
@@ -134,7 +135,6 @@ public class CommitTracker extends AbstractTracker
boolean commitNeeded = false;
boolean openSearcherNeeded = false;
boolean hasMaintenance = hasMaintenance();
//System.out.println("############# Commit Tracker doTrack()");
if((currentTime - lastCommit) > commitInterval || hasMaintenance)
{
@@ -151,8 +151,6 @@ public class CommitTracker extends AbstractTracker
openSearcherNeeded = true;
}
//System.out.println("############# Commit Tracker commit needed");
try
{
metadataTracker.getWriteLock().acquire();
@@ -161,9 +159,8 @@ public class CommitTracker extends AbstractTracker
aclTracker.getWriteLock().acquire();
assert(aclTracker.getWriteLock().availablePermits() == 0);
//See if we need a rollback
if(metadataTracker.getRollback() || aclTracker.getRollback()) {
if(metadataTracker.getRollback() || aclTracker.getRollback())
{
/*
* The metadataTracker and aclTracker will return true if an unhandled exception has occurred during indexing.
*
@@ -174,30 +171,36 @@ public class CommitTracker extends AbstractTracker
* the index, rather than the in-memory state. This keeps the trackers in sync with the index if their work is
* rolled back.
*/
doRollback();
return;
}
if(hasMaintenance) {
// The disable-indexing command may arrive while the commit tracker is at this point.
// In that case the command clears as much pending maintenance work as possible; however,
// some work may still be executed. For that reason the enabled check is repeated below
// and, if a tracker has been disabled in the meantime, a rollback is executed instead.
if (hasMaintenance)
{
maintenance();
}
//Do the commit opening the searcher if needed. This will commit all the work done by indexing trackers.
//This will return immediately and not wait for searchers to warm
boolean searcherOpened = infoSrv.commit(openSearcherNeeded);
lastCommit = currentTime;
if(searcherOpened) {
lastSearcherOpened = currentTime;
if (metadataTracker.isEnabled() && aclTracker.isEnabled())
{
boolean searcherOpened = infoSrv.commit(openSearcherNeeded);
lastCommit = currentTime;
if(searcherOpened)
{
lastSearcherOpened = currentTime;
}
}
else
{
doRollback();
}
}
finally
{
//Release the lock on the metadata Tracker
metadataTracker.getWriteLock().release();
//Release the lock on the aclTracker
aclTracker.getWriteLock().release();
}
}
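The commit-or-rollback decision in `doTrack` above can be isolated as a pure function. This is a sketch with hypothetical names; the real code additionally handles write locks, searcher opening, and maintenance execution:

```java
// Sketch of the CommitTracker decision logic: commit when the interval has
// elapsed or maintenance work is pending, but roll back instead if either
// indexing tracker was disabled in the meantime.
public class CommitDecisionSketch
{
    static String decide(long now, long lastCommit, long commitInterval,
                         boolean hasMaintenance, boolean trackersEnabled)
    {
        if ((now - lastCommit) <= commitInterval && !hasMaintenance)
        {
            return "noop";           // nothing to do on this iteration
        }
        return trackersEnabled ? "commit" : "rollback";
    }

    public static void main(String[] args)
    {
        System.out.println(decide(10_000, 0, 5_000, false, true));  // commit
        System.out.println(decide(1_000, 0, 5_000, false, true));   // noop
        System.out.println(decide(10_000, 0, 5_000, true, false));  // rollback
    }
}
```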


@@ -49,7 +49,7 @@ import static org.alfresco.solr.utils.Utils.notNullOrEmpty;
*
* @author Ahmed Owian
*/
public class ContentTracker extends AbstractTracker implements Tracker
public class ContentTracker extends ActivatableTracker
{
protected final static Logger LOGGER = LoggerFactory.getLogger(ContentTracker.class);


@@ -68,7 +68,7 @@ import static org.alfresco.repo.index.shard.ShardMethodEnum.DB_ID_RANGE;
* This tracks two things: transactions and metadata nodes
* @author Ahmed Owian
*/
public class MetadataTracker extends AbstractShardInformationPublisher implements Tracker
public class MetadataTracker extends ActivatableTracker
{
protected final static Logger LOGGER = LoggerFactory.getLogger(MetadataTracker.class);
@@ -101,8 +101,8 @@ public class MetadataTracker extends AbstractShardInformationPublisher implement
private ForkJoinPool forkJoinPool;
// Share run and write locks across all MetadataTracker threads
private static Map<String, Semaphore> RUN_LOCK_BY_CORE = new ConcurrentHashMap<>();
private static Map<String, Semaphore> WRITE_LOCK_BY_CORE = new ConcurrentHashMap<>();
private static final Map<String, Semaphore> RUN_LOCK_BY_CORE = new ConcurrentHashMap<>();
private static final Map<String, Semaphore> WRITE_LOCK_BY_CORE = new ConcurrentHashMap<>();
@Override
public Semaphore getWriteLock()
{
@@ -119,7 +119,7 @@ public class MetadataTracker extends AbstractShardInformationPublisher implement
* This service is used to find the next available transaction commit time from a given time,
* so periods of time where no document updating is happening can be skipped while getting
* pending transactions list.
*
*
* {@link org.alfresco.solr.client.SOLRAPIClient#GET_NEXT_TX_COMMIT_TIME}
*/
private boolean nextTxCommitTimeServiceAvailable = false;
@@ -127,8 +127,8 @@ public class MetadataTracker extends AbstractShardInformationPublisher implement
/**
* Check if txInteravlCommitTimeService is available in the repository.
* This service returns the minimum and the maximum commit time for transactions in a node id range,
* so method sharding DB_ID_RANGE can skip transactions not relevant for the DB ID range.
*
* so method sharding DB_ID_RANGE can skip transactions not relevant for the DB ID range.
*
* {@link org.alfresco.solr.client.SOLRAPIClient#GET_TX_INTERVAL_COMMIT_TIME}
*/
private boolean txIntervalCommitTimeServiceAvailable = false;
@@ -159,7 +159,7 @@ public class MetadataTracker extends AbstractShardInformationPublisher implement
public MetadataTracker( Properties p, SOLRAPIClient client, String coreName,
InformationServer informationServer, boolean checkRepoServicesAvailability)
{
super(true, p, client, coreName, informationServer, Tracker.Type.METADATA);
super(p, client, coreName, informationServer, Tracker.Type.METADATA);
transactionDocsBatchSize = Integer.parseInt(p.getProperty("alfresco.transactionDocsBatchSize",
String.valueOf(DEFAULT_TRANSACTION_DOCS_BATCH_SIZE)));
@@ -1319,6 +1319,19 @@ public class MetadataTracker extends AbstractShardInformationPublisher implement
transactionsToIndex.offer(txId);
}
@Override
protected void clearScheduledMaintenanceWork()
{
logAndClear(transactionsToIndex, "Transactions to be indexed");
logAndClear(nodesToIndex, "Nodes to be indexed");
logAndClear(transactionsToReindex, "Transactions to be re-indexed");
logAndClear(nodesToReindex, "Nodes to be re-indexed");
logAndClear(transactionsToPurge, "Transactions to be purged");
logAndClear(nodesToPurge, "Nodes to be purged");
}
public void addNodeToIndex(Long nodeId)
{
this.nodesToIndex.offer(nodeId);
@@ -1333,4 +1346,6 @@ public class MetadataTracker extends AbstractShardInformationPublisher implement
{
this.queriesToReindex.offer(query);
}
}
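The static `RUN_LOCK_BY_CORE` / `WRITE_LOCK_BY_CORE` maps above follow a simple pattern: one fair binary `Semaphore` per core name, shared by every tracker instance of that core. A self-contained sketch (`computeIfAbsent` is used here for brevity; the real trackers populate the maps in their constructors):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

// Sketch of the per-core lock registry: all trackers of the same core share
// one fair, single-permit semaphore, so their critical sections serialize.
public class CoreLockRegistrySketch
{
    private static final Map<String, Semaphore> RUN_LOCK_BY_CORE = new ConcurrentHashMap<>();

    static Semaphore runLockFor(String coreName)
    {
        // Fair (FIFO) semaphore with a single permit: effectively a mutex.
        return RUN_LOCK_BY_CORE.computeIfAbsent(coreName, k -> new Semaphore(1, true));
    }

    public static void main(String[] args) throws InterruptedException
    {
        Semaphore lock = runLockFor("alfresco");
        lock.acquire();
        System.out.println(lock.availablePermits()); // 0 while held
        lock.release();
    }
}
```

Making the maps `static final` (as the diff does) matters: the lock must outlive any single tracker instance so that a reloaded tracker contends on the same semaphore.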


@@ -92,7 +92,7 @@ import org.slf4j.LoggerFactory;
* deactivate ModelTracker
* @enduml
*/
public class ModelTracker extends AbstractTracker implements Tracker
public class ModelTracker extends AbstractTracker
{
private static final Logger LOGGER = LoggerFactory.getLogger(ModelTracker.class);


@@ -1,134 +0,0 @@
/*
* #%L
* Alfresco Search Services
* %%
* Copyright (C) 2005 - 2020 Alfresco Software Limited
* %%
* This file is part of the Alfresco software.
* If the software was purchased under a paid Alfresco license, the terms of
* the paid license agreement will prevail. Otherwise, the software is
* provided under the following open source license terms:
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
* #L%
*/
package org.alfresco.solr.tracker;
import static org.alfresco.solr.tracker.Tracker.Type.NODE_STATE_PUBLISHER;
import org.alfresco.httpclient.AuthenticationException;
import org.alfresco.repo.index.shard.ShardState;
import org.alfresco.solr.SolrInformationServer;
import org.alfresco.solr.TrackerState;
import org.alfresco.solr.client.SOLRAPIClient;
import org.apache.commons.codec.EncoderException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
/**
* Despite belonging to the Tracker ecosystem, this component is actually a publisher, which periodically informs
* Alfresco about the state of the hosting slave core.
* As the name suggests, this worker is scheduled only when the owning core acts as a slave.
* It allows Solr's master/slave setup to be used with dynamic shard registration.
*
* In this scenario the slave is polling a "tracking" Solr node. The tracker below calls
* the repo to register the state of the node without pulling any real transactions from the repo.
*
* This allows the repo to register the replica so that it will be included in queries. But the slave Solr node
* will pull its data from a "tracking" Solr node using Solr's master/slave replication, rather than tracking the repository.
*
* @author Andrea Gazzarini
* @since 1.5
*/
public class NodeStatePublisher extends AbstractShardInformationPublisher
{
private static final Logger LOGGER = LoggerFactory.getLogger(NodeStatePublisher.class);
// Share run and write locks across all SlaveCoreStatePublisher threads
private static final Map<String, Semaphore> RUN_LOCK_BY_CORE = new ConcurrentHashMap<>();
private static final Map<String, Semaphore> WRITE_LOCK_BY_CORE = new ConcurrentHashMap<>();
@Override
public Semaphore getWriteLock()
{
return WRITE_LOCK_BY_CORE.get(coreName);
}
@Override
public Semaphore getRunLock()
{
return RUN_LOCK_BY_CORE.get(coreName);
}
public NodeStatePublisher(
boolean isMaster,
Properties coreProperties,
SOLRAPIClient repositoryClient,
String name,
SolrInformationServer informationServer)
{
super(isMaster, coreProperties, repositoryClient, name, informationServer, NODE_STATE_PUBLISHER);
RUN_LOCK_BY_CORE.put(coreName, new Semaphore(1, true));
WRITE_LOCK_BY_CORE.put(coreName, new Semaphore(1, true));
}
@Override
protected void doTrack(String iterationId)
{
try
{
ShardState shardstate = getShardState();
client.getTransactions(0L, null, 0L, null, 0, shardstate);
}
catch (EncoderException | IOException | AuthenticationException exception )
{
LOGGER.error("Unable to publish this node state. " +
"A failure condition has been met during the outbound subscription message encoding process. " +
"See the stacktrace below for further details.", exception);
}
}
@Override
public void maintenance()
{
// Do nothing here
}
@Override
public boolean hasMaintenance()
{
return false;
}
/**
* When running in slave mode, we need to recreate the tracker state every time, because in that
* context there is no tracker updating the state (e.g. lastIndexedChangeSetCommitTime,
* lastIndexedChangeSetId).
*
* @return a new, fresh and up to date instance of {@link TrackerState}.
*/
@Override
public TrackerState getTrackerState()
{
return infoSrv.getTrackerInitialState();
}
}


@@ -26,35 +26,31 @@
package org.alfresco.solr.tracker;
import org.alfresco.opencmis.dictionary.CMISStrictDictionaryService;
import org.alfresco.repo.dictionary.NamespaceDAO;
import org.alfresco.repo.index.shard.ShardMethodEnum;
import org.alfresco.httpclient.AuthenticationException;
import org.alfresco.repo.index.shard.ShardState;
import org.alfresco.repo.index.shard.ShardStateBuilder;
import org.alfresco.repo.search.impl.QueryParserUtils;
import org.alfresco.service.cmr.dictionary.DictionaryService;
import org.alfresco.service.cmr.dictionary.PropertyDefinition;
import org.alfresco.service.namespace.QName;
import org.alfresco.solr.AlfrescoCoreAdminHandler;
import org.alfresco.solr.AlfrescoSolrDataModel;
import org.alfresco.solr.InformationServer;
import org.alfresco.solr.NodeReport;
import org.alfresco.solr.TrackerState;
import org.alfresco.solr.client.SOLRAPIClient;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.codec.EncoderException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
import java.util.HashMap;
import java.util.Optional;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import static java.util.Optional.of;
import static java.util.Optional.ofNullable;
import static org.alfresco.solr.tracker.DocRouterFactory.SHARD_KEY_KEY;
import static org.alfresco.solr.tracker.Tracker.Type.NODE_STATE_PUBLISHER;
/**
* Superclass for all components which are able to inform Alfresco about the hosting node state.
* Despite belonging to the Tracker ecosystem, this component is actually a publisher, which periodically informs
* Alfresco about the state of the hosting core.
*
* This has been introduced in SEARCH-1752 to split the dual responsibility of the {@link org.alfresco.solr.tracker.MetadataTracker}.
* As a consequence, this class contains only the members needed for obtaining a valid
* {@link org.alfresco.repo.index.shard.ShardState} that can be periodically communicated to Alfresco.
@@ -63,113 +59,80 @@ import static org.alfresco.solr.tracker.DocRouterFactory.SHARD_KEY_KEY;
* @since 1.5
* @see <a href="https://issues.alfresco.com/jira/browse/SEARCH-1752">SEARCH-1752</a>
*/
public abstract class AbstractShardInformationPublisher extends AbstractTracker
public class ShardStatePublisher extends AbstractTracker
{
private static final Logger LOGGER = LoggerFactory.getLogger(AbstractShardInformationPublisher.class);
DocRouter docRouter;
private static final Logger LOGGER = LoggerFactory.getLogger(ShardStatePublisher.class);
private static final Map<String, Semaphore> RUN_LOCK_BY_CORE = new ConcurrentHashMap<>();
private static final Map<String, Semaphore> WRITE_LOCK_BY_CORE = new ConcurrentHashMap<>();
private final boolean isMaster;
/** The string representation of the shard key. */
private Optional<String> shardKey;
/** The property to use for determining the shard. */
protected Optional<QName> shardProperty = Optional.empty();
AbstractShardInformationPublisher(
public ShardStatePublisher(
boolean isMaster,
Properties p,
SOLRAPIClient client,
String coreName,
InformationServer informationServer,
Type type)
InformationServer informationServer)
{
super(p, client, coreName, informationServer, type);
super(p, client, coreName, informationServer, NODE_STATE_PUBLISHER);
this.isMaster = isMaster;
shardKey = ofNullable(p.getProperty(SHARD_KEY_KEY));
firstUpdateShardProperty();
docRouter = DocRouterFactory.getRouter(p, shardMethod);
RUN_LOCK_BY_CORE.put(coreName, new Semaphore(1, true));
WRITE_LOCK_BY_CORE.put(coreName, new Semaphore(1, true));
}
AbstractShardInformationPublisher(Type type)
@Override
protected void doTrack(String iterationId)
{
super(type);
this.isMaster = false;
}
/**
* Returns information about the {@link org.alfresco.solr.client.Node} associated with the given dbid.
*
* @param dbid the node identifier.
* @return the {@link org.alfresco.solr.client.Node} associated with the given dbid.
*/
public NodeReport checkNode(Long dbid)
{
NodeReport nodeReport = new NodeReport();
nodeReport.setDbid(dbid);
this.infoSrv.addCommonNodeReportInfo(nodeReport);
return nodeReport;
}
private void firstUpdateShardProperty()
{
shardKey.ifPresent( shardKeyName -> {
updateShardProperty();
if (shardProperty.isEmpty())
{
LOGGER.warn("Sharding property {} was set to {}, but no such property was found.", SHARD_KEY_KEY, shardKeyName);
}
});
}
/**
* Set the shard property using the shard key.
*/
void updateShardProperty()
{
shardKey.ifPresent(shardKeyName -> {
Optional<QName> updatedShardProperty = getShardProperty(shardKeyName);
if (!shardProperty.equals(updatedShardProperty))
{
if (updatedShardProperty.isEmpty())
{
LOGGER.warn("The model defining the {} property has been disabled", shardKeyName);
}
else
{
LOGGER.info("New {} property found for {}", SHARD_KEY_KEY, shardKeyName);
}
}
shardProperty = updatedShardProperty;
});
}
/**
* Given the field name, returns the name of the property definition.
* If the property definition is not found, Empty optional is returned.
*
* @param field the field name.
* @return the name of the associated property definition if present, Optional.Empty() otherwise
*/
static Optional<QName> getShardProperty(String field)
{
if (StringUtils.isBlank(field))
try
{
throw new IllegalArgumentException("Sharding property " + SHARD_KEY_KEY + " has not been set.");
ShardState shardstate = getShardState();
client.getTransactions(0L, null, 0L, null, 0, shardstate);
}
catch (EncoderException | IOException | AuthenticationException exception )
{
LOGGER.error("Unable to publish this node state. " +
"A failure condition has been met during the outbound subscription message encoding process. " +
"See the stacktrace below for further details.", exception);
}
}
AlfrescoSolrDataModel dataModel = AlfrescoSolrDataModel.getInstance();
NamespaceDAO namespaceDAO = dataModel.getNamespaceDAO();
DictionaryService dictionaryService = dataModel.getDictionaryService(CMISStrictDictionaryService.DEFAULT);
PropertyDefinition propertyDef = QueryParserUtils.matchPropertyDefinition("http://www.alfresco.org/model/content/1.0",
namespaceDAO,
dictionaryService,
field);
@Override
public void maintenance()
{
// Do nothing here
}
return ofNullable(propertyDef).map(PropertyDefinition::getName);
@Override
public boolean hasMaintenance()
{
return false;
}
/**
* When running in slave mode, we need to recreate the tracker state every time, because in that
* context there is no tracker updating the state (e.g. lastIndexedChangeSetCommitTime,
* lastIndexedChangeSetId).
*
* @return a new, fresh and up to date instance of {@link TrackerState}.
*/
@Override
public TrackerState getTrackerState()
{
return infoSrv.getTrackerInitialState();
}
@Override
public Semaphore getWriteLock()
{
return WRITE_LOCK_BY_CORE.get(coreName);
}
@Override
public Semaphore getRunLock()
{
return RUN_LOCK_BY_CORE.get(coreName);
}
/**
@@ -177,7 +140,6 @@ public abstract class AbstractShardInformationPublisher extends AbstractTracker
* {@link MetadataTracker} instance.
*
* @return the {@link ShardState} instance which stores the current state of the hosting shard.
* @see NodeStatePublisher
*/
ShardState getShardState()
{
@@ -222,16 +184,6 @@ public abstract class AbstractShardInformationPublisher extends AbstractTracker
}
/**
* Returns the {@link DocRouter} instance in use on this node.
*
* @return the {@link DocRouter} instance in use on this node.
*/
public DocRouter getDocRouter()
{
return this.docRouter;
}
/**
* Returns true if the hosting core is master or standalone.
*


@@ -148,10 +148,19 @@ public abstract class Utils
*/
public static boolean isNullOrEmpty(String value)
{
return ofNullable(value)
.map(String::trim)
.map(String::isEmpty)
.orElse(true);
return value == null || value.trim().length() == 0;
}
/**
* Returns true if the input string is not null and it is not empty.
* Note that whitespace is ignored, so a string containing only whitespace characters is considered empty.
*
* @param value the input string.
* @return true if the input string is not null and it is not empty.
*/
public static boolean isNotNullAndNotEmpty(String value)
{
return value != null && value.trim().length() != 0;
}
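The two checks above are complements of each other; a quick sketch of the behaviour (local copies for illustration, not the real `org.alfresco.solr.utils.Utils`):

```java
// Sketch of the Utils null/empty checks: whitespace-only strings count as empty.
public class StringChecksSketch
{
    static boolean isNullOrEmpty(String value)
    {
        return value == null || value.trim().length() == 0;
    }

    static boolean isNotNullAndNotEmpty(String value)
    {
        return !isNullOrEmpty(value);
    }

    public static void main(String[] args)
    {
        System.out.println(isNullOrEmpty("   "));       // true: whitespace only
        System.out.println(isNullOrEmpty(null));        // true
        System.out.println(isNotNullAndNotEmpty("x "));  // true
    }
}
```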
/**


@@ -68,7 +68,9 @@ import org.apache.solr.core.CloseHook;
import org.apache.solr.core.PluginInfo;
import org.apache.solr.core.SolrCore;
import org.apache.solr.handler.RequestHandlerBase;
import org.apache.solr.handler.component.FacetComponent.FacetContext;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.BasicResultContext;
import org.apache.solr.response.ResultContext;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.schema.IndexSchema;
@@ -603,7 +605,77 @@ public class AlfrescoSearchHandler extends RequestHandlerBase implements
shardInfo.add(shardInfoName, nl);
rsp.getValues().add(ShardParams.SHARDS_INFO, shardInfo);
}
removeFacetQueriesWithCountZero(rsp);
}
/**
* As a general rule, Solr facet queries omit facet results whose count equals 0.
* Queries executed in a sharded environment do not include these results, but the same
* query executed in a single-server environment adds them to the response.
* This method removes every facet query with a count of 0 so that both environments
* behave the same way.
*
* A request declaring facet queries like the following will return only the non-empty ones:
* "facetQueries" : [
* { "query" : "content.size:[0 TO 102400]", "label" : "small"},
* { "query" : "content.size:[102400 TO 1048576]", "label" : "medium"},
* { "query" : "content.size:[1048576 TO 16777216]", "label" : "large"}
* ]
*
* For instance, if only the "small" bucket has matching documents, the result will be:
* "facetQueries": [
* {
* "label": "small",
* "filterQuery": "content.size:[0 TO 102400]",
* "count": 5
* }
* ]
*
*/
public static final String FACET_COUNTS_KEY = "facet_counts";
public static final String FACET_CONTEXT_KEY = "_facet.context";
@SuppressWarnings("unchecked")
public static void removeFacetQueriesWithCountZero(SolrQueryResponse rsp)
{
NamedList<Object> facetCounts = (NamedList<Object>) rsp.getValues().get(FACET_COUNTS_KEY);
if (facetCounts != null)
{
NamedList<Object> facetQueries = (NamedList<Object>) facetCounts.get(FacetComponent.FACET_QUERY_KEY);
if (facetQueries != null)
{
List<String> keyCountsToRemove = new ArrayList<>();
facetQueries.forEach(facetQuery -> {
if ((Integer) facetQuery.getValue() == 0)
{
keyCountsToRemove.add(facetQuery.getKey());
}
});
if (!keyCountsToRemove.isEmpty())
{
keyCountsToRemove.forEach(key -> facetQueries.remove(key));
((NamedList<Object>) rsp.getValues().get(FACET_COUNTS_KEY)).remove(FacetComponent.FACET_QUERY_KEY);
((NamedList<Object>) rsp.getValues().get(FACET_COUNTS_KEY)).add(FacetComponent.FACET_QUERY_KEY,
facetQueries);
BasicResultContext result = (BasicResultContext) rsp.getResponse();
FacetContext facetContext = (FacetContext) result.getRequest().getContext()
.get(FACET_CONTEXT_KEY);
facetContext.getAllQueryFacets()
.removeIf(queryFacet -> keyCountsToRemove.contains(queryFacet.getKey()));
result.getRequest().getContext().put(FACET_CONTEXT_KEY, facetContext);
log.debug("In SOLR query '" + result.getRequest() + "', Facet Queries results having labels "
+ keyCountsToRemove + " have been removed from results");
}
}
}
}
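The zero-count filtering above can be illustrated with a simplified sketch in which Solr's `NamedList` is replaced by an order-preserving `LinkedHashMap` (a stand-in for illustration, not the real API, which also rewrites the facet context):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of removeFacetQueriesWithCountZero: drop every facet-query bucket
// whose count is 0, preserving the order of the remaining buckets.
public class FacetFilterSketch
{
    static Map<String, Integer> removeZeroCounts(Map<String, Integer> facetQueries)
    {
        Map<String, Integer> filtered = new LinkedHashMap<>();
        facetQueries.forEach((label, count) -> {
            if (count != 0)
            {
                filtered.put(label, count);
            }
        });
        return filtered;
    }

    public static void main(String[] args)
    {
        Map<String, Integer> in = new LinkedHashMap<>();
        in.put("small", 5);
        in.put("medium", 0);
        in.put("large", 0);
        System.out.println(removeZeroCounts(in)); // {small=5}
    }
}
```

Building a new map rather than removing entries in place mirrors the handler's approach of removing and re-adding the `facet_queries` entry on the response.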
// ////////////////////// SolrInfoMBeans methods //////////////////////


@@ -191,6 +191,16 @@ public abstract class AbstractAlfrescoSolrIT implements SolrTestFiles, AlfrescoS
h.reload();
}
protected void disableIndexing()
{
admin.actionDisableIndexing(new ModifiableSolrParams());
}
protected void enableIndexing()
{
admin.actionEnableIndexing(new ModifiableSolrParams());
}
/**
* @deprecated as testHarness is used
* Get admin core handler


@@ -62,7 +62,7 @@ import org.alfresco.solr.tracker.DocRouter;
import org.alfresco.solr.tracker.IndexHealthReport;
import org.alfresco.solr.tracker.MetadataTracker;
import org.alfresco.solr.tracker.PropertyRouter;
import org.alfresco.solr.tracker.NodeStatePublisher;
import org.alfresco.solr.tracker.ShardStatePublisher;
import org.alfresco.solr.tracker.TrackerRegistry;
import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.CoreAdminParams;
@@ -219,23 +219,22 @@ public class AlfrescoCoreAdminHandlerIT
}
@Test
public void coreIsMaster_thenCoreStatePublisherInstanceCorrespondsToMetadataTracker()
public void coreIsMaster_thenCoreStatePublisherInstanceCorrespondsToShardStatePublisher()
{
MetadataTracker coreStatePublisher = mock(MetadataTracker.class);
ShardStatePublisher coreStatePublisher = mock(ShardStatePublisher.class);
when(trackerRegistry.getTrackerForCore(anyString(), eq(MetadataTracker.class)))
when(trackerRegistry.getTrackerForCore(anyString(), eq(ShardStatePublisher.class)))
.thenReturn(coreStatePublisher);
assertSame(coreStatePublisher, alfrescoCoreAdminHandler.coreStatePublisher("ThisIsTheCoreName"));
}
@Test
public void coreIsSlave_thenCoreStatePublisherInstanceCorrespondsToSlaveCoreStatePublisher()
public void coreIsSlave_thenCoreStatePublisherInstanceCorrespondsToShardStatePublisher()
{
NodeStatePublisher coreStateTracker = mock(NodeStatePublisher.class);
ShardStatePublisher coreStateTracker = mock(ShardStatePublisher.class);
when(trackerRegistry.getTrackerForCore(anyString(), eq(MetadataTracker.class))).thenReturn(null);
when(trackerRegistry.getTrackerForCore(anyString(), eq(NodeStatePublisher.class))).thenReturn(coreStateTracker);
when(trackerRegistry.getTrackerForCore(anyString(), eq(ShardStatePublisher.class))).thenReturn(coreStateTracker);
assertSame(coreStateTracker, alfrescoCoreAdminHandler.coreStatePublisher("ThisIsTheCoreName"));
}


@@ -26,37 +26,21 @@
package org.alfresco.solr;
import org.alfresco.solr.adapters.IOpenBitSet;
import org.alfresco.solr.adapters.SolrOpenBitSetAdapter;
import org.alfresco.solr.tracker.AclTracker;
import org.alfresco.solr.tracker.IndexHealthReport;
import org.alfresco.solr.tracker.MetadataTracker;
import org.alfresco.solr.tracker.TrackerRegistry;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.CoreContainer;
import org.apache.solr.core.SolrCore;
import org.apache.solr.core.SolrResourceLoader;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
import static java.util.Optional.of;
import static java.util.stream.IntStream.range;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ACL_TX_IN_INDEX_NOT_IN_DB;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ACTION_ERROR_MESSAGE_LABEL;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ACTION_STATUS_LABEL;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ACTION_STATUS_NOT_SCHEDULED;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ACTION_STATUS_SCHEDULED;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ADDITIONAL_INFO;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ALFRESCO_CORE_NAME;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ARCHIVE_CORE_NAME;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ARG_ACLID;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ARG_ACLTXID;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ARG_NODEID;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.ARG_TXID;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.CORE_PARAMETER_NAMES;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.DRY_RUN_PARAMETER_NAME;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.DUPLICATED_ACL_TX_IN_INDEX;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.DUPLICATED_TX_IN_INDEX;
@@ -69,13 +53,53 @@ import static org.alfresco.solr.AlfrescoCoreAdminHandler.TO_TX_COMMIT_TIME_PARAM
import static org.alfresco.solr.AlfrescoCoreAdminHandler.TX_IN_INDEX_NOT_IN_DB;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.UNKNOWN_CORE_MESSAGE;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.UNPROCESSABLE_REQUEST_ON_SLAVE_NODES;
import static org.alfresco.solr.AlfrescoCoreAdminHandler.VERSION_CORE_NAME;
import static org.apache.solr.common.params.CoreAdminParams.ACTION;
import static org.apache.solr.common.params.CoreAdminParams.CORE;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.verifyNoInteractions;
import static org.mockito.Mockito.when;
import org.alfresco.solr.adapters.IOpenBitSet;
import org.alfresco.solr.adapters.SolrOpenBitSetAdapter;
import org.alfresco.solr.client.SOLRAPIClient;
import org.alfresco.solr.tracker.AclTracker;
import org.alfresco.solr.tracker.IndexHealthReport;
import org.alfresco.solr.tracker.MetadataTracker;
import org.alfresco.solr.tracker.TrackerRegistry;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.common.util.SimpleOrderedMap;
import org.apache.solr.core.CoreContainer;
import org.apache.solr.core.SolrCore;
import org.apache.solr.core.SolrResourceLoader;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.json.JSONException;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Consumer;
@RunWith(MockitoJUnitRunner.class)
public class AlfrescoCoreAdminHandlerTest
{
@@ -97,7 +121,7 @@ public class AlfrescoCoreAdminHandlerTest
}
@Test
public void noTargetCoreToFixInParams()
{
assertEquals(0, params.size());
@@ -106,7 +130,7 @@ public class AlfrescoCoreAdminHandlerTest
}
@Test
public void unknownTargetCoreToFixInParams()
{
String invalidCoreName = "thisIsAnInvalidOrAtLeastUnknownCoreName";
params.set(CORE, invalidCoreName);
@@ -213,6 +237,76 @@ public class AlfrescoCoreAdminHandlerTest
assertEquals(ACTION_STATUS_NOT_SCHEDULED, actionResponse.get(ACTION_STATUS_LABEL));
}
@Test
public void masterOrStandaloneNodeWithTrackersDisabled_DryRunParameterShouldBeForcedToTrue()
{
class TestMetadataTracker extends MetadataTracker {
protected TestMetadataTracker() {
super(new Properties(), mock(SOLRAPIClient.class), ALFRESCO_CORE_NAME, mock(InformationServer.class));
this.state = new TrackerState();
}
@Override
protected void doTrack(String iterationId) {
// Nothing to be done here, it's a fake implementation.
}
}
class TestAclTracker extends AclTracker {
protected TestAclTracker() {
super(new Properties(), mock(SOLRAPIClient.class), ALFRESCO_CORE_NAME, mock(InformationServer.class));
this.state = new TrackerState();
}
@Override
protected void doTrack(String iterationId) {
// Nothing to be done here, it's a fake implementation.
}
}
admin = new AlfrescoCoreAdminHandler() {
@Override
NamedList<Object> fixOnSpecificCore(
String coreName,
Long fromTxCommitTime,
Long toTxCommitTime,
boolean dryRun,
int maxTransactionsToSchedule) {
return new NamedList<>(); // dummy response
}
@Override
boolean isMasterOrStandalone(String coreName)
{
return true;
}
};
admin.trackerRegistry = registry;
final MetadataTracker metadataTracker = new TestMetadataTracker();
final AclTracker aclTracker = new TestAclTracker();
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(metadataTracker);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, AclTracker.class)).thenReturn(aclTracker);
when(registry.getTrackersForCore(ALFRESCO_CORE_NAME)).thenReturn(List.of(metadataTracker, aclTracker));
params.set(CORE, ALFRESCO_CORE_NAME);
// Let's disable tracking on Alfresco
admin.actionDisableIndexing(params);
params.set(DRY_RUN_PARAMETER_NAME, false);
NamedList<Object> actionResponse = admin.actionFIX(params);
assertEquals(false, actionResponse.get(DRY_RUN_PARAMETER_NAME));
assertEquals(ACTION_STATUS_NOT_SCHEDULED, actionResponse.get(ACTION_STATUS_LABEL));
assertNotNull(
"There should be a message which informs the requestor about the actual dryRun execution",
actionResponse.get(ADDITIONAL_INFO));
}
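The test above pins down a specific contract: when trackers are disabled on a master/standalone node, an explicit dryRun=false is not honoured. The FIX action still runs as a dry run, echoes the requested parameter value, reports a NOT_SCHEDULED status, and attaches a message explaining the downgrade. A self-contained sketch of that rule follows; `FixOutcome` and its fields are invented names, not the real AlfrescoCoreAdminHandler response structure:

```java
// Illustrative sketch only: FixOutcome is an invented stand-in for the
// NamedList response the real handler builds.
class FixOutcome
{
    final boolean echoedDryRun;  // the dryRun value the caller asked for
    final String actionStatus;   // "scheduled" or "notScheduled"
    final String additionalInfo; // set when the request was downgraded

    FixOutcome(boolean echoedDryRun, String actionStatus, String additionalInfo)
    {
        this.echoedDryRun = echoedDryRun;
        this.actionStatus = actionStatus;
        this.additionalInfo = additionalInfo;
    }
}

class FixActionSketch
{
    private boolean indexingEnabled = true;

    void disableIndexing()
    {
        indexingEnabled = false;
    }

    FixOutcome actionFix(boolean requestedDryRun)
    {
        if (!indexingEnabled)
        {
            // Trackers are disabled: execute as a dry run regardless of the
            // requested value, and tell the requestor what actually happened.
            return new FixOutcome(requestedDryRun, "notScheduled",
                    "Indexing is disabled, so the action was executed as a dry run.");
        }
        return new FixOutcome(requestedDryRun, "scheduled", null);
    }
}
```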
@Test
public void masterOrStandaloneNode_explicitDryRunParameterIsEchoed()
{
@@ -482,6 +576,532 @@ public class AlfrescoCoreAdminHandlerTest
.orElseThrow(() -> new RuntimeException(MISSING_ACL_TX_IN_INDEX + " section not found in response.")));
}
@Test
public void disableIndexingActionParameter_shouldTriggerTheDisableIndexingAction()
{
final AtomicBoolean invocationMarker = new AtomicBoolean();
admin = new AlfrescoCoreAdminHandler() {
@Override
NamedList<Object> fixOnSpecificCore(
String coreName,
Long fromTxCommitTime,
Long toTxCommitTime,
boolean dryRun,
int maxTransactionsToSchedule) {
return new NamedList<>(); // dummy response
}
@Override
NamedList<Object> actionDisableIndexing(SolrParams params) throws JSONException {
invocationMarker.set(true);
return new SimpleOrderedMap<>();
}
};
params.set(ACTION, "DISABLE-INDEXING");
SolrQueryRequest request = mock(SolrQueryRequest.class);
when(request.getParams()).thenReturn(params);
admin.handleCustomAction(request, mock(SolrQueryResponse.class));
assertTrue(invocationMarker.get());
}
@Test
public void enableIndexingActionParameter_shouldTriggerTheIndexingEnabling()
{
final AtomicBoolean invocationMarker = new AtomicBoolean();
admin = new AlfrescoCoreAdminHandler() {
@Override
NamedList<Object> fixOnSpecificCore(
String coreName,
Long fromTxCommitTime,
Long toTxCommitTime,
boolean dryRun,
int maxTransactionsToSchedule) {
return new NamedList<>(); // dummy response
}
@Override
NamedList<Object> actionEnableIndexing(SolrParams params) throws JSONException {
invocationMarker.set(true);
return new SimpleOrderedMap<>();
}
};
params.set(ACTION, "ENABLE-INDEXING");
SolrQueryRequest request = mock(SolrQueryRequest.class);
when(request.getParams()).thenReturn(params);
admin.handleCustomAction(request, mock(SolrQueryResponse.class));
assertTrue(invocationMarker.get());
}
@Test
public void unknownCoreNameInDisableIndexingCommand_shouldReturnAnErrorResponse()
{
String unknownCoreName = "ThisShouldBeAnInexistentCore";
CORE_PARAMETER_NAMES.forEach(parameter -> {
params.set(parameter, unknownCoreName);
NamedList<?> response = admin.actionDisableIndexing(params);
assertEquals(UNKNOWN_CORE_MESSAGE + unknownCoreName, response.get(ACTION_ERROR_MESSAGE_LABEL));
});
}
@Test
public void unknownCoreNameInEnableIndexingCommand_shouldReturnAnErrorResponse()
{
String unknownCoreName = "ThisShouldBeAnInexistentCore";
CORE_PARAMETER_NAMES.forEach(parameter -> {
params.set(parameter, unknownCoreName);
NamedList<?> response = admin.actionEnableIndexing(params);
assertEquals(UNKNOWN_CORE_MESSAGE + unknownCoreName, response.get(ACTION_ERROR_MESSAGE_LABEL));
});
}
@Test
public void disableIndexingOnSpecificSlaveCore_shouldReturnAnErrorResponse()
{
// The admin handler detects if a core is slave, master or standalone by checking
// the trackers installed on it. If no trackers have been registered, then the core is considered a slave.
assertFalse(admin.isMasterOrStandalone(ALFRESCO_CORE_NAME));
CORE_PARAMETER_NAMES.forEach(parameter -> {
params.set(parameter, ALFRESCO_CORE_NAME);
NamedList<?> response = admin.actionDisableIndexing(params);
assertEquals(UNPROCESSABLE_REQUEST_ON_SLAVE_NODES, response.get(ACTION_ERROR_MESSAGE_LABEL));
});
}
@Test
public void enableIndexingOnSpecificSlaveCore_shouldReturnAnErrorResponse()
{
// The admin handler detects if a core is slave, master or standalone by checking
// the trackers installed on it. If no trackers have been registered, then the core is considered a slave.
assertFalse(admin.isMasterOrStandalone(ALFRESCO_CORE_NAME));
CORE_PARAMETER_NAMES.forEach(parameter -> {
params.set(parameter, ALFRESCO_CORE_NAME);
NamedList<?> response = admin.actionEnableIndexing(params);
assertEquals(UNPROCESSABLE_REQUEST_ON_SLAVE_NODES, response.get(ACTION_ERROR_MESSAGE_LABEL));
});
}
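As the comments in the two tests above note, slave detection is indirect: a core with no registered trackers is treated as a slave, and indexing commands against it are rejected. A minimal sketch of that rule, using invented names rather than the real handler and registry types:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the handler/registry pair: only the
// "no trackers registered => slave => reject" rule is modelled.
class IndexingCommandSketch
{
    static final String UNPROCESSABLE = "Unprocessable request on slave nodes";

    // core name -> number of registered trackers; zero/absent means "slave"
    private final Map<String, Integer> trackerCountByCore = new HashMap<>();

    void registerTracker(String coreName)
    {
        trackerCountByCore.merge(coreName, 1, Integer::sum);
    }

    boolean isMasterOrStandalone(String coreName)
    {
        return trackerCountByCore.getOrDefault(coreName, 0) > 0;
    }

    String actionDisableIndexing(String coreName)
    {
        if (!isMasterOrStandalone(coreName))
        {
            return UNPROCESSABLE;
        }
        return "disabled";
    }
}
```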
@Test
public void disableIndexingWithoutIndicatingSpecificCore_shouldHaveNoEffectIfAllCoresAreSlave()
{
admin = spy(new AlfrescoCoreAdminHandler());
admin.trackerRegistry = registry;
when(registry.getCoreNames()).thenReturn(Set.of(ALFRESCO_CORE_NAME, ARCHIVE_CORE_NAME));
admin.actionDisableIndexing(params);
verify(admin, times(0)).disableIndexingOnSpecificCore(anyString());
}
@Test
public void enableIndexingWithoutIndicatingSpecificCore_shouldHaveNoEffectIfAllCoresAreSlave()
{
admin = spy(new AlfrescoCoreAdminHandler());
admin.trackerRegistry = registry;
when(registry.getCoreNames()).thenReturn(Set.of(ALFRESCO_CORE_NAME, ARCHIVE_CORE_NAME));
admin.actionEnableIndexing(params);
verify(admin, times(0)).enableIndexingOnSpecificCore(anyString());
}
@Test
public void disableIndexingWithoutIndicatingSpecificCore_shouldAffectOnlyMasterOrStandaloneCores()
{
admin = spy(new AlfrescoCoreAdminHandler());
admin.trackerRegistry = registry;
when(registry.getCoreNames()).thenReturn(Set.of(ALFRESCO_CORE_NAME, ARCHIVE_CORE_NAME, VERSION_CORE_NAME));
// "alfresco" and "archive" are master/standalone cores, "version" is a slave core
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(mock(MetadataTracker.class));
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(mock(MetadataTracker.class));
when(registry.getTrackerForCore(VERSION_CORE_NAME, MetadataTracker.class)).thenReturn(null);
admin.actionDisableIndexing(params);
verify(admin, times(1)).disableIndexingOnSpecificCore(ALFRESCO_CORE_NAME);
verify(admin, times(1)).disableIndexingOnSpecificCore(ARCHIVE_CORE_NAME);
}
@Test
public void enableIndexingWithoutIndicatingSpecificCore_shouldAffectOnlyMasterOrStandaloneCores()
{
admin = spy(new AlfrescoCoreAdminHandler());
admin.trackerRegistry = registry;
when(registry.getCoreNames()).thenReturn(Set.of(ALFRESCO_CORE_NAME, ARCHIVE_CORE_NAME, VERSION_CORE_NAME));
// "alfresco" and "archive" are master/standalone cores, "version" is a slave core
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(mock(MetadataTracker.class));
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(mock(MetadataTracker.class));
when(registry.getTrackerForCore(VERSION_CORE_NAME, MetadataTracker.class)).thenReturn(null);
admin.actionEnableIndexing(params);
verify(admin, times(1)).enableIndexingOnSpecificCore(ALFRESCO_CORE_NAME);
verify(admin, times(1)).enableIndexingOnSpecificCore(ARCHIVE_CORE_NAME);
}
@Test
public void retryActionOnSlaveNode_shouldReturnWarningMessage()
{
admin.coreNames().forEach(coreName -> assertFalse(admin.isMasterOrStandalone(coreName)));
NamedList<Object> actionResponse = admin.actionRETRY(params);
assertNotNull(actionResponse.get(AlfrescoCoreAdminHandler.WARNING));
}
@Test
public void retryActionWhenIndexingIsDisabled_shouldReturnAnInfoMessage()
{
// That is not true: each core has its own InformationServer instance.
// However, for this specific test we don't care.
InformationServer srv = mock(InformationServer.class);
admin.informationServers = new ConcurrentHashMap<>();
admin.informationServers.put(ALFRESCO_CORE_NAME, srv);
admin.informationServers.put(ARCHIVE_CORE_NAME, srv);
// That is not true: each core has its own MetadataTracker instance.
// However, for this specific test we don't care.
MetadataTracker metadataTracker = mock(MetadataTracker.class);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(metadataTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(metadataTracker);
when(metadataTracker.isEnabled()).thenReturn(false);
admin.coreNames().forEach(coreName -> assertTrue(admin.isMasterOrStandalone(coreName)));
final NamedList<Object> actionResponse = admin.actionRETRY(params);
admin.coreNames()
.stream()
.map(actionResponse::get)
.map(NamedList.class::cast)
.forEach(response -> {
assertEquals(ACTION_STATUS_NOT_SCHEDULED, response.get(ACTION_STATUS_LABEL));
assertNotNull(response.get(ADDITIONAL_INFO));
});
verifyNoInteractions(srv);
}
@Test
public void retryActionWhenIndexingIsEnabled_shouldCollectThingsToReindex() throws Exception
{
final Set<Long> alfrescoErrorNodeIds = Set.of(123452L, 13579L, 24680L, 98765L);
final Set<Long> archiveErrorNodeIds = Set.of(1234520L, 913579L, 124680L, 598765L);
InformationServer alfrescoInformationServer = mock(InformationServer.class);
InformationServer archiveInformationServer = mock(InformationServer.class);
admin.informationServers = new ConcurrentHashMap<>();
admin.informationServers.put(ALFRESCO_CORE_NAME, alfrescoInformationServer);
admin.informationServers.put(ARCHIVE_CORE_NAME, archiveInformationServer);
when(alfrescoInformationServer.getErrorDocIds()).thenReturn(alfrescoErrorNodeIds);
when(archiveInformationServer.getErrorDocIds()).thenReturn(archiveErrorNodeIds);
MetadataTracker alfrescoMetadataTracker = mock(MetadataTracker.class);
MetadataTracker archiveMetadataTracker = mock(MetadataTracker.class);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(alfrescoMetadataTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(archiveMetadataTracker);
when(alfrescoMetadataTracker.isEnabled()).thenReturn(true);
when(archiveMetadataTracker.isEnabled()).thenReturn(true);
admin.coreNames().forEach(coreName -> assertTrue(admin.isMasterOrStandalone(coreName)));
final NamedList<Object> actionResponse = admin.actionRETRY(params);
admin.coreNames()
.stream()
.map(actionResponse::get)
.map(NamedList.class::cast)
.forEach(response -> {
assertEquals(ACTION_STATUS_SCHEDULED, response.get(ACTION_STATUS_LABEL));
});
alfrescoErrorNodeIds.forEach(id -> verify(alfrescoMetadataTracker).addNodeToReindex(id));
archiveErrorNodeIds.forEach(id -> verify(archiveMetadataTracker).addNodeToReindex(id));
}
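The retry test above captures the flow in two steps: ask each core's information server for the ids of documents that previously failed, then queue each of those ids for reindexing on that core's metadata tracker. A stand-alone sketch of that fan-out follows; the functional interfaces replace the real InformationServer and MetadataTracker types:

```java
import java.util.Map;
import java.util.Set;
import java.util.function.LongConsumer;
import java.util.function.Supplier;

class RetrySketch
{
    // For each core: a supplier of failed doc ids (standing in for
    // InformationServer.getErrorDocIds()) and the callback that re-queues
    // them (standing in for MetadataTracker.addNodeToReindex()).
    static void retry(Map<String, Supplier<Set<Long>>> errorIdsByCore,
                      Map<String, LongConsumer> reindexByCore)
    {
        errorIdsByCore.forEach((core, errorIds) ->
                errorIds.get().forEach(id -> reindexByCore.get(core).accept(id)));
    }
}
```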
@Test
public void indexActionOnSlaveNode_shouldReturnWarningMessage()
{
admin.coreNames().forEach(coreName -> assertFalse(admin.isMasterOrStandalone(coreName)));
NamedList<Object> actionResponse = admin.actionINDEX(params);
assertNotNull(actionResponse.get(AlfrescoCoreAdminHandler.WARNING));
}
@Test
public void indexActionWhenIndexingIsDisabled_shouldReturnAnInfoMessage()
{
MetadataTracker metadataTracker = mock(MetadataTracker.class);
AclTracker aclTracker = mock(AclTracker.class);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(metadataTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(metadataTracker);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, AclTracker.class)).thenReturn(aclTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, AclTracker.class)).thenReturn(aclTracker);
when(metadataTracker.isEnabled()).thenReturn(false);
when(aclTracker.isEnabled()).thenReturn(false);
admin.coreNames().forEach(coreName -> assertTrue(admin.isMasterOrStandalone(coreName)));
final NamedList<Object> actionResponse = admin.actionINDEX(params);
admin.coreNames()
.stream()
.map(actionResponse::get)
.map(NamedList.class::cast)
.forEach(response -> {
assertEquals(ACTION_STATUS_NOT_SCHEDULED, response.get(ACTION_STATUS_LABEL));
assertNotNull(response.get(ADDITIONAL_INFO));
});
}
@Test
public void indexActionWhenIndexingIsEnabled_shouldCollectThingsToReindex()
{
final String txIdParam = "123452";
final String aclTxIdParam = "13579";
final String nodeIdParam = "24680";
final String aclIdParam = "98765";
params.set(ARG_TXID, txIdParam);
params.set(ARG_ACLTXID, aclTxIdParam);
params.set(ARG_NODEID, nodeIdParam);
params.set(ARG_ACLID, aclIdParam);
MetadataTracker alfrescoMetadataTracker = mock(MetadataTracker.class);
AclTracker alfrescoAclTracker = mock(AclTracker.class);
MetadataTracker archiveMetadataTracker = mock(MetadataTracker.class);
AclTracker archiveAclTracker = mock(AclTracker.class);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(alfrescoMetadataTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(archiveMetadataTracker);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, AclTracker.class)).thenReturn(alfrescoAclTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, AclTracker.class)).thenReturn(archiveAclTracker);
when(alfrescoMetadataTracker.isEnabled()).thenReturn(true);
when(alfrescoAclTracker.isEnabled()).thenReturn(true);
when(archiveMetadataTracker.isEnabled()).thenReturn(true);
when(archiveAclTracker.isEnabled()).thenReturn(true);
admin.coreNames().forEach(coreName -> assertTrue(admin.isMasterOrStandalone(coreName)));
final NamedList<Object> actionResponse = admin.actionINDEX(params);
admin.coreNames()
.stream()
.map(actionResponse::get)
.map(NamedList.class::cast)
.forEach(response -> {
assertEquals(ACTION_STATUS_SCHEDULED, response.get(ACTION_STATUS_LABEL));
});
verify(alfrescoMetadataTracker).addTransactionToIndex(Long.parseLong(txIdParam));
verify(alfrescoMetadataTracker).addNodeToIndex(Long.parseLong(nodeIdParam));
verify(alfrescoAclTracker).addAclChangeSetToIndex(Long.parseLong(aclTxIdParam));
verify(alfrescoAclTracker).addAclToIndex(Long.parseLong(aclIdParam));
verify(archiveMetadataTracker).addTransactionToIndex(Long.parseLong(txIdParam));
verify(archiveMetadataTracker).addNodeToIndex(Long.parseLong(nodeIdParam));
verify(archiveAclTracker).addAclChangeSetToIndex(Long.parseLong(aclTxIdParam));
verify(archiveAclTracker).addAclToIndex(Long.parseLong(aclIdParam));
}
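Across the INDEX, REINDEX and PURGE actions, the pattern verified above is the same: each id-carrying request parameter is parsed as a long and forwarded to the matching tracker method on every master/standalone core. A compact sketch of that dispatch, where the parameter names and handler wiring are illustrative rather than the real handler internals:

```java
import java.util.Map;
import java.util.function.LongConsumer;

class IdDispatchSketch
{
    // handlers maps a parameter name (e.g. "txid") to the tracker callback
    // that should receive the parsed id, in the spirit of
    // metadataTracker::addTransactionToIndex in the tests above.
    static void dispatch(Map<String, String> params, Map<String, LongConsumer> handlers)
    {
        params.forEach((name, value) ->
        {
            LongConsumer handler = handlers.get(name);
            if (handler != null)
            {
                handler.accept(Long.parseLong(value));
            }
        });
    }
}
```

Parameters without a registered handler are simply skipped, which mirrors how each action only consumes the ids it understands.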
@Test
public void reindexActionOnSlaveNode_shouldReturnWarningMessage()
{
admin.coreNames().forEach(coreName -> assertFalse(admin.isMasterOrStandalone(coreName)));
NamedList<Object> actionResponse = admin.actionREINDEX(params);
assertNotNull(actionResponse.get(AlfrescoCoreAdminHandler.WARNING));
}
@Test
public void reindexActionWhenIndexingIsDisabled_shouldReturnAnInfoMessage()
{
MetadataTracker metadataTracker = mock(MetadataTracker.class);
AclTracker aclTracker = mock(AclTracker.class);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(metadataTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(metadataTracker);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, AclTracker.class)).thenReturn(aclTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, AclTracker.class)).thenReturn(aclTracker);
when(metadataTracker.isEnabled()).thenReturn(false);
when(aclTracker.isEnabled()).thenReturn(false);
admin.coreNames().forEach(coreName -> assertTrue(admin.isMasterOrStandalone(coreName)));
final NamedList<Object> actionResponse = admin.actionREINDEX(params);
admin.coreNames()
.stream()
.map(actionResponse::get)
.map(NamedList.class::cast)
.forEach(response -> {
assertEquals(ACTION_STATUS_NOT_SCHEDULED, response.get(ACTION_STATUS_LABEL));
assertNotNull(response.get(ADDITIONAL_INFO));
});
}
@Test
public void reindexActionWhenIndexingIsEnabled_shouldCollectThingsToReindex()
{
final String txIdParam = "123452";
final String aclTxIdParam = "13579";
final String nodeIdParam = "24680";
final String aclIdParam = "98765";
params.set(ARG_TXID, txIdParam);
params.set(ARG_ACLTXID, aclTxIdParam);
params.set(ARG_NODEID, nodeIdParam);
params.set(ARG_ACLID, aclIdParam);
MetadataTracker alfrescoMetadataTracker = mock(MetadataTracker.class);
AclTracker alfrescoAclTracker = mock(AclTracker.class);
MetadataTracker archiveMetadataTracker = mock(MetadataTracker.class);
AclTracker archiveAclTracker = mock(AclTracker.class);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(alfrescoMetadataTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(archiveMetadataTracker);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, AclTracker.class)).thenReturn(alfrescoAclTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, AclTracker.class)).thenReturn(archiveAclTracker);
when(alfrescoMetadataTracker.isEnabled()).thenReturn(true);
when(alfrescoAclTracker.isEnabled()).thenReturn(true);
when(archiveMetadataTracker.isEnabled()).thenReturn(true);
when(archiveAclTracker.isEnabled()).thenReturn(true);
admin.coreNames().forEach(coreName -> assertTrue(admin.isMasterOrStandalone(coreName)));
final NamedList<Object> actionResponse = admin.actionREINDEX(params);
admin.coreNames()
.stream()
.map(actionResponse::get)
.map(NamedList.class::cast)
.forEach(response -> {
assertEquals(ACTION_STATUS_SCHEDULED, response.get(ACTION_STATUS_LABEL));
});
verify(alfrescoMetadataTracker).addTransactionToReindex(Long.parseLong(txIdParam));
verify(alfrescoMetadataTracker).addNodeToReindex(Long.parseLong(nodeIdParam));
verify(alfrescoAclTracker).addAclChangeSetToReindex(Long.parseLong(aclTxIdParam));
verify(alfrescoAclTracker).addAclToReindex(Long.parseLong(aclIdParam));
verify(archiveMetadataTracker).addTransactionToReindex(Long.parseLong(txIdParam));
verify(archiveMetadataTracker).addNodeToReindex(Long.parseLong(nodeIdParam));
verify(archiveAclTracker).addAclChangeSetToReindex(Long.parseLong(aclTxIdParam));
verify(archiveAclTracker).addAclToReindex(Long.parseLong(aclIdParam));
}
@Test
public void purgeActionOnSlaveNode_shouldReturnWarningMessage()
{
admin.coreNames().forEach(coreName -> assertFalse(admin.isMasterOrStandalone(coreName)));
NamedList<Object> actionResponse = admin.actionPURGE(params);
assertNotNull(actionResponse.get(AlfrescoCoreAdminHandler.WARNING));
}
@Test
public void purgeActionWhenIndexingIsDisabled_shouldReturnAnInfoMessage()
{
MetadataTracker metadataTracker = mock(MetadataTracker.class);
AclTracker aclTracker = mock(AclTracker.class);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(metadataTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(metadataTracker);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, AclTracker.class)).thenReturn(aclTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, AclTracker.class)).thenReturn(aclTracker);
when(metadataTracker.isEnabled()).thenReturn(false);
when(aclTracker.isEnabled()).thenReturn(false);
admin.coreNames().forEach(coreName -> assertTrue(admin.isMasterOrStandalone(coreName)));
final NamedList<Object> actionResponse = admin.actionPURGE(params);
admin.coreNames()
.stream()
.map(actionResponse::get)
.map(NamedList.class::cast)
.forEach(response -> {
assertEquals(ACTION_STATUS_NOT_SCHEDULED, response.get(ACTION_STATUS_LABEL));
assertNotNull(response.get(ADDITIONAL_INFO));
});
}
@Test
public void purgeActionWhenIndexingIsEnabled_shouldCollectTransactionsToPurge()
{
final String txIdParam = "123452";
final String aclTxIdParam = "13579";
final String nodeIdParam = "24680";
final String aclIdParam = "98765";
params.set(ARG_TXID, txIdParam);
params.set(ARG_ACLTXID, aclTxIdParam);
params.set(ARG_NODEID, nodeIdParam);
params.set(ARG_ACLID, aclIdParam);
MetadataTracker alfrescoMetadataTracker = mock(MetadataTracker.class);
AclTracker alfrescoAclTracker = mock(AclTracker.class);
MetadataTracker archiveMetadataTracker = mock(MetadataTracker.class);
AclTracker archiveAclTracker = mock(AclTracker.class);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, MetadataTracker.class)).thenReturn(alfrescoMetadataTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, MetadataTracker.class)).thenReturn(archiveMetadataTracker);
when(registry.getTrackerForCore(ALFRESCO_CORE_NAME, AclTracker.class)).thenReturn(alfrescoAclTracker);
when(registry.getTrackerForCore(ARCHIVE_CORE_NAME, AclTracker.class)).thenReturn(archiveAclTracker);
when(alfrescoMetadataTracker.isEnabled()).thenReturn(true);
when(alfrescoAclTracker.isEnabled()).thenReturn(true);
when(archiveMetadataTracker.isEnabled()).thenReturn(true);
when(archiveAclTracker.isEnabled()).thenReturn(true);
admin.coreNames().forEach(coreName -> assertTrue(admin.isMasterOrStandalone(coreName)));
final NamedList<Object> actionResponse = admin.actionPURGE(params);
admin.coreNames()
.stream()
.map(actionResponse::get)
.map(NamedList.class::cast)
.forEach(response -> {
assertEquals(ACTION_STATUS_SCHEDULED, response.get(ACTION_STATUS_LABEL));
});
verify(alfrescoMetadataTracker).addTransactionToPurge(Long.parseLong(txIdParam));
verify(alfrescoMetadataTracker).addNodeToPurge(Long.parseLong(nodeIdParam));
verify(alfrescoAclTracker).addAclChangeSetToPurge(Long.parseLong(aclTxIdParam));
verify(alfrescoAclTracker).addAclToPurge(Long.parseLong(aclIdParam));
verify(archiveMetadataTracker).addTransactionToPurge(Long.parseLong(txIdParam));
verify(archiveMetadataTracker).addNodeToPurge(Long.parseLong(nodeIdParam));
verify(archiveAclTracker).addAclChangeSetToPurge(Long.parseLong(aclTxIdParam));
verify(archiveAclTracker).addAclToPurge(Long.parseLong(aclIdParam));
}
private <T> void assertThatExplicitParameterIsEchoed(String parameterName, T parameterValue)
{
admin = new AlfrescoCoreAdminHandler() {


@@ -0,0 +1,180 @@
/*
* #%L
* Alfresco Search Services
* %%
* Copyright (C) 2005 - 2020 Alfresco Software Limited
* %%
* This file is part of the Alfresco software.
* If the software was purchased under a paid Alfresco license, the terms of
* the paid license agreement will prevail. Otherwise, the software is
* provided under the following open source license terms:
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
* #L%
*/
package org.alfresco.solr;
import org.alfresco.solr.client.Acl;
import org.alfresco.solr.client.AclChangeSet;
import org.alfresco.solr.client.AclReaders;
import org.alfresco.solr.client.Node;
import org.alfresco.solr.client.NodeMetaData;
import org.alfresco.solr.client.SOLRAPIQueueClient;
import org.alfresco.solr.client.Transaction;
import org.alfresco.solr.tracker.ActivatableTracker;
import org.alfresco.solr.tracker.Tracker;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.TermQuery;
import org.apache.solr.SolrTestCaseJ4;
import org.junit.After;
import org.junit.BeforeClass;
import org.junit.Test;
import org.quartz.SchedulerException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import static java.util.Collections.singletonList;
import static org.alfresco.solr.AlfrescoSolrUtils.getAcl;
import static org.alfresco.solr.AlfrescoSolrUtils.getAclChangeSet;
import static org.alfresco.solr.AlfrescoSolrUtils.getAclReaders;
import static org.alfresco.solr.AlfrescoSolrUtils.getNode;
import static org.alfresco.solr.AlfrescoSolrUtils.getNodeMetaData;
import static org.alfresco.solr.AlfrescoSolrUtils.getTransaction;
import static org.alfresco.solr.AlfrescoSolrUtils.indexAclChangeSet;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
@SolrTestCaseJ4.SuppressSSL
public class AlfrescoIndexingStatePersistenceAcrossReloadsIT extends AbstractAlfrescoSolrIT
{
@BeforeClass
public static void beforeClass() throws Exception
{
initAlfrescoCore("schema.xml");
admin = (AlfrescoCoreAdminHandler)getCore().getCoreContainer().getMultiCoreHandler();
}
@After
public void clearQueue()
{
SOLRAPIQueueClient.NODE_META_DATA_MAP.clear();
SOLRAPIQueueClient.TRANSACTION_QUEUE.clear();
SOLRAPIQueueClient.ACL_CHANGE_SET_QUEUE.clear();
SOLRAPIQueueClient.ACL_READERS_MAP.clear();
SOLRAPIQueueClient.ACL_MAP.clear();
SOLRAPIQueueClient.NODE_MAP.clear();
}
@Test
public void testIndexingStateAcrossReloads() throws Exception
{
long localId = 0L;
AclChangeSet aclChangeSet = getAclChangeSet(1, ++localId);
Acl acl = getAcl(aclChangeSet);
AclReaders aclReaders = getAclReaders(aclChangeSet, acl, singletonList("joel"), singletonList("phil"), null);
indexAclChangeSet(aclChangeSet,
singletonList(acl),
singletonList(aclReaders));
int numNodes = 1;
List<Node> nodes = new ArrayList<>();
List<NodeMetaData> nodeMetaDatas = new ArrayList<>();
Transaction bigTxn = getTransaction(0, numNodes, ++localId);
for(int i=0; i<numNodes; i++)
{
Node node = getNode(bigTxn, acl, Node.SolrApiNodeStatus.UPDATED);
nodes.add(node);
NodeMetaData nodeMetaData = getNodeMetaData(node, bigTxn, acl, "mike", null, false);
nodeMetaDatas.add(nodeMetaData);
}
indexTransaction(bigTxn, nodes, nodeMetaDatas);
waitForDocCount(new TermQuery(new Term("content@s___t@{http://www.alfresco.org/model/content/1.0}content", "world")), numNodes, 100000);
Collection<Tracker> trackers = getTrackers();
disableIndexing();
// Make sure trackers have been disabled
Collection<ActivatableTracker> activatableTrackers =
getTrackers().stream()
.filter(tracker -> tracker instanceof ActivatableTracker)
.map(ActivatableTracker.class::cast)
.collect(Collectors.toList());
assertFalse(activatableTrackers.isEmpty());
activatableTrackers.forEach(tracker -> assertTrue(tracker.isDisabled()));
// Reload the core
reloadAndAssertCorrect(trackers, trackers.size(), getJobsCount());
// Make sure indexing is disabled in the reloaded core
Collection<ActivatableTracker> activatableTrackersBelongingToReloadedCore =
getTrackers().stream()
.filter(tracker -> tracker instanceof ActivatableTracker)
.map(ActivatableTracker.class::cast)
.collect(Collectors.toList());
assertFalse(activatableTrackersBelongingToReloadedCore.isEmpty());
activatableTrackersBelongingToReloadedCore.forEach(tracker -> assertTrue(tracker.isDisabled()));
// Re-enable indexing
enableIndexing();
// Make sure tracking has been enabled
activatableTrackersBelongingToReloadedCore.forEach(tracker -> assertTrue(tracker.isEnabled()));
Transaction bigTxn2 = getTransaction(0, numNodes, ++localId);
for(int i=0; i<numNodes; i++)
{
Node node = getNode(bigTxn2, acl, Node.SolrApiNodeStatus.UPDATED);
nodes.add(node);
NodeMetaData nodeMetaData = getNodeMetaData(node, bigTxn2, acl, "mike", null, false);
nodeMetaDatas.add(nodeMetaData);
}
indexTransaction(bigTxn2, nodes, nodeMetaDatas);
waitForDocCount(new TermQuery(new Term("content@s___t@{http://www.alfresco.org/model/content/1.0}content", "world")), numNodes * 2, 100000);
}
private void reloadAndAssertCorrect(Collection<Tracker> trackers, int numOfTrackers, int jobs) throws Exception
{
reload();
//Give it a little time to shutdown properly and recover.
TimeUnit.SECONDS.sleep(1);
Collection<Tracker> reloadedTrackers = getTrackers();
assertEquals("After a reload the number of trackers should be the same", numOfTrackers, getTrackers().size());
assertEquals("After a reload the number of jobs should be the same", jobs, getJobsCount());
trackers.forEach(tracker -> assertFalse("The reloaded trackers should be different.", reloadedTrackers.contains(tracker)));
}
private int getJobsCount() throws SchedulerException
{
return admin.getScheduler().getJobsCount();
}
}
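The filter-then-cast idiom the test above uses twice to pick out the activatable trackers can be sketched in isolation; `Tracker` and `ActivatableTracker` below are stand-in placeholder types for this sketch, not the real Alfresco classes:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FilterCast
{
    // Stand-in types: placeholders for the real tracker hierarchy.
    interface Tracker {}
    static class ActivatableTracker implements Tracker {}
    static class CommitTracker implements Tracker {}

    // Keep only the ActivatableTracker instances, downcasting as we go.
    static List<ActivatableTracker> activatable(List<Tracker> trackers)
    {
        return trackers.stream()
                .filter(tracker -> tracker instanceof ActivatableTracker)
                .map(ActivatableTracker.class::cast)
                .collect(Collectors.toList());
    }

    public static void main(String[] args)
    {
        List<Tracker> all = List.of(new ActivatableTracker(), new CommitTracker());
        System.out.println(activatable(all).size()); // prints 1
    }
}
```

The `map(ActivatableTracker.class::cast)` step is what lets the result be typed as the subclass without an unchecked cast.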

View File

@@ -0,0 +1,189 @@
/*
* #%L
* Alfresco Search Services
* %%
* Copyright (C) 2005 - 2020 Alfresco Software Limited
* %%
* This file is part of the Alfresco software.
* If the software was purchased under a paid Alfresco license, the terms of
* the paid license agreement will prevail. Otherwise, the software is
* provided under the following open source license terms:
*
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
* #L%
*/
package org.alfresco.solr.tracker;
import org.alfresco.solr.InformationServer;
import org.alfresco.solr.TrackerState;
import org.alfresco.solr.client.SOLRAPIClient;
import org.junit.Before;
import org.junit.Test;
import java.util.Properties;
import java.util.concurrent.Semaphore;
import java.util.stream.Stream;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;
public class ActivatableTrackerTest
{
private static class TestActivatableTracker extends ActivatableTracker {
protected TestActivatableTracker(Properties properties, TrackerState state) {
super(properties, mock(SOLRAPIClient.class), "thisIsTheCoreName", mock(InformationServer.class), Type.NODE_STATE_PUBLISHER);
this.state = state;
}
@Override
protected void doTrack(String iterationId) {
// Nothing to be done here, it's a fake implementation.
}
@Override
public void maintenance() {
}
@Override
public boolean hasMaintenance() {
return false;
}
@Override
public Semaphore getWriteLock() {
return null;
}
@Override
public Semaphore getRunLock() {
return null;
}
}
private ActivatableTracker tracker;
private TrackerState state;
@Before
public void setUp()
{
state = new TrackerState();
state.setRunning(false);
tracker = spy(new TestActivatableTracker(new Properties(), state));
tracker.enable();
assertTrue(tracker.isEnabled());
assertFalse(tracker.state.isRunning());
}
@Test
public void enabledShouldBeTheDefaultState()
{
assertTrue(tracker.isEnabled());
}
@Test
public void trackersCanBeExplicitlyDisabled()
{
assertTrue(tracker.isEnabled());
tracker.disable();
assertFalse(tracker.isEnabled());
}
@Test
public void disablingATracker_shouldClearTheScheduledMaintenanceWork()
{
assertTrue(tracker.isEnabled());
tracker.disable();
assertFalse(tracker.isEnabled());
verify(tracker).clearScheduledMaintenanceWork();
}
@Test
public void enableIsIdempotent()
{
assertTrue(tracker.isEnabled());
tracker.enable();
assertTrue(tracker.isEnabled());
tracker.disable();
assertFalse(tracker.isEnabled());
tracker.enable();
assertTrue(tracker.isEnabled());
tracker.enable();
assertTrue(tracker.isEnabled());
}
@Test
public void disableIsIdempotent()
{
assertTrue(tracker.isEnabled());
tracker.disable();
assertFalse(tracker.isEnabled());
tracker.disable();
assertFalse(tracker.isEnabled());
tracker.enable();
assertTrue(tracker.isEnabled());
tracker.disable();
assertFalse(tracker.isEnabled());
tracker.disable();
assertFalse(tracker.isEnabled());
}
@Test
public void disableIndexingOnRunningTracker_shouldDisableTheTrackerAnSetItInRollbackMode()
{
state.setRunning(true);
assertTrue(tracker.isEnabled());
assertTrue(tracker.state.isRunning());
tracker.disable();
state.setRunning(true);
assertTrue(tracker.state.isRunning());
assertFalse(tracker.isEnabled());
verify(tracker).setRollback(true, null);
}
@Test
public void assertActivatableTrackersList() {
Stream.of(MetadataTracker.class, AclTracker.class, ContentTracker.class, CascadeTracker.class)
.forEach(clazz -> assertTrue("Warning: " + clazz + " is supposed to be enabled/disabled", ActivatableTracker.class.isAssignableFrom(clazz)));
}
@Test
public void assertAlwaysActivatedTrackersList() {
Stream.of(CommitTracker.class, ModelTracker.class)
.forEach(clazz -> assertFalse("Warning: " + clazz + " is not supposed to be enabled/disabled", ActivatableTracker.class.isAssignableFrom(clazz)));
}
}
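As a rough model of the enable/disable contract these tests pin down (enabled by default, both transitions idempotent), here is a minimal standalone sketch; `ToggleState` is hypothetical and deliberately simplified — the real ActivatableTracker additionally clears scheduled maintenance work and requests a rollback when disabled while running:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical, simplified model of the enable/disable semantics.
public class ToggleState
{
    // Enabled is the default state.
    private final AtomicBoolean enabled = new AtomicBoolean(true);

    // Both transitions are idempotent: repeating them changes nothing.
    public void enable()  { enabled.set(true); }
    public void disable() { enabled.set(false); }

    public boolean isEnabled() { return enabled.get(); }

    public static void main(String[] args)
    {
        ToggleState state = new ToggleState();
        state.disable();
        state.disable();                       // second call is a no-op
        System.out.println(state.isEnabled()); // prints false
        state.enable();
        state.enable();                        // idempotent again
        System.out.println(state.isEnabled()); // prints true
    }
}
```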

View File

@@ -71,7 +71,7 @@ import java.util.stream.Collectors;
/**
* A partial state of {@link org.alfresco.solr.TrackerState} is exposed through two interfaces: AdminHandler.SUMMARY and
- * {@link MetadataTracker#getShardState}.
+ * {@link ShardStatePublisher#getShardState}.
*
* This test makes sure that state is consistent across the two mentioned approaches. That is, properties returned by the
* Core SUMMARY must have the same value of the same properties in the ShardState.
@@ -109,10 +109,10 @@ public class AlfrescoSolrTrackerStateIT extends AbstractAlfrescoSolrIT
public void shardStateMustBeConsistentWithCoreSummaryStats() throws Exception {
SolrCore core = getCore();
- MetadataTracker tracker =
+ ShardStatePublisher tracker =
of(coreAdminHandler(core))
.map(AlfrescoCoreAdminHandler::getTrackerRegistry)
- .map(registry -> registry.getTrackerForCore(core.getName(), MetadataTracker.class))
+ .map(registry -> registry.getTrackerForCore(core.getName(), ShardStatePublisher.class))
.orElseThrow(() -> new IllegalStateException("Cannot retrieve the Metadata tracker on this test core."));
// 1. First consistency check: ShardState must have the same values of CoreAdmin.SUMMARY report

View File

@@ -67,7 +67,7 @@ import java.util.stream.Collectors;
/**
* A partial state of {@link org.alfresco.solr.TrackerState} is exposed through two interfaces: AdminHandler.SUMMARY and
- * {@link MetadataTracker#getShardState}.
+ * {@link ShardStatePublisher#getShardState}.
* This test makes sure that state is consistent across the two mentioned approaches. That is, properties returned by the
* Core SUMMARY must have the same value of the same properties in the ShardState.
*
@@ -99,10 +99,10 @@ public class DistributedAlfrescoSolrTrackerStateIT extends AbstractAlfrescoDistr
putHandleDefaults();
getCores(solrShards).forEach(core -> {
- MetadataTracker tracker =
+ ShardStatePublisher tracker =
of(coreAdminHandler(core))
.map(AlfrescoCoreAdminHandler::getTrackerRegistry)
- .map(registry -> registry.getTrackerForCore(core.getName(), MetadataTracker.class))
+ .map(registry -> registry.getTrackerForCore(core.getName(), ShardStatePublisher.class))
.orElseThrow(() -> new IllegalStateException("Cannot retrieve the Metadata tracker on this test core."));
// 1. First consistency check: ShardState must have the same values of CoreAdmin.SUMMARY report

View File

@@ -0,0 +1,170 @@
/*
* #%L
* Alfresco Search Services
* %%
* Copyright (C) 2005 - 2020 Alfresco Software Limited
* %%
* This file is part of the Alfresco software.
* If the software was purchased under a paid Alfresco license, the terms of
* the paid license agreement will prevail. Otherwise, the software is
* provided under the following open source license terms:
* Alfresco is free software: you can redistribute it and/or modify
* it under the terms of the GNU Lesser General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
* Alfresco is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Lesser General Public License for more details.
* You should have received a copy of the GNU Lesser General Public License
* along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
* #L%
*/
package org.apache.solr.handler.component;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.when;
import static org.mockito.MockitoAnnotations.initMocks;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.handler.component.FacetComponent.FacetBase;
import org.apache.solr.handler.component.FacetComponent.FacetContext;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.BasicResultContext;
import org.apache.solr.response.SolrQueryResponse;
import org.junit.Before;
import org.junit.Test;
import org.mockito.Mock;
/**
* Unit tests for the {@link AlfrescoSearchHandler} for Facet Queries requests.
*/
public class AlfrescoSearchHandlerFacetQueryIT
{
@Mock
NamedList<NamedList<Object>> mockParams;
@Mock
SolrQueryResponse mockResponse;
@Mock
SolrQueryRequest mockRequest;
@Mock
BasicResultContext mockResultContext;
@Mock
FacetContext mockFacetContext;
@Mock
Map<Object, Object> mockContext;
@Before
public void setUp()
{
initMocks(this);
when(mockResponse.getValues()).thenReturn(mockParams);
when(mockResponse.getResponse()).thenReturn(mockResultContext);
when(mockResultContext.getRequest()).thenReturn(mockRequest);
when(mockRequest.getContext()).thenReturn(mockContext);
when(mockContext.get(AlfrescoSearchHandler.FACET_CONTEXT_KEY)).thenReturn(mockFacetContext);
}
@SuppressWarnings("unchecked")
@Test
public void testKeysWithCountZeroAreRemoved()
{
NamedList<Object> facetQueries = new NamedList<>();
facetQueries.add("small", 1);
facetQueries.add("medium", 0);
facetQueries.add("big", 0);
NamedList<Object> facetCounts = new NamedList<>();
facetCounts.add(FacetComponent.FACET_QUERY_KEY, facetQueries);
when(mockParams.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).thenReturn(facetCounts);
List<FacetBase> queryFacets = new ArrayList<>();
when(mockFacetContext.getAllQueryFacets()).thenReturn(queryFacets);
AlfrescoSearchHandler.removeFacetQueriesWithCountZero(mockResponse);
assertEquals(((NamedList<Object>) ((NamedList<Object>) mockResponse.getValues()
.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).get(FacetComponent.FACET_QUERY_KEY)).size(), 1);
assertEquals(((NamedList<Object>) ((NamedList<Object>) mockResponse.getValues()
.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).getVal(0)).get("small"), 1);
}
@SuppressWarnings("unchecked")
@Test
public void testKeysWithCountNonZeroArePresent()
{
NamedList<Object> facetQueries = new NamedList<>();
facetQueries.add("small", 1);
facetQueries.add("medium", 2);
facetQueries.add("big", 10);
NamedList<Object> facetCounts = new NamedList<>();
facetCounts.add(FacetComponent.FACET_QUERY_KEY, facetQueries);
when(mockParams.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).thenReturn(facetCounts);
List<FacetBase> queryFacets = new ArrayList<>();
when(mockFacetContext.getAllQueryFacets()).thenReturn(queryFacets);
AlfrescoSearchHandler.removeFacetQueriesWithCountZero(mockResponse);
assertEquals(((NamedList<Object>) ((NamedList<Object>) mockResponse.getValues()
.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).get(FacetComponent.FACET_QUERY_KEY)).size(), 3);
assertEquals(((NamedList<Object>) ((NamedList<Object>) mockResponse.getValues()
.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).getVal(0)).get("small"), 1);
assertEquals(((NamedList<Object>) ((NamedList<Object>) mockResponse.getValues()
.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).getVal(0)).get("medium"), 2);
assertEquals(((NamedList<Object>) ((NamedList<Object>) mockResponse.getValues()
.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).getVal(0)).get("big"), 10);
}
@SuppressWarnings("unchecked")
@Test
public void testEmptyFacetQueries()
{
NamedList<Object> facetQueries = new NamedList<>();
NamedList<Object> facetCounts = new NamedList<>();
facetCounts.add(FacetComponent.FACET_QUERY_KEY, facetQueries);
when(mockParams.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).thenReturn(facetCounts);
List<FacetBase> queryFacets = new ArrayList<>();
when(mockFacetContext.getAllQueryFacets()).thenReturn(queryFacets);
AlfrescoSearchHandler.removeFacetQueriesWithCountZero(mockResponse);
assertEquals(((NamedList<Object>) ((NamedList<Object>) mockResponse.getValues()
.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).get(FacetComponent.FACET_QUERY_KEY)).size(), 0);
}
@SuppressWarnings("unchecked")
@Test
public void testEmptyFacetCount()
{
NamedList<Object> facetCounts = new NamedList<>();
when(mockParams.get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).thenReturn(facetCounts);
List<FacetBase> queryFacets = new ArrayList<>();
when(mockFacetContext.getAllQueryFacets()).thenReturn(queryFacets);
AlfrescoSearchHandler.removeFacetQueriesWithCountZero(mockResponse);
assertEquals(((NamedList<Object>) mockResponse.getValues().get(AlfrescoSearchHandler.FACET_COUNTS_KEY)).size(),
0);
}
}
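The pruning rule these tests verify — drop every facet-query entry whose count is zero, keep the rest in their original order — can be sketched over a plain ordered map. The real handler rewrites Solr's `NamedList` inside the response, so this is only an illustrative stand-in:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FacetPruner
{
    // Return a copy of the facet-query counts with all zero-count
    // entries removed, preserving the original insertion order.
    public static Map<String, Integer> removeZeroCounts(Map<String, Integer> facetQueries)
    {
        Map<String, Integer> pruned = new LinkedHashMap<>();
        facetQueries.forEach((query, count) -> {
            if (count != null && count != 0)
            {
                pruned.put(query, count);
            }
        });
        return pruned;
    }

    public static void main(String[] args)
    {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("small", 1);
        counts.put("medium", 0);
        counts.put("big", 0);
        System.out.println(removeZeroCounts(counts)); // prints {small=1}
    }
}
```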

View File

@@ -7,7 +7,7 @@
<parent>
<groupId>org.alfresco</groupId>
<artifactId>alfresco-search-parent</artifactId>
- <version>2.0.0-SNAPSHOT</version>
+ <version>2.0.0.0-SNAPSHOT</version>
</parent>
<distributionManagement>
@@ -22,8 +22,8 @@
</distributionManagement>
<properties>
- <dependency.alfresco-data-model.version>8.135</dependency.alfresco-data-model.version>
- <dependency.jackson.version>2.10.3</dependency.jackson.version>
+ <dependency.alfresco-data-model.version>8.145</dependency.alfresco-data-model.version>
+ <dependency.jackson.version>2.11.2</dependency.jackson.version>
</properties>
<dependencies>
@@ -67,7 +67,7 @@
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-core</artifactId>
- <version>3.4.0</version>
+ <version>3.4.6</version>
<scope>test</scope>
</dependency>
<dependency>

View File

@@ -1,43 +1,43 @@
- /*
-  * #%L
-  * Alfresco Search Services
-  * %%
-  * Copyright (C) 2005 - 2020 Alfresco Software Limited
-  * %%
-  * This file is part of the Alfresco software.
-  * If the software was purchased under a paid Alfresco license, the terms of
-  * the paid license agreement will prevail. Otherwise, the software is
-  * provided under the following open source license terms:
-  *
-  * Alfresco is free software: you can redistribute it and/or modify
-  * it under the terms of the GNU Lesser General Public License as published by
-  * the Free Software Foundation, either version 3 of the License, or
-  * (at your option) any later version.
-  *
-  * Alfresco is distributed in the hope that it will be useful,
-  * but WITHOUT ANY WARRANTY; without even the implied warranty of
-  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-  * GNU Lesser General Public License for more details.
-  *
-  * You should have received a copy of the GNU Lesser General Public License
-  * along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
-  * #L%
-  */
+ /*
+  * #%L
+  * Alfresco Search Services
+  * %%
+  * Copyright (C) 2005 - 2020 Alfresco Software Limited
+  * %%
+  * This file is part of the Alfresco software.
+  * If the software was purchased under a paid Alfresco license, the terms of
+  * the paid license agreement will prevail. Otherwise, the software is
+  * provided under the following open source license terms:
+  *
+  * Alfresco is free software: you can redistribute it and/or modify
+  * it under the terms of the GNU Lesser General Public License as published by
+  * the Free Software Foundation, either version 3 of the License, or
+  * (at your option) any later version.
+  *
+  * Alfresco is distributed in the hope that it will be useful,
+  * but WITHOUT ANY WARRANTY; without even the implied warranty of
+  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+  * GNU Lesser General Public License for more details.
+  *
+  * You should have received a copy of the GNU Lesser General Public License
+  * along with Alfresco. If not, see <http://www.gnu.org/licenses/>.
+  * #L%
+  */
- package org.alfresco.solr;
- import java.util.concurrent.atomic.AtomicInteger;
- import org.slf4j.Logger;
- import org.slf4j.LoggerFactory;
+ package org.alfresco.solr;
+ import java.util.concurrent.atomic.AtomicInteger;
+ import org.slf4j.Logger;
+ import org.slf4j.LoggerFactory;
/**
* This class was moved from org.alfresco.solr.tracker.CoreTracker
* The data in this class is relevant for a particular Solr index.
*/
public class TrackerState
- {
- private static final Logger log = LoggerFactory.getLogger(TrackerState.class);
+ {
+ private static final Logger log = LoggerFactory.getLogger(TrackerState.class);
private volatile long lastChangeSetIdOnServer;
@@ -61,22 +61,24 @@ public class TrackerState
private volatile boolean running = false;
+ private boolean enabled;
private volatile boolean checkedFirstTransactionTime = false;
private volatile boolean checkedFirstAclTransactionTime = false;
private volatile boolean checkedLastAclTransactionTime = false;
private volatile boolean checkedLastTransactionTime = false;
- private volatile boolean check = false;
+ private volatile boolean check = false;
// Handle Thread Safe operations
- private volatile AtomicInteger trackerCycles = new AtomicInteger(0);
+ private volatile AtomicInteger trackerCycles = new AtomicInteger(0);
private long timeToStopIndexing;
private long lastGoodChangeSetCommitTimeInIndex;
private long lastGoodTxCommitTimeInIndex;
- private long timeBeforeWhichThereCanBeNoHoles;
+ private long timeBeforeWhichThereCanBeNoHoles;
private volatile long lastStartTime = 0;
public long getLastChangeSetIdOnServer()
@@ -237,19 +239,19 @@ public class TrackerState
public void setLastGoodTxCommitTimeInIndex(long lastGoodTxCommitTimeInIndex)
{
this.lastGoodTxCommitTimeInIndex = lastGoodTxCommitTimeInIndex;
- }
+ }
- public int getTrackerCycles()
- {
- return this.trackerCycles.get();
- }
- public synchronized void incrementTrackerCycles()
- {
- log.debug("incrementTrackerCycles from :" + trackerCycles);
- this.trackerCycles.incrementAndGet();
- log.debug("incremented TrackerCycles to :" + trackerCycles);
- }
+ public int getTrackerCycles()
+ {
+ return this.trackerCycles.get();
+ }
+ public synchronized void incrementTrackerCycles()
+ {
+ log.debug("incrementTrackerCycles from :" + trackerCycles);
+ this.trackerCycles.incrementAndGet();
+ log.debug("incremented TrackerCycles to :" + trackerCycles);
+ }
public long getTimeBeforeWhichThereCanBeNoHoles()
{
@@ -261,6 +263,14 @@ public class TrackerState
this.timeBeforeWhichThereCanBeNoHoles = timeBeforeWhichThereCanBeNoHoles;
}
+ public boolean isEnabled() {
+ return enabled;
+ }
+ public void setEnabled(boolean enabled) {
+ this.enabled = enabled;
+ }
/*
* (non-Javadoc)
* @see java.lang.Object#toString()
@@ -284,10 +294,11 @@ public class TrackerState
+ ", checkedLastTransactionTime=" + this.checkedLastTransactionTime
+ ", checkedLastAclTransactionTime=" + this.checkedLastAclTransactionTime
+ ", check=" + check
+   + ", enabled=" + enabled
    + ", timeToStopIndexing=" + timeToStopIndexing
    + ", lastGoodChangeSetCommitTimeInIndex=" + lastGoodChangeSetCommitTimeInIndex
    + ", lastGoodTxCommitTimeInIndex=" + lastGoodTxCommitTimeInIndex
-   + ", timeBeforeWhichThereCanBeNoHoles=" + timeBeforeWhichThereCanBeNoHoles
+   + ", timeBeforeWhichThereCanBeNoHoles=" + timeBeforeWhichThereCanBeNoHoles
+ ",trackerCycles= " + trackerCycles + " ]";
}
@@ -319,15 +330,15 @@ public class TrackerState
public void setCheckedLastAclTransactionTime(boolean checkedLastAclTransactionTime)
{
this.checkedLastAclTransactionTime = checkedLastAclTransactionTime;
- }
+ }
- public long getLastStartTime()
- {
- return this.lastStartTime;
- }
- public void setLastStartTime(long lastStartTime)
- {
- this.lastStartTime = lastStartTime;
+ public long getLastStartTime()
+ {
+ return this.lastStartTime;
+ }
+ public void setLastStartTime(long lastStartTime)
+ {
+ this.lastStartTime = lastStartTime;
+ }
}

View File

@@ -13,7 +13,7 @@
<parent>
<groupId>org.alfresco</groupId>
<artifactId>alfresco-search-parent</artifactId>
- <version>2.0.0-SNAPSHOT</version>
+ <version>2.0.0.0-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<dependencies>
@@ -60,7 +60,7 @@
<plugin>
<groupId>com.googlecode.maven-download-plugin</groupId>
<artifactId>download-maven-plugin</artifactId>
- <version>1.5.0</version>
+ <version>1.6.0</version>
<executions>
<execution>
<id>unpack-solr-war</id>

View File

@@ -1,6 +1,6 @@
# Alfresco Search Services ${project.version} Docker Image
- FROM alfresco/alfresco-base-java:11.0.7-openjdk-centos-7-480e2906d08a
+ FROM alfresco/alfresco-base-java:11.0.8-openjdk-centos-7-d292b7de4cfb
LABEL creator="Gethin James" maintainer="Alfresco Search Services Team"
ENV DIST_DIR /opt/alfresco-search-services

View File

@@ -36,37 +36,37 @@ xml-resolver-1.2.jar https://github.com/FasterXML/jackson
neethi-3.1.1.jar http://ws.apache.org/commons/neethi/
commons-dbcp-1.4.jar http://jakarta.apache.org/commons/
commons-logging-1.1.3.jar http://jakarta.apache.org/commons/
- commons-lang3-3.10.jar http://jakarta.apache.org/commons/
+ commons-lang3-3.11.jar http://jakarta.apache.org/commons/
commons-pool-1.5.4.jar http://jakarta.apache.org/commons/
chemistry-opencmis-commons-impl-1.1.0.jar http://chemistry.apache.org/
chemistry-opencmis-commons-api-1.1.0.jar http://chemistry.apache.org/
xmlschema-core-2.2.5.jar http://ws.apache.org/commons/XmlSchema/
HikariCP-java7-2.4.13.jar https://github.com/brettwooldridge/HikariCP
- cxf-core-3.2.13.jar https://cxf.apache.org/
- cxf-rt-bindings-soap-3.2.13.jar https://cxf.apache.org/
- cxf-rt-bindings-xml-3.2.13.jar https://cxf.apache.org/
- cxf-rt-databinding-jaxb-3.2.13.jar https://cxf.apache.org/
- cxf-rt-frontend-jaxws-3.2.13.jar https://cxf.apache.org/
- cxf-rt-frontend-simple-3.2.13.jar https://cxf.apache.org/
- cxf-rt-transports-http-3.2.13.jar https://cxf.apache.org/
- cxf-rt-ws-addr-3.2.13.jar https://cxf.apache.org/
- cxf-rt-ws-policy-3.2.13.jar https://cxf.apache.org/
- cxf-rt-wsdl-3.2.13.jar https://cxf.apache.org/
+ cxf-core-3.2.14.jar https://cxf.apache.org/
+ cxf-rt-bindings-soap-3.2.14.jar https://cxf.apache.org/
+ cxf-rt-bindings-xml-3.2.14.jar https://cxf.apache.org/
+ cxf-rt-databinding-jaxb-3.2.14.jar https://cxf.apache.org/
+ cxf-rt-frontend-jaxws-3.2.14.jar https://cxf.apache.org/
+ cxf-rt-frontend-simple-3.2.14.jar https://cxf.apache.org/
+ cxf-rt-transports-http-3.2.14.jar https://cxf.apache.org/
+ cxf-rt-ws-addr-3.2.14.jar https://cxf.apache.org/
+ cxf-rt-ws-policy-3.2.14.jar https://cxf.apache.org/
+ cxf-rt-wsdl-3.2.14.jar https://cxf.apache.org/
chemistry-opencmis-server-support-1.0.0.jar http://chemistry.apache.org/
chemistry-opencmis-server-bindings-1.0.0.jar http://chemistry.apache.org/
quartz-2.3.2.jar http://quartz-scheduler.org/
- jackson-core-2.10.3.jar https://github.com/FasterXML/jackson
- jackson-annotations-2.10.3.jar https://github.com/FasterXML/jackson
- jackson-databind-2.10.3.jar https://github.com/FasterXML/jackson
+ jackson-core-2.11.2.jar https://github.com/FasterXML/jackson
+ jackson-annotations-2.11.2.jar https://github.com/FasterXML/jackson
+ jackson-databind-2.11.2.jar https://github.com/FasterXML/jackson
commons-httpclient-3.1-HTTPCLIENT-1265.jar http://jakarta.apache.org/commons/
- spring-aop-5.2.7.RELEASE.jar http://projects.spring.io/spring-framework/
- spring-beans-5.2.7.RELEASE.jar http://projects.spring.io/spring-framework/
- spring-context-5.2.7.RELEASE.jar http://projects.spring.io/spring-framework/
- spring-core-5.2.7.RELEASE.jar http://projects.spring.io/spring-framework/
- spring-expression-5.2.7.RELEASE.jar http://projects.spring.io/spring-framework/
- spring-jdbc-5.2.7.RELEASE.jar http://projects.spring.io/spring-framework/
- spring-orm-5.2.7.RELEASE.jar http://projects.spring.io/spring-framework/
- spring-tx-5.2.7.RELEASE.jar http://projects.spring.io/spring-framework/
+ spring-aop-5.2.8.RELEASE.jar http://projects.spring.io/spring-framework/
+ spring-beans-5.2.8.RELEASE.jar http://projects.spring.io/spring-framework/
+ spring-context-5.2.8.RELEASE.jar http://projects.spring.io/spring-framework/
+ spring-core-5.2.8.RELEASE.jar http://projects.spring.io/spring-framework/
+ spring-expression-5.2.8.RELEASE.jar http://projects.spring.io/spring-framework/
+ spring-jdbc-5.2.8.RELEASE.jar http://projects.spring.io/spring-framework/
+ spring-orm-5.2.8.RELEASE.jar http://projects.spring.io/spring-framework/
+ spring-tx-5.2.8.RELEASE.jar http://projects.spring.io/spring-framework/
xercesImpl-2.12.0-alfresco-patched-20191004.jar http://xerces.apache.org/xerces2-j
guessencoding-1.4.jar http://docs.codehaus.org/display/GUESSENC/
xml-apis-1.4.01.jar https://github.com/FasterXML/jackson
@@ -90,13 +90,13 @@ aggdesigner-algorithm-6.0.jar https://github.com/julianhyde/aggdesigner
=== CDDL 1.1 ===
- jaxb-core-2.3.0.1.jar http://jaxb.java.net/
- jaxb-xjc-2.3.2.jar http://jaxb.java.net/
+ jaxb-core-2.3.0.1.jar http://jaxb.java.net/
+ jaxb-xjc-2.3.3.jar http://jaxb.java.net/
+ jaxb-impl-2.3.3.jar http://jaxb.java.net/
=== Eclipse Distribution License 1.0 (BSD) ===
jakarta.activation-1.2.2.jar https://eclipse-ee4j.github.io/jaf
jakarta.activation-api-1.2.2.jar https://eclipse-ee4j.github.io/jaf
jakarta.jws-api-1.1.1.jar https://projects.eclipse.org/projects/ee4j.websocket/releases/1.1.1
jakarta.xml.bind-api-2.3.3.jar https://projects.eclipse.org/projects/ee4j.jaxb
@@ -124,8 +124,10 @@ bcmail-jdk15on-1.47.jar
bcprov-jdk15on-1.47.jar
boilerpipe-1.1.0.jar
caffeine-2.4.0.jar
- calcite-core-1.12.0.jar
- calcite-linq4j-1.12.0.jar
+ calcite-core-1.13.0.jar
+ calcite-linq4j-1.13.0.jar
+ avatica-core-1.13.0.jar
+ avatica-metrics-1.13.0.jar
carrot2-guava-18.0.jar
carrot2-mini-3.15.0.jar
commons-cli-1.2.jar

View File

@@ -4,18 +4,18 @@
<parent>
<groupId>org.alfresco</groupId>
<artifactId>alfresco-search-and-insight-parent</artifactId>
- <version>2.0.0-SNAPSHOT</version>
+ <version>2.0.0.0-SNAPSHOT</version>
</parent>
<!-- The groupId and version are required by the maven pom extractor plugin on Bamboo - more details in this issue:
https://bitbucket.org/dehringer/bamboo-maven-pom-extractor-plugin/issues/18/groupid-not-populated-if-using-parent-pom -->
<groupId>org.alfresco</groupId>
<artifactId>alfresco-search-parent</artifactId>
- <version>2.0.0-SNAPSHOT</version>
+ <version>2.0.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>Alfresco Solr Search parent</name>
<properties>
<slf4j.version>1.7.30</slf4j.version>
- <cxf.version>3.2.13</cxf.version>
+ <cxf.version>3.2.14</cxf.version>
<licenseName>community</licenseName>
</properties>
<modules>