Fix #104 "java.lang.NegativeArraySizeException parsing Wikidata HDT file" and fix some warnings #157
In this pull request, I've fixed issue #104. It was in fact the same issue as #126, but for HDTs obtained with `HDTManager.loadHDT(...)`; the previous pull request #154 fixed the bug for HDTs obtained with `HDTManager.mapHDT(...)`. My fix is the same as the one for the map: HDT-Java was casting a long value to an int, and because the long was larger than `Integer.MAX_VALUE`, the cast overflowed and produced a negative ID.

This pull request wasn't as obvious as the previous one. While searching for the bug, I also fixed some warnings, created a utility test class, `LargeFakeDataSetStreamSupplier`, to generate large test HDT or RDF streams, and changed some ints to longs in the mapped sections. I can rebase my commits to remove those extra changes if necessary.
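To illustrate the root cause, here is a minimal, self-contained sketch of the narrowing-cast overflow; the class and variable names are hypothetical and this is not the actual HDT-Java code:

```java
// Minimal sketch of the bug: a long ID narrowed to int wraps negative
// once it exceeds Integer.MAX_VALUE (hypothetical demo code).
public class NarrowingOverflowDemo {
    public static void main(String[] args) {
        // An ID one past Integer.MAX_VALUE, as can occur in a large
        // Wikidata HDT file.
        long id = Integer.MAX_VALUE + 1L; // 2147483648

        // Buggy pattern: the narrowing cast wraps around to a negative int,
        // which downstream can surface as a NegativeArraySizeException.
        int truncatedId = (int) id;
        System.out.println(truncatedId); // prints -2147483648

        // Fix: keep the value as a long through the whole call chain.
        System.out.println(id);          // prints 2147483648
    }
}
```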