Test different HERON libraries #2410

Open
wants to merge 29 commits into base: devel
- a47ec44 Updating to scipy 1.12 (joshua-cogliati-inl, Jan 24, 2024)
- 8eb0b5e Scikitlearn 1.0 is incompatible with scipy 1.12, so updating to 1.1 (joshua-cogliati-inl, Jan 25, 2024)
- 2ade198 Sometimes only a 1d array is returned. (joshua-cogliati-inl, Jan 25, 2024)
- d2c5317 Merge remote-tracking branch 'origin/devel' into scipy_1_12 (joshua-cogliati-inl, Dec 5, 2024)
- eb93d44 Removing testing kulsinski metric. (joshua-cogliati-inl, Dec 5, 2024)
- 2d9c229 simps is deprecated so switching to simpson (joshua-cogliati-inl, Dec 5, 2024)
- 2640624 Handle change from https://github.com/scikit-learn/scikit-learn/pull/… (joshua-cogliati-inl, Dec 6, 2024)
- 9a695a2 Improving test to classify better. (joshua-cogliati-inl, Dec 6, 2024)
- e3955f2 Regold due to scikit change (joshua-cogliati-inl, Dec 6, 2024)
- e352a56 Regold PolyExponential files (had rel err of 1e-03 or less) (joshua-cogliati-inl, Dec 6, 2024)
- 938e8be Make timestep uniform for scipy update. (joshua-cogliati-inl, Dec 11, 2024)
- 200c8f4 Regolding because of changes in scipy 1.12 (joshua-cogliati-inl, Dec 11, 2024)
- 3e25a8c Increase limits to improve convergence. (joshua-cogliati-inl, Dec 11, 2024)
- e6ae39c Unpinning xarray and updating numpy (joshua-cogliati-inl, Dec 6, 2024)
- 57b28e9 Updating various libraries. (joshua-cogliati-inl, Dec 6, 2024)
- 91c6e8f Fix working with newer tensorflow. (joshua-cogliati-inl, Dec 11, 2024)
- cbbd615 Values need to be switched to tuples for hstack in numpy 1.26 (joshua-cogliati-inl, Dec 11, 2024)
- 7d8b673 Updating to new ray version. (joshua-cogliati-inl, Dec 12, 2024)
- c3a0e39 The deque size can be bigger in python 3.11 (joshua-cogliati-inl, Dec 12, 2024)
- 94b7ea1 Report difference in row lengths, instead of crashing OrderedCSVDiffer. (joshua-cogliati-inl, Dec 12, 2024)
- 1851ed2 Remove Fourier__signal_f__period10.0__phase (joshua-cogliati-inl, Dec 12, 2024)
- 9544362 Regolding changes to ROM/TimeSeries/DMD/BOPDMD because of library cha… (joshua-cogliati-inl, Dec 12, 2024)
- d38449a Support xarray 2024.7 and newer. (joshua-cogliati-inl, Dec 12, 2024)
- 91bb9c4 Fixing long line. (joshua-cogliati-inl, Dec 16, 2024)
- b7b22a0 Increasing zero threshold because of change in libraries. (joshua-cogliati-inl, Dec 16, 2024)
- 4242b78 Remove version from setuptools since ray updated. (joshua-cogliati-inl, Dec 16, 2024)
- 6cccb51 Optimizing persistence in BayesianMatyas. (joshua-cogliati-inl, Dec 17, 2024)
- 8b3f3bb Testing different libraries with HERON. (joshua-cogliati-inl, Dec 17, 2024)
- 9549b79 Leave dill at previous version. (joshua-cogliati-inl, Dec 17, 2024)
18 changes: 9 additions & 9 deletions dependencies.xml
@@ -38,17 +38,17 @@ Note all install methods after "main" take
<dependencies>
<main>
<h5py/>
-    <numpy>1.24</numpy>
-    <scipy>1.9</scipy>
-    <scikit-learn>1.0</scikit-learn>
+    <numpy>1.26</numpy>
+    <scipy>1.12</scipy>
+    <scikit-learn>1.1</scikit-learn>
<pandas/>
<!-- Note most versions of xarray work, but some (such as 0.20) don't -->
-    <xarray>2023</xarray>
+    <xarray/>
<netcdf4 source="pip">1.6</netcdf4>
-    <matplotlib>3.5</matplotlib>
+    <matplotlib>3.6</matplotlib>
<statsmodels>0.13</statsmodels>
<cloudpickle/>
-    <tensorflow source="pip">2.13</tensorflow>
+    <tensorflow source="pip">2.14</tensorflow>
<grpcio source="pip" />
<!-- conda is really slow on windows if the version is not specified.-->
<python skip_check='True' os='windows'>3.10</python>
@@ -66,14 +66,14 @@ Note all install methods after "main" take
<nomkl os='linux' skip_check='True'/>
<cmake skip_check='True' optional='True'/>
<dask source="pip" pip_extra="[complete]"/>
-    <ray source="pip" pip_extra="[default]">2.6</ray>
+    <ray source="pip" pip_extra="[default]">2.38</ray>
<!-- redis is needed by ray, but on windows, this seems to need to be explicitly stated -->
<redis source="pip" os='windows'/>
<imageio source="pip">2.22</imageio>
<line_profiler optional='True'/>
<!-- <ete3 optional='True'/> -->
<statsforecast/>
-    <pywavelets>1.2</pywavelets>
+    <pywavelets>1.4</pywavelets>
<python-sensors source="pip"/>
<numdifftools source="pip">0.9</numdifftools>
<fmpy optional='True'/>
@@ -83,7 +83,7 @@ Note all install methods after "main" take
<ipopt skip_check='True' optional='True'/>
<cyipopt optional='True'/>
<pyomo-extensions source="pyomo" skip_check='True' optional='True'/>
-    <setuptools>69</setuptools> <!-- ray 2.6 can't be installed with setuptools 70 -->
+    <setuptools />
<!-- source="mamba" are the ones installed when mamba is installed -->
<!-- mamba version 2.0.0 causes failures on mac: critical libmamba filesystem error: in permissions: Operation not permitted-->
<mamba source="mamba" skip_check='True' os='mac'>1.5</mamba>
2 changes: 1 addition & 1 deletion plugins/HERON
Submodule HERON updated 63 files
+15 −2 .github/workflows/github-actions.yml
+3 −0 .gitignore
+4 −0 README.md
+1 −0 build_cfg/HERON
+3 −0 build_cfg/MANIFEST.in
+22 −0 build_cfg/README
+3 −0 build_cfg/pyproject.toml
+37 −0 build_cfg/setup.cfg
+32 −0 coverage_scripts/.coveragerc
+26 −0 coverage_scripts/check_py_coverage.sh
+52 −0 coverage_scripts/initialize_coverage.sh
+39 −0 coverage_scripts/report_py_coverage.sh
+1 −1 dependencies.xml
+18 −6 doc/guide/heron_guide.md
+192 −0 doc/theory_manual/ComponentCharacterization/HERON_Cashflows.md
+75 −0 doc/theory_manual/ComponentCharacterization/HERON_Components.md
+29 −0 doc/theory_manual/HERON_Theory_Manual.md
+43 −0 doc/theory_manual/TimeHistoryTraining/HERON_TimeIndexing.md
+138 −0 doc/theory_manual/Workflows/HERON_Standard_Workflow.md
+ doc/theory_manual/diagrams/HERON_bilevel_optimization_full.png
+ doc/theory_manual/diagrams/HERON_cfs.png
+ doc/theory_manual/diagrams/HERON_comps.png
+ doc/theory_manual/diagrams/HERON_time.png
+25 −0 doc/user_manual/make_win.sh
+1 −0 doc/user_manual/src/HERON_user_manual.tex
+4 −1 heron
+19 −0 src/Cases.py
+14 −16 src/Components.py
+3 −1 src/Testers/HeronIntegrationTester.py
+11 −33 src/ValuedParamHandler.py
+2 −1 src/ValuedParams/Activity.py
+12 −1 src/ValuedParams/Factory.py
+4 −1 src/ValuedParams/Function.py
+2 −1 src/ValuedParams/Parametric.py
+2 −1 src/ValuedParams/ROM.py
+2 −1 src/ValuedParams/RandomVariable.py
+2 −1 src/ValuedParams/StaticHistory.py
+2 −1 src/ValuedParams/SyntheticHistory.py
+2 −1 src/ValuedParams/ValuedParam.py
+2 −1 src/ValuedParams/Variable.py
+7 −1 src/dispatch/pyomo_dispatch.py
+10 −2 src/main.py
+20 −6 templates/template_driver.py
+11 −0 tests/README.md
+152 −0 tests/integration_tests/XML_check/gold/optimization_type_BO_mean_NPV_o/outer.xml
+128 −0 tests/integration_tests/XML_check/optimization_type_BO_mean_NPV/heron_input.xml
+13 −0 tests/integration_tests/XML_check/tests
+113 −0 tests/integration_tests/mechanics/debug_mode/opt/heron_input.xml
+1 −1 tests/integration_tests/mechanics/debug_mode/sweep/heron_input.xml
+29 −6 tests/integration_tests/mechanics/debug_mode/tests
+87 −0 ...ion_tests/mechanics/infeasible_constraints/gold/Sweep_Runs_o/sweep/1/Sweep_Runs_i/constraint_violations.log
+72 −0 tests/integration_tests/mechanics/infeasible_constraints/heron_input.xml
+13 −0 tests/integration_tests/mechanics/infeasible_constraints/tests
+22 −0 tests/integration_tests/mechanics/min_demand/gold/Sizing_o/dispatch_print.csv
+0 −3 tests/integration_tests/mechanics/min_demand/gold/Sizing_o/sweep.csv
+10 −14 tests/integration_tests/mechanics/min_demand/heron_input.xml
+1 −1 tests/integration_tests/mechanics/min_demand/tests
+0 −19 tests/integration_tests/mechanics/min_demand/transfers.py
+12 −0 tests/integration_tests/mechanics/ramp_freq/gold/Sweep_Runs_o/dispatch_print.csv
+1 −3 tests/integration_tests/mechanics/ramp_freq/heron_input.xml
+20 −0 tests/integration_tests/mechanics/ramp_freq/tests
+8 −0 tests/integration_tests/mechanics/storage_func/heron_input.xml
+11 −1 tests/integration_tests/mechanics/storage_func/transfers.py
2 changes: 1 addition & 1 deletion ravenframework/DataObjects/HistorySet.py
@@ -275,7 +275,7 @@ def _toCSV(self,fileName,start=0,**kwargs):
if startIndex > 0:
data = self._data.isel(**{self.sampleTag:slice(startIndex,None,None)})

-    data = data.drop(toDrop)
+    data = data.drop_vars(toDrop)
self.raiseADebug('Printing data to CSV: "{}"'.format(fileName+'.csv'))
# specific implementation
## write input space CSV with pointers to history CSVs
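The `drop` → `drop_vars` change tracks xarray's API: `Dataset.drop` was deprecated and later removed in favor of `drop_vars`. A minimal sketch of the replacement call, using a hypothetical dataset standing in for RAVEN's `self._data`:

```python
import numpy as np
import xarray as xr

# Hypothetical dataset; "b" plays the role of the toDrop variables
ds = xr.Dataset({"a": ("x", np.arange(3)), "b": ("x", np.arange(3))})
toDrop = ["b"]

# drop_vars replaces the deprecated (and now removed) Dataset.drop
data = ds.drop_vars(toDrop)
print(sorted(data.data_vars))  # ['a']
```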
3 changes: 0 additions & 3 deletions ravenframework/Metrics/metrics/ScipyMetric.py
@@ -45,9 +45,6 @@ class ScipyMetric(MetricInterface):
availMetrics['boolean']['dice'] = spatialDistance.dice
availMetrics['boolean']['hamming'] = spatialDistance.hamming
availMetrics['boolean']['jaccard'] = spatialDistance.jaccard
-    #Note in scipy 1.12 this needs to be changed to
-    #availMetrics['boolean']['kulsinski'] = spatialDistance.kulczynski1
-    availMetrics['boolean']['kulsinski'] = spatialDistance.kulsinski
availMetrics['boolean']['russellrao'] = spatialDistance.russellrao
availMetrics['boolean']['sokalmichener'] = spatialDistance.sokalmichener
availMetrics['boolean']['sokalsneath'] = spatialDistance.sokalsneath
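The remaining boolean metrics come straight from `scipy.spatial.distance`; `kulsinski` was deprecated (SciPy points at `kulczynski1` as the successor) and this PR drops the metric rather than renaming it. A quick sketch of the surviving metrics on toy boolean vectors (my own example, not the test gold data):

```python
import numpy as np
from scipy.spatial import distance as spatialDistance

a = np.array([True, False, True, True])
b = np.array([True, True, False, True])

# Boolean dissimilarity metrics still registered after the change
print(spatialDistance.hamming(a, b))     # 0.5: two of four positions differ
print(spatialDistance.jaccard(a, b))
print(spatialDistance.russellrao(a, b))
```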
2 changes: 2 additions & 0 deletions ravenframework/Models/PostProcessors/BasicStatistics.py
@@ -1257,6 +1257,7 @@ def getCovarianceSubset(desired):
pivotCoords = reducedCovar.coords[self.pivotParameter].values
ds = None
for label, group in reducedCovar.groupby(self.pivotParameter):
+      group = group.squeeze()
corrMatrix = self.corrCoeff(group.values)
da = xr.DataArray(corrMatrix, dims=('targets','features'), coords={'targets':targCoords,'features':targCoords})
ds = da if ds is None else xr.concat([ds,da], dim=self.pivotParameter)
@@ -1315,6 +1316,7 @@ def getCovarianceSubset(desired):
pivotCoords = reducedCovar.coords[self.pivotParameter].values
ds = None
for label, group in reducedCovar.groupby(self.pivotParameter):
+      group = group.squeeze()
da = self.varianceDepSenCalculation(targCoords,group.values)
ds = da if ds is None else xr.concat([ds,da], dim=self.pivotParameter)
ds.coords[self.pivotParameter] = pivotCoords
1 change: 1 addition & 0 deletions ravenframework/Models/PostProcessors/EconomicRatio.py
@@ -552,6 +552,7 @@ def __runLocal(self, inputData):
if self.pivotParameter in targDa.sizes.keys():
subCVaR = []
for label, group in targDa.groupby(self.pivotParameter):
+        group = group.squeeze()
sortedWeightsAndPoints, indexL = self._computeSortedWeightsAndPoints(group.values, targWeight,thd)
quantile = self._computeWeightedPercentile(group.values, targWeight, needed[metric]['interpolation'], percent=[thd])[0]
lowerPartialE = np.sum(sortedWeightsAndPoints[:indexL, 0]*sortedWeightsAndPoints[:indexL,1])
6 changes: 3 additions & 3 deletions ravenframework/Models/PostProcessors/LimitSurface.py
@@ -410,9 +410,9 @@ def run(self, inputIn = None, returnListSurfCoord = False, exceptionGrid = None,
if self.name != exceptionGrid:
self.listSurfPointNegative, self.listSurfPointPositive = listSurfPoint[self.name][:nNegPoints-1],listSurfPoint[self.name][nNegPoints:]
if merge == True:
-      evals = np.hstack(evaluations.values())
-      listSurfPoints = np.hstack(listSurfPoint.values())
-      surfPoint = np.hstack(self.surfPoint.values())
+      evals = np.hstack(tuple(evaluations.values()))
+      listSurfPoints = np.hstack(tuple(listSurfPoint.values()))
+      surfPoint = np.hstack(tuple(self.surfPoint.values()))
returnSurface = (surfPoint, evals, listSurfPoints) if returnListSurfCoord else (surfPoint, evals)
else:
returnSurface = (self.surfPoint, evaluations, listSurfPoint) if returnListSurfCoord else (self.surfPoint, evaluations)
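Wrapping `dict.values()` in `tuple(...)` matters because recent NumPy releases (the commit cites 1.26) can reject a bare `dict_values` view where a real sequence of arrays is expected. A minimal illustration with made-up arrays standing in for the `evaluations` dict:

```python
import numpy as np

# Hypothetical stand-in for the evaluations / surfPoint dictionaries
evaluations = {"grid1": np.array([0.0, 1.0]), "grid2": np.array([2.0, 3.0])}

# tuple(...) guarantees hstack receives a sequence, not a dict view
evals = np.hstack(tuple(evaluations.values()))
print(evals)  # [0. 1. 2. 3.]
```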
6 changes: 3 additions & 3 deletions ravenframework/Models/PostProcessors/Validations/PPDSS.py
@@ -19,7 +19,7 @@
#External Modules------------------------------------------------------------------------------------
import numpy as np
from scipy.interpolate import interp1d
-from scipy.integrate import simps
+from scipy.integrate import simpson
import xarray as xr
import os
from collections import OrderedDict
@@ -360,7 +360,7 @@ def _evaluate(self, datasets, **kwargs):
else:
featureD[cnt2][i] = 0
#
-        featureProcessAction = simps(featureIntNew, interpGridNew)
+        featureProcessAction = simpson(featureIntNew, x=interpGridNew)
featureProcessTimeNorm[cnt2] = featureProcessTime/featureProcessAction
featureOmegaNorm[cnt2] = featureProcessAction*featureOmega
#
@@ -406,7 +406,7 @@ def _evaluate(self, datasets, **kwargs):
else:
targetD[cnt2][i] = 0
#
-        targetProcessAction = simps(targetIntNew, interpGridNew)
+        targetProcessAction = simpson(targetIntNew, x=interpGridNew)
targetProcessTimeNorm[cnt2] = targetProcessTime/targetProcessAction
targetOmegaNorm[cnt2] = targetProcessAction*targetOmega
#
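`scipy.integrate.simps` was a deprecated alias removed in SciPy 1.12; `simpson` is the same composite Simpson's rule, with the sample points passed via the `x=` keyword. A self-check on a known integral (my example, not the PPDSS data):

```python
import numpy as np
from scipy.integrate import simpson

x = np.linspace(0.0, 1.0, 101)          # 100 even subintervals
area = simpson(x**2, x=x)               # integral of x^2 on [0, 1] is 1/3
print(abs(area - 1.0/3.0) < 1e-9)       # True: Simpson is exact for quadratics
```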
5 changes: 4 additions & 1 deletion ravenframework/SupervisedLearning/ARMA.py
@@ -1628,7 +1628,10 @@ def getFundamentalFeatures(self, requestedFeatures, featureTemplate=None):
## IND
#most probabble index
if len(group['Ind']):
-        modeInd = stats.mode(group['Ind'])[0][0]
+        try:
+          modeInd = stats.mode(group['Ind'])[0][0]
+        except IndexError:
+          modeInd = stats.mode(group['Ind'])[0]
else:
modeInd = 0
ID = 'gp_{}_modeInd'.format(g)
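The try/except covers the `scipy.stats.mode` behavior change: older SciPy returned arrays (so `[0][0]` worked), while newer releases default to `keepdims=False` and return a scalar (so `[0]` suffices and `[0][0]` raises `IndexError`). The version-tolerant pattern on toy data:

```python
import numpy as np
from scipy import stats

ind = np.array([3, 1, 3, 2, 3])
try:
    modeInd = stats.mode(ind)[0][0]   # older SciPy: array result
except IndexError:
    modeInd = stats.mode(ind)[0]      # newer SciPy: scalar result
print(modeInd)  # 3
```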
5 changes: 4 additions & 1 deletion ravenframework/SupervisedLearning/KerasBase.py
@@ -2348,7 +2348,10 @@ def __returnInitialParametersLocal__(self):
@ In, None
@ Out, params, dict, dictionary of parameter names and initial values
"""
-    params = copy.deepcopy(self.__dict__)
+    selfDict = copy.copy(self.__dict__)
+    # labelEncoder can't be deepcopied so remove if it exists
+    selfDict.pop("labelEncoder", None)
+    params = copy.deepcopy(selfDict)
return params

def __returnCurrentSettingLocal__(self):
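The fix shallow-copies `__dict__`, pops the uncopyable `labelEncoder`, then deep-copies what remains. The same pattern on a hypothetical class whose attribute `deepcopy` rejects (a generator here, since generators cannot be deep-copied):

```python
import copy

class Model:
    def __init__(self):
        self.weights = [1.0, 2.0]
        self.stream = (x for x in range(3))  # deepcopy raises TypeError on this

m = Model()
selfDict = copy.copy(m.__dict__)   # shallow copy of the attribute dict
selfDict.pop("stream", None)       # drop the member deepcopy would choke on
params = copy.deepcopy(selfDict)   # safe now
print(params["weights"])  # [1.0, 2.0]
```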
@@ -212,7 +212,12 @@ def __evaluateLocal__(self,featureVals):
except TypeError:
outcomes = self.model.predict(featureVals)
outcomes = np.atleast_1d(outcomes)
-    if len(outcomes.shape) == 1:
+    #possibilities for predict results are:
+    # (n_samples,) or (n_samples, n_targets)
+    if len(outcomes.shape) == 1 and len(self.target) == 1:
+      returnDict = {self.target[0]:outcomes}
+    elif len(outcomes.shape) == 1:
+      #this might only be possible for scikitlearn bugs
returnDict = {key:value for (key,value) in zip(self.target,outcomes)}
else:
returnDict = {key: outcomes[:, i] for i, key in enumerate(self.target)}
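The new branching distinguishes scikit-learn's two predict shapes: `(n_samples,)` for a single target and `(n_samples, n_targets)` otherwise. A standalone sketch of the dispatch (hypothetical arrays and target names, not the real model output):

```python
import numpy as np

def split_outcomes(outcomes, target):
    """Map predict() output onto target names, handling both result shapes."""
    outcomes = np.atleast_1d(outcomes)
    if len(outcomes.shape) == 1 and len(target) == 1:
        return {target[0]: outcomes}                                 # (n_samples,)
    return {key: outcomes[:, i] for i, key in enumerate(target)}     # (n_samples, n_targets)

single = split_outcomes(np.array([0.1, 0.2]), ["y"])
multi = split_outcomes(np.array([[1.0, 2.0], [3.0, 4.0]]), ["y1", "y2"])
print(single["y"].shape, multi["y2"].tolist())  # (2,) [2.0, 4.0]
```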
13 changes: 8 additions & 5 deletions rook/NumTextDiff.py
@@ -90,7 +90,7 @@ def check_output(self):
same = False
# if either file did not exist, clean up and go to next outfile
if not same:
-        self.finalize_message(same, msg, test_filename)
+        self.finalize_message(same, msg, test_filename, gold_filename)
continue
cswf = DU.compare_strings_with_floats
same, message = cswf(test_file.read(),
@@ -100,18 +100,21 @@
rel_err=self.__text_opts['rel_err'])
if not same:
msg.append(message)
-      self.finalize_message(same, msg, test_filename)
+      self.finalize_message(same, msg, test_filename, gold_filename)
return self.__same, self.__message


-  def finalize_message(self, same, msg, filename):
+  def finalize_message(self, same, msg, test_filename, gold_filename):
"""
Compiles useful messages to print, prepending with file paths.
@ In, same, bool, True if files are the same
@ In, msg, list(str), messages that explain differences
-      @ In, filename, str, test filename/path
+      @ In, test_filename, str, test filename/path
+      @ In, gold_filename, str, gold filename/path
@ Out, None
"""
if not same:
self.__same = False
-      self.__message += '\nDIFF in {}: \n {}'.format(filename, '\n '.join(msg))
+      self.__message += '\nDIFF in {} and\n{}: \n {}'.format(test_filename,
+                                                             gold_filename,
+                                                             '\n '.join(msg))
23 changes: 16 additions & 7 deletions rook/OrderedCSVDiffer.py
@@ -68,17 +68,20 @@ def __init__(self, out_files, gold_files, relative_error=1e-10,
print('abs check:', self.__check_absolute_values)
print('zero thr :', self.__zero_threshold)

-  def finalize_message(self, same, msg, filename):
+  def finalize_message(self, same, msg, test_filename, gold_filename):
"""
Compiles useful messages to print, prepending with file paths.
@ In, same, bool, True if files are the same
@ In, msg, list(str), messages that explain differences
-      @ In, filename, str, test filename/path
+      @ In, test_filename, str, test filename/path
+      @ In, gold_filename, str, gold filename/path
@ Out, None
"""
if not same:
self.__same = False
-      self.__message += '\nDIFF in {}: \n {}'.format(filename, '\n '.join(msg))
+      self.__message += '\nDIFF in {} and\n{}: \n {}'.format(test_filename,
+                                                             gold_filename,
+                                                             '\n '.join(msg))

def matches(self, a_obj, b_obj, is_number, tol):
"""
@@ -143,7 +146,7 @@ def diff(self):
same = False
# if either file did not exist, clean up and go to next outfile
if not same:
-        self.finalize_message(same, msg, test_filename)
+        self.finalize_message(same, msg, test_filename, gold_filename)
continue
# at this point, we've loaded both files (even if they're empty), so compare them.
## first, cover the case when both files are empty.
@@ -157,14 +160,14 @@
if len(diff_columns) > 0:
same = False
msg.append('Columns are not the same! Different: {}'.format(', '.join(diff_columns)))
-        self.finalize_message(same, msg, test_filename)
+        self.finalize_message(same, msg, test_filename, gold_filename)
continue
## check index length
if len(gold_rows) != len(test_rows):
same = False
msg.append('Different number of entires in Gold ({}) versus Test ({})!'
.format(len(gold_rows), len(test_rows)))
-        self.finalize_message(same, msg, test_filename)
+        self.finalize_message(same, msg, test_filename, gold_filename)
continue
## at this point both CSVs have the same shape, with the same header contents.
## figure out column indexs
@@ -179,6 +182,12 @@
for idx in range(1, len(gold_rows)):
gold_row = gold_rows[idx]
test_row = test_rows[idx]
+        if len(gold_row) != len(test_row):
+          same = False
+          msg.append("Different row lengths"+
+                     f" {len(gold_row)} != {len(test_row)} "+
+                     f" in {gold_row} and {test_row}")
+          continue
for column in range(len(gold_row)):
gold_value = to_float(gold_row[column])
test_value = to_float(test_row[test_indexes[column]])
Expand All @@ -201,7 +210,7 @@ def diff(self):
msg.append('| Difference | statistics:')
msg.append(' MEAN diff.: {:1.9e}'.format(sum(diffs)/float(len(diffs))))
msg.append(' LARGEST diff.: {:1.9e}'.format(max(diffs)))
-      self.finalize_message(same, msg, test_filename)
+      self.finalize_message(same, msg, test_filename, gold_filename)
return self.__same, self.__message


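The new guard reports mismatched row lengths instead of letting the per-column loop index past the shorter row. Reduced to its essentials, with hypothetical CSV rows:

```python
# Hypothetical gold/test rows of unequal length
gold_row = ["1.0", "2.0", "3.0"]
test_row = ["1.0", "2.0"]

msg = []
same = True
if len(gold_row) != len(test_row):
    same = False
    msg.append(f"Different row lengths {len(gold_row)} != {len(test_row)}"
               f" in {gold_row} and {test_row}")
print(same, msg)  # reports the mismatch rather than raising IndexError
```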
@@ -68,7 +68,7 @@
</variable>
<TargetEvaluation class="DataObjects" type="PointSet">optOut</TargetEvaluation>
<samplerInit>
-      <limit>50</limit>
+      <limit>100</limit>
<initialSeed>42</initialSeed>
<writeSteps>every</writeSteps>
</samplerInit>
Expand All @@ -80,7 +80,7 @@
</ModelSelection>
<convergence>
<acquisition>1e-8</acquisition>
-      <persistence>5</persistence>
+      <persistence>6</persistence>
</convergence>
<Acquisition>
<ProbabilityOfImprovement>
@@ -1,2 +1,2 @@
-rogerstanimoto_ans2_ans,dice_ans2_ans,hamming_ans2_ans,jaccard_ans2_ans,kulsinski_ans2_ans,russellrao_ans2_ans,sokalmichener_ans2_ans,sokalsneath_ans2_ans,yule_ans2_ans
-0.666666666667,0.384615384615,0.5,0.555555555556,0.733333333333,0.6,0.666666666667,0.714285714286,1.2
+rogerstanimoto_ans2_ans,dice_ans2_ans,hamming_ans2_ans,jaccard_ans2_ans,russellrao_ans2_ans,sokalmichener_ans2_ans,sokalsneath_ans2_ans,yule_ans2_ans
+0.666666666667,0.384615384615,0.5,0.555555555556,0.6,0.666666666667,0.714285714286,1.2
@@ -31,7 +31,6 @@
<Metric class="Metrics" type="Metric">dice</Metric>
<Metric class="Metrics" type="Metric">hamming</Metric>
<Metric class="Metrics" type="Metric">jaccard</Metric>
-      <Metric class="Metrics" type="Metric">kulsinski</Metric>
<Metric class="Metrics" type="Metric">russellrao</Metric>
<Metric class="Metrics" type="Metric">sokalmichener</Metric>
<Metric class="Metrics" type="Metric">sokalsneath</Metric>
@@ -50,7 +49,6 @@
dice_ans2_ans,
hamming_ans2_ans,
jaccard_ans2_ans,
-                 kulsinski_ans2_ans,
russellrao_ans2_ans,
sokalmichener_ans2_ans,
sokalsneath_ans2_ans,
@@ -88,9 +86,6 @@
<Metric name="jaccard" subType="ScipyMetric">
<metricType>boolean|jaccard</metricType>
</Metric>
-    <Metric name="kulsinski" subType="ScipyMetric">
-      <metricType>boolean|kulsinski</metricType>
-    </Metric>
<Metric name="russellrao" subType="ScipyMetric">
<metricType>boolean|russellrao</metricType>
</Metric>
6 changes: 6 additions & 0 deletions tests/framework/PostProcessors/TSACharacterizer/basic.xml
@@ -25,6 +25,7 @@
<Model class="Models" type="PostProcessor">tsa_chz</Model>
<Output class="DataObjects" type="PointSet">chz</Output>
<Output class="OutStreams" type="Print">chz</Output>
+      <Output class="OutStreams" type="Print">chz_full</Output>
</PostProcess>
</Steps>

@@ -49,6 +50,11 @@
<Print name="chz">
<type>csv</type>
<source>chz</source>
+      <what>metadata|ARMA__signal_fa__AR__0,metadata|ARMA__signal_fa__MA__0,metadata|ARMA__signal_a__variance,metadata|Fourier__signal_f__period2.0__amplitude,metadata|ARMA__signal_a__MA__0,metadata|ARMA__signal_a__MA__2,metadata|Fourier__signal_f__fit_intercept,metadata|Fourier__signal_fa__period10.0__amplitude,metadata|Fourier__signal_f__period5.0__amplitude,metadata|Fourier__signal_fa__period2.0__amplitude,metadata|Fourier__signal_f__period5.0__phase,metadata|ARMA__signal_fa__variance,metadata|Fourier__signal_fa__period10.0__phase,metadata|ARMA__signal_fa__AR__1,metadata|Fourier__signal_fa__period2.0__phase,metadata|ARMA__signal_a__MA__1,metadata|ARMA__signal_fa__constant,metadata|Fourier__signal_f__period2.0__phase,metadata|ARMA__signal_a__constant,metadata|Fourier__signal_fa__period5.0__amplitude,metadata|ARMA__signal_fa__MA__1,metadata|ARMA__signal_a__AR__1,metadata|ARMA__signal_fa__MA__2,metadata|ARMA__signal_a__AR__0,metadata|Fourier__signal_f__period10.0__amplitude,metadata|Fourier__signal_fa__fit_intercept,metadata|Fourier__signal_fa__period5.0__phase</what>
</Print>
+    <Print name="chz_full">
+      <type>csv</type>
+      <source>chz</source>
+    </Print>
</OutStreams>
