public class ClassifierAttributeEval extends ASEvaluation implements AttributeEvaluator, OptionHandler
-L Evaluate an attribute by measuring the impact of leaving it out from the full set instead of considering its worth in isolation
-execution-slots <integer> Number of attributes to evaluate in parallel. Default = 1 (i.e. no parallelism)
-B <base learner> class name of base learner to use for accuracy estimation. Place any classifier options LAST on the command line following a "--". eg.: -B weka.classifiers.bayes.NaiveBayes ... -- -K (default: weka.classifiers.rules.ZeroR)
-F <num> number of cross validation folds to use for estimating accuracy. (default=5)
-R <seed> Seed for cross validation accuracy estimation. (default = 1)
-T <num> threshold by which to execute another cross validation (standard deviation---expressed as a percentage of the mean). (default: 0.01 (1%))
-E <acc | rmse | mae | f-meas | auc | auprc> Performance evaluation measure to use for selecting attributes. (Default = accuracy for discrete class and rmse for numeric class)
-IRclass <label | index> Optional class value (label or 1-based index) to use in conjunction with IR statistics (f-meas, auc or auprc). Omitting this option will use the class-weighted average.
Options specific to scheme weka.classifiers.rules.ZeroR:
-output-debug-info If set, classifier is run in debug mode and may output additional info to the console
-do-not-check-capabilities If set, classifier capabilities are not checked before classifier is built (use with caution).
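For programmatic use, the evaluator is typically combined with a `Ranker` search through the `weka.attributeSelection.AttributeSelection` driver class. The sketch below assumes the Weka 3.8 API with weka.jar on the classpath; the dataset is a small synthetic one built inline so the example is self-contained, and `NaiveBayes` is just an illustrative choice of base learner.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Random;

import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.ClassifierAttributeEval;
import weka.attributeSelection.Ranker;
import weka.classifiers.bayes.NaiveBayes;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instances;

public class RankAttributes {

    /** Builds a tiny synthetic dataset: x1 predicts the class, x2 is noise. */
    static Instances syntheticData() {
        ArrayList<Attribute> atts = new ArrayList<>();
        atts.add(new Attribute("x1"));
        atts.add(new Attribute("x2"));
        atts.add(new Attribute("class", Arrays.asList("yes", "no")));
        Instances data = new Instances("demo", atts, 40);
        data.setClassIndex(2);
        Random rnd = new Random(1);
        for (int i = 0; i < 40; i++) {
            double x1 = rnd.nextDouble();
            data.add(new DenseInstance(1.0,
                new double[] { x1, rnd.nextDouble(), x1 > 0.5 ? 0.0 : 1.0 }));
        }
        return data;
    }

    /** Ranks all attributes with ClassifierAttributeEval and a Ranker search. */
    public static String rank() throws Exception {
        ClassifierAttributeEval eval = new ClassifierAttributeEval();
        eval.setClassifier(new NaiveBayes()); // base learner for accuracy estimation (-B)
        eval.setFolds(5);                     // -F 5
        eval.setSeed(1);                      // -R 1

        AttributeSelection sel = new AttributeSelection();
        sel.setEvaluator(eval);
        sel.setSearch(new Ranker());
        sel.SelectAttributes(syntheticData());
        return sel.toResultsString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(rank());
    }
}
```

The same configuration corresponds to the command-line options `-B weka.classifiers.bayes.NaiveBayes -F 5 -R 1`.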
| Constructor and Description |
|---|
| `ClassifierAttributeEval()` Constructor. |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `buildEvaluator(Instances data)` Initializes a ClassifierAttributeEval attribute evaluator. |
| `String` | `classifierTipText()` Returns the tip text for this property. |
| `double` | `evaluateAttribute(int attribute)` Evaluates an individual attribute by estimating the performance of the base classifier when the attribute is used on its own (or, if leave-one-attribute-out is set, left out of the full set). |
| `String` | `evaluationMeasureTipText()` Returns the tip text for this property. |
| `String` | `foldsTipText()` Returns the tip text for this property. |
| `Capabilities` | `getCapabilities()` Returns the capabilities of this evaluator. |
| `Classifier` | `getClassifier()` Gets the classifier used as the base learner. |
| `SelectedTag` | `getEvaluationMeasure()` Gets the currently set performance evaluation measure used for selecting attributes. |
| `int` | `getFolds()` Gets the number of folds used for accuracy estimation. |
| `String` | `getIRClassValue()` Gets the class value (label or index) to use with IR metric evaluation of subsets. |
| `boolean` | `getLeaveOneAttributeOut()` Gets whether to evaluate the merit of an attribute based on the impact of leaving it out from the full set instead of considering its worth in isolation. |
| `int` | `getNumToEvaluateInParallel()` Gets the number of attributes to evaluate in parallel. |
| `String[]` | `getOptions()` Returns the current setup. |
| `String` | `getRevision()` Returns the revision string. |
| `int` | `getSeed()` Gets the random number seed used for cross validation. |
| `double` | `getThreshold()` Gets the value of the threshold. |
| `String` | `globalInfo()` Returns a string describing this attribute evaluator. |
| `String` | `IRClassValueTipText()` Returns the tip text for this property. |
| `String` | `leaveOneAttributeOutTipText()` Returns the tip text for this property. |
| `Enumeration<Option>` | `listOptions()` Returns an enumeration describing the available options. |
| `static void` | `main(String[] args)` Main method for executing this class. |
| `String` | `numToEvaluateInParallelTipText()` Returns the tip text for this property. |
| `String` | `seedTipText()` Returns the tip text for this property. |
| `void` | `setClassifier(Classifier newClassifier)` Sets the classifier to use for accuracy estimation. |
| `void` | `setEvaluationMeasure(SelectedTag newMethod)` Sets the performance evaluation measure to use for selecting attributes. |
| `void` | `setFolds(int f)` Sets the number of folds to use for accuracy estimation. |
| `void` | `setIRClassValue(String val)` Sets the class value (label or index) to use with IR metric evaluation of subsets. |
| `void` | `setLeaveOneAttributeOut(boolean l)` Sets whether to evaluate the merit of an attribute based on the impact of leaving it out from the full set instead of considering its worth in isolation. |
| `void` | `setNumToEvaluateInParallel(int n)` Sets the number of attributes to evaluate in parallel. |
| `void` | `setOptions(String[] options)` Parses a given list of options. |
| `void` | `setSeed(int s)` Sets the seed to use for cross validation. |
| `void` | `setThreshold(double t)` Sets the value of the threshold for repeating cross validation. |
| `String` | `thresholdTipText()` Returns the tip text for this property. |
| `String` | `toString()` Returns a description of the evaluator. |
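The core workflow behind the methods above is `buildEvaluator` followed by one `evaluateAttribute` call per non-class attribute. A minimal sketch, assuming the Weka 3.8 API with weka.jar on the classpath and using a small synthetic dataset built inline (the default ZeroR base learner is kept here for brevity):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Random;

import weka.attributeSelection.ClassifierAttributeEval;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instances;

public class EvaluateEachAttribute {

    /** Returns the merit of every non-class attribute in a tiny synthetic dataset. */
    public static double[] merits() throws Exception {
        ArrayList<Attribute> atts = new ArrayList<>();
        atts.add(new Attribute("x1"));
        atts.add(new Attribute("x2"));
        atts.add(new Attribute("class", Arrays.asList("yes", "no")));
        Instances data = new Instances("demo", atts, 40);
        data.setClassIndex(2);
        Random rnd = new Random(1);
        for (int i = 0; i < 40; i++) {
            double x1 = rnd.nextDouble();
            data.add(new DenseInstance(1.0,
                new double[] { x1, rnd.nextDouble(), x1 > 0.5 ? 0.0 : 1.0 }));
        }

        ClassifierAttributeEval eval = new ClassifierAttributeEval(); // default ZeroR base learner
        eval.buildEvaluator(data);   // must run before any evaluateAttribute call
        double[] merits = new double[data.classIndex()];
        for (int i = 0; i < merits.length; i++) {
            merits[i] = eval.evaluateAttribute(i); // merit of attribute i
        }
        return merits;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Arrays.toString(merits()));
    }
}
```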
Methods inherited from class weka.attributeSelection.ASEvaluation:
clean, doNotCheckCapabilitiesTipText, forName, getDoNotCheckCapabilities, makeCopies, makeCopy, postExecution, postProcess, preExecution, run, runEvaluator, setDoNotCheckCapabilities

Methods inherited from class java.lang.Object:
equals, getClass, hashCode, notify, notifyAll, wait, wait, wait

public String globalInfo()
Returns a string describing this attribute evaluator.
public Enumeration<Option> listOptions()
Specified by: listOptions in interface OptionHandler
Overrides: listOptions in class ASEvaluation

public void setOptions(String[] options) throws Exception
Parses a given list of options. Valid options are:
-L Evaluate an attribute by measuring the impact of leaving it out from the full set instead of considering its worth in isolation
-execution-slots <integer> Number of attributes to evaluate in parallel. Default = 1 (i.e. no parallelism)
-B <base learner> class name of base learner to use for accuracy estimation. Place any classifier options LAST on the command line following a "--". eg.: -B weka.classifiers.bayes.NaiveBayes ... -- -K (default: weka.classifiers.rules.ZeroR)
-F <num> number of cross validation folds to use for estimating accuracy. (default=5)
-R <seed> Seed for cross validation accuracy estimation. (default = 1)
-T <num> threshold by which to execute another cross validation (standard deviation---expressed as a percentage of the mean). (default: 0.01 (1%))
-E <acc | rmse | mae | f-meas | auc | auprc> Performance evaluation measure to use for selecting attributes. (Default = accuracy for discrete class and rmse for numeric class)
-IRclass <label | index> Optional class value (label or 1-based index) to use in conjunction with IR statistics (f-meas, auc or auprc). Omitting this option will use the class-weighted average.
Options specific to scheme weka.classifiers.rules.ZeroR:
-output-debug-info If set, classifier is run in debug mode and may output additional info to the console
-do-not-check-capabilities If set, classifier capabilities are not checked before classifier is built (use with caution).
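The same option string accepted on the command line can be applied in code; a minimal sketch, assuming the standard `weka.core.Utils.splitOptions` helper and weka.jar on the classpath:

```java
import weka.attributeSelection.ClassifierAttributeEval;
import weka.core.Utils;

public class ConfigureFromOptions {

    /** Applies a command-line style option string and reports the resulting setup. */
    public static String configure(String optionString) throws Exception {
        ClassifierAttributeEval eval = new ClassifierAttributeEval();
        // splitOptions tokenises the string the same way the command line would.
        eval.setOptions(Utils.splitOptions(optionString));
        // getOptions() returns the full current setup, including defaults.
        return String.join(" ", eval.getOptions());
    }

    public static void main(String[] args) throws Exception {
        System.out.println(configure("-F 10 -R 42 -E acc -B weka.classifiers.rules.ZeroR"));
    }
}
```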
Specified by: setOptions in interface OptionHandler
Overrides: setOptions in class ASEvaluation
Parameters: options - the list of options as an array of strings
Throws: Exception - if an option is not supported

public String[] getOptions()
Specified by: getOptions in interface OptionHandler
Overrides: getOptions in class ASEvaluation

public String leaveOneAttributeOutTipText()
public void setLeaveOneAttributeOut(boolean l)
Parameters: l - true if each attribute should be evaluated by measuring the impact of leaving it out from the full set

public boolean getLeaveOneAttributeOut()
public String numToEvaluateInParallelTipText()
public void setNumToEvaluateInParallel(int n)
Parameters: n - the number of attributes to evaluate in parallel

public int getNumToEvaluateInParallel()
public void setIRClassValue(String val)
Parameters: val - the class label or 1-based index of the class label to use when evaluating subsets with an IR metric

public String getIRClassValue()
public String IRClassValueTipText()
public String evaluationMeasureTipText()
public SelectedTag getEvaluationMeasure()
public void setEvaluationMeasure(SelectedTag newMethod)
Parameters: newMethod - the new performance evaluation metric to use

public String thresholdTipText()
public void setThreshold(double t)
Parameters: t - the value of the threshold

public double getThreshold()
public String foldsTipText()
public void setFolds(int f)
Parameters: f - the number of folds

public int getFolds()
public String seedTipText()
public void setSeed(int s)
Parameters: s - the seed

public int getSeed()
public String classifierTipText()
public void setClassifier(Classifier newClassifier)
Parameters: newClassifier - the Classifier to use.

public Classifier getClassifier()
public Capabilities getCapabilities()
Specified by: getCapabilities in interface CapabilitiesHandler
Overrides: getCapabilities in class ASEvaluation
Returns: the Capabilities of this evaluator

public void buildEvaluator(Instances data) throws Exception
Specified by: buildEvaluator in class ASEvaluation
Parameters: data - set of instances serving as training data
Throws: Exception - if the evaluator has not been generated successfully

public double evaluateAttribute(int attribute) throws Exception
Specified by: evaluateAttribute in interface AttributeEvaluator
Parameters: attribute - the index of the attribute to be evaluated
Throws: Exception - if the attribute could not be evaluated

public String toString()
public String getRevision()
Specified by: getRevision in interface RevisionHandler
Overrides: getRevision in class ASEvaluation

public static void main(String[] args)
Parameters: args - the options

Copyright © 2020 University of Waikato, Hamilton, NZ. All rights reserved.