
Confidences for Learning Systems

Supervised learning methods take a possibly large set of training examples and adapt their representation to achieve a good fit to the data. Since training data can never cover all possible input stimuli, there will always be cases where the output of the learning architecture remains uncertain. We identify these cases by estimating the confidence of the algorithm's output. If the confidence falls below a given threshold, we may reject the classification as not robust enough.

Confidence estimation in exemplar-based learning models can be based on the distance relations between exemplars in their feature spaces.

Distance-based confidences are applicable if the metric in the feature space is adapted according to a learning rule. In some cases it is preferable to use local rejection thresholds for different regions of the feature space rather than a single global threshold.
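As an illustration, a distance-based confidence for a nearest-prototype classifier can be computed from the relative distances to the closest correct and closest competing prototype; classifications whose confidence falls below a threshold are rejected. This is a minimal sketch; the function names and the threshold value are illustrative, not taken from the cited work.

```python
import numpy as np

def nearest_prototype_confidence(x, prototypes, labels):
    """Classify x by its nearest prototype and return a distance-based
    confidence: the relative margin between the closest prototype of the
    predicted class (d_plus) and the closest prototype of any other
    class (d_minus). The confidence lies in [0, 1]; higher means more
    certain."""
    d = np.linalg.norm(prototypes - x, axis=1)
    winner = np.argmin(d)
    pred = labels[winner]
    d_plus = d[winner]
    d_minus = d[labels != pred].min()
    conf = (d_minus - d_plus) / (d_minus + d_plus)
    return pred, conf

def classify_with_reject(x, prototypes, labels, threshold=0.2):
    """Reject the classification if the confidence is below the threshold
    (returns None as the label in that case)."""
    pred, conf = nearest_prototype_confidence(x, prototypes, labels)
    return (pred, conf) if conf >= threshold else (None, conf)
```

A sample close to a prototype of its own class and far from all competing prototypes receives a confidence near 1; a sample equidistant between two classes receives a confidence near 0 and is rejected.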

Learning is one of the key features of intelligence: the capability of using prior experience to adapt behavior to novel situations. This requires some form of memory, which may be divided into short-term and long-term memory. The stored information can be used to adapt and synthesize representations, acquire new skills, and change values or preferences. This flexibility widens the scope of intelligent systems beyond the boundaries of fully pre-programmed solutions.

Incremental learning is characterized by the capability to perform experience-based adaptation from a continuous stream of incoming data. It thus facilitates immediate feedback between a learning system, its user(s), and its environment. We develop new approaches to the key challenge of incremental learning systems: finding a good compromise between stability and plasticity of the learned representations. Incremental learning is a prerequisite for personalized assistance systems.
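The stability/plasticity trade-off can be made concrete with an online prototype update in the style of learning vector quantization: each incoming sample attracts the nearest prototype of its own class and repels the nearest wrong-class prototype, and the learning rate governs how plastic (fast-adapting but forgetful) or stable (slow but robust) the representation is. This is a hedged sketch of one standard scheme, not the group's specific method; the function name and learning rate are illustrative.

```python
import numpy as np

def incremental_update(prototypes, labels, x, y, lr=0.05):
    """One online LVQ-style adaptation step on a labeled sample (x, y).
    The nearest prototype of the correct class is moved toward x, the
    nearest prototype of a different class is moved away. The learning
    rate lr controls the stability/plasticity trade-off: small lr means
    stable but slow adaptation, large lr means plastic but forgetful."""
    d = np.linalg.norm(prototypes - x, axis=1)
    same = np.where(labels == y)[0]
    diff = np.where(labels != y)[0]
    w_plus = same[np.argmin(d[same])]    # nearest correct-class prototype
    w_minus = diff[np.argmin(d[diff])]   # nearest wrong-class prototype
    prototypes[w_plus] += lr * (x - prototypes[w_plus])
    prototypes[w_minus] -= lr * (x - prototypes[w_minus])
    return prototypes
```

Because each step touches only two prototypes, the model adapts immediately to new data from the stream while leaving the rest of the learned representation untouched.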

For more information:

L. Fischer, B. Hammer and H. Wersing, "Efficient rejection strategies for prototype-based classification",
Neurocomputing, vol. 169, pp. 334-342, 2015.