Abstract:
In the mid-20th century Shannon introduced the concept of entropy, which can be intuitively described as the “average number of information bits in a single value of a random variable.” However, this concept is not applicable to individual objects, such as the text of a novel or a DNA molecule: there is no random variable without an ensemble of similar objects of the same kind.
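As a point of reference (the formula itself is standard, though not part of the talk description), Shannon's entropy of a discrete random variable $X$ taking value $x$ with probability $p(x)$ is

\[
  H(X) \;=\; -\sum_{x} p(x)\,\log_2 p(x),
\]

measured in bits; it quantifies the average information content per observed value of $X$.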
In the mid-1960s several people (Kolmogorov, Solomonoff, Levin, Chaitin, …) realized that the amount of information, or complexity, contained in an individual object can be defined as the minimal length of a computer program that generates this object, under some natural assumptions on the programming language. Thus appeared algorithmic information theory, which proved to be connected to various domains such as the philosophical foundations of probability (when can a statistical hypothesis be rejected?), combinatorics (inequalities between the cardinalities of sets and their projections), and the theory of computation.
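In one standard formulation (the notation here is illustrative), the complexity of a binary string $x$ with respect to a universal machine $U$ is

\[
  K_U(x) \;=\; \min\{\, |p| : U(p) = x \,\},
\]

the length of a shortest program $p$ that makes $U$ output $x$. By the invariance theorem, switching to another universal machine changes this quantity only by an additive constant, which is what makes the definition essentially independent of the choice of programming language.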
I will present basic definitions and some of the fundamental results of this theory, and also, depending on the audience, a glimpse of current work.