2. an empirical statement - grounded in a stochastic model. It is shorthand for ‘All raven-like birds tend to be rather black’, or whatever the professional might deem correct. The meaning of such statements is more subject to context than in the case of well-groomed definitions.

The human mind thus faces a choice: to adopt a definition and run the risk that it does not fit reality so well, or to adopt a statement on averages and work out the empirical loss function in more detail. Decisions on such statements are thus sensitive to the loss function, but the second category requires more detail.
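In decision-theoretic shorthand (a sketch, with notation of my own rather than from the text): write $a$ for the option chosen - a definition or a statement on averages - and $L(a, \text{world})$ for the empirical loss. The mind then picks
\[
a^{*} \;=\; \arg\min_{a}\, \mathbb{E}\bigl[\,L(a, \text{world})\,\bigr],
\]
where the second option requires the more detailed specification of $L$.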

This of course does not solve everything. Distinguishing these two dimensions or perspectives does not solve all problems in their domains. Also a definition like ‘All ravens are black by definition’ does not answer the question whether a particular object is a raven or is black. Is a size of 10 kilometers acceptable? Did we look in daytime or at night? Must it be alive, and then, what is life? So the distinction between definitions and empirical statements is useful, but it does not solve all problems. The point is not quite that one can always adjust definitions, but rather that a definition is not reality by itself. (Though it can get close.)

At one point in history, scientists were willing to accept the periodic system of elements to catalogue the wide variety of materials around us. There was apparently little loss involved in accepting these definitions, or Lavoisier’s catalogue of elements was more gainful than other catalogues. The definitions did not change the materials, but facilitated more efficient research. At one point in history, see Mirowski (1989), economists were willing to analyse human behaviour in terms of utility maximisation. The approach is an empty box, since any behaviour can be described as such. For example, satisficing behaviour can be represented as minimising the distance from satisfaction, see the sketch below. Also in ‘evolutionary economics’ the utility maximisation model can be applied, though these researchers are critical of the approach. (While, curiously, Charles Darwin was inspired, amongst others, by Adam Smith.) The newer approach of laboratory experiments makes us even more critical of the rationality hypothesis. Utility maximisation however helps to organise one’s thoughts, helps professional discussion, facilitates modelling and empirical estimation, and is generally considered an advance over less explicit approaches.
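As a minimal formalisation of that satisficing point (the symbols are mine, not from the original): a satisficer who seeks a payoff $f(x)$ at the aspiration level $s$ can be recast as a maximiser of the utility function
\[
u(x) \;=\; -\,\bigl|\, s - f(x) \,\bigr|,
\]
which peaks at $u(x) = 0$ exactly when the aspiration is met - showing how readily the maximisation framework absorbs a rival description.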

As with the Pythagoras example, but now on empirical ground, there is a switch from mere empirical knowledge to a set of definitions when the loss function allows it.

Kuhn (1962) describes major changes as ‘paradigm switches’ (though Margaret Masterman famously counted some twenty-one distinct uses of that word). I rather draw attention to the change from empirical knowledge to definition. This change need not be a paradigm switch. Paradigm switches may be the most intriguing or flashy examples of the introduction of new definitions, but the change from empirical knowledge to definition also occurs in ‘normal science’.

Determinism and free will

Holland around 1600 saw the theological argument between Gomarus, who defended predestination, and Arminius, who defended a measure of volition. This discussion had started before them, did not end with them, and continues to this day, also in these pages.

The 20th century gave a novel twist to the argument, namely quantum mechanics. Instead of the folly of the gods, there now is a randomizer in scientific garb. If objects, and the molecules in our brains, have random aspects, then this would be neither determinism nor volition. Quantum mechanics normally is applied at the micro level of particles, and there is the suggestion that larger aggregations of mass would still behave in the Newton-Einstein fashion. Schrödinger however gave an example - his cat - of how quantum mechanics could also extend into this macro world. So the challenge to the debate on predestination is real. [60]

The quantum model is stochastic in itself. This differs from the randomness caused by simple measurement errors - the randomness commonly used in economics. However, economics has some purely stochastic models of its own too. There is for example the Erlang queueing model. Consider a post office with clients arriving and being served. Interarrival and service times can be modelled with exponential distributions, and this allows us to determine the average length of the queue, the average waiting time, the average utilisation rate of the service window, and so on (see the sketch below). If the situation gets more complicated, then research economists use computer simulation models to find the best mode of operation. This example shows that economics is already familiar with a model that is stochastic in itself. Note that there are some ways to re-introduce a degree of determinism - as your barbershop may require you to make an appointment. The observation that we make here is that the stochastic approach is basically a modelling method, and there is no implication that arrival and service are intrinsically random.
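As a minimal sketch of this post office example - the standard single-window (M/M/1) formulas, with rates that are illustrative rather than from the original:

```python
import random

# Minimal M/M/1 sketch of the post office example: exponential
# interarrival and service times, one service window.
# The rates below are illustrative, not from the original text.
arrival_rate = 4.0   # lambda: clients arriving per hour
service_rate = 5.0   # mu: clients served per hour (must exceed lambda)

# Closed-form (textbook) averages for the M/M/1 queue.
rho = arrival_rate / service_rate           # utilisation of the window
Lq = rho ** 2 / (1 - rho)                   # average queue length
Wq = rho / (service_rate - arrival_rate)    # average waiting time (hours)
print(f"closed form: utilisation {rho:.2f}, queue {Lq:.2f}, wait {Wq:.2f} h")

# When the situation gets more complicated, one simulates instead.
# A tiny event-driven check that reproduces the waiting time above.
random.seed(1)
clock = finish = total_wait = 0.0
n_clients = 100_000
for _ in range(n_clients):
    clock += random.expovariate(arrival_rate)           # next arrival
    start = max(clock, finish)                          # wait if window busy
    total_wait += start - clock
    finish = start + random.expovariate(service_rate)   # service ends
print(f"simulated  : wait {total_wait / n_clients:.2f} h")
```

With these rates the window is busy 80% of the time, the average queue holds 3.2 clients, and the average wait is 0.8 hours; the simulation confirms the closed-form figure.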