Avtomatika i Telemekhanika, 2018, Issue 4, Pages 152–166
(Mi at14849)
This article is cited in 12 scientific papers.
Intellectual Control Systems, Data Analysis
Stackelberg equilibrium in a dynamic stimulation model with complete information
D. B. Rokhlin, G. A. Ougolnitsky Southern Federal University, Rostov-on-Don, Russia
Abstract:
We consider a stimulation model with Markov dynamics and discounted optimality criteria in the case of discrete time and an infinite planning horizon. In this model, the regulator exerts an economic influence on the executor by choosing a stimulating function that depends on the system state and on the actions of the executor, who employs positional control strategies. The system dynamics, the revenues of the regulator, and the costs of the executor all depend on the system state and the executor's actions. We show that finding an approximate solution of the (inverse) Stackelberg game reduces to solving an optimal control problem with a criterion equal to the difference between the revenue of the regulator and the costs of the executor. The $\varepsilon$-optimal strategy of the regulator is then to economically motivate the executor to follow this optimal control strategy.
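The reduction described above can be illustrated numerically: once the game is reduced to a single Markov decision problem with reward equal to the regulator's revenue minus the executor's cost, the corresponding Bellman equation can be solved by standard value iteration. The sketch below uses purely hypothetical toy data (state/action counts, discount factor, randomly generated revenues, costs, and transitions); none of these numbers come from the paper, and the paper itself works with a general model rather than this finite example.

```python
import numpy as np

# Hypothetical toy data: none of these numbers come from the paper.
n_states, n_actions = 3, 2
beta = 0.9  # discount factor

rng = np.random.default_rng(0)
revenue = rng.uniform(0.0, 1.0, (n_states, n_actions))  # regulator's revenue g(x, a)
cost = rng.uniform(0.0, 0.5, (n_states, n_actions))     # executor's cost c(x, a)

# Transition kernel p(x' | x, a): rows normalized to probability distributions.
P = rng.uniform(size=(n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)

# Value iteration for the combined criterion g - c:
#   V(x) = max_a [ g(x,a) - c(x,a) + beta * sum_{x'} p(x'|x,a) V(x') ]
V = np.zeros(n_states)
for _ in range(1000):
    Q = revenue - cost + beta * (P @ V)  # Q[x, a]; (3,2,3) @ (3,) -> (3,2)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# Actions the regulator should (economically) induce the executor to take.
policy = Q.argmax(axis=1)
```

Since the discounted Bellman operator is a contraction with modulus `beta`, the iteration converges geometrically, and the resulting `policy` is the control strategy whose execution the regulator's stimulating function must make profitable for the executor.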
Keywords:
two-level incentive model, inverse Stackelberg game, discounted optimality criterion, Bellman equation.
Citation:
D. B. Rokhlin, G. A. Ougolnitsky, “Stackelberg equilibrium in a dynamic stimulation model with complete information”, Avtomat. i Telemekh., 2018, no. 4, 152–166; Autom. Remote Control, 79:4 (2018), 701–712
Linking options:
https://www.mathnet.ru/eng/at14849
https://www.mathnet.ru/eng/at/y2018/i4/p152