
Review of the article: van Dijk, Nico M. "Error bounds and comparison results: the Markov reward approach for queueing networks." In: Queueing Networks, pp. 397–459, Internat. Ser. Oper. Res. Management Sci., 154, Springer, New York, 2011.

PASINI, Leonardo
2012-01-01

Abstract

This paper presents a method for comparing two queueing networks. In this context one typically thinks of one network as a solvable modification of another, unsolvable one of practical interest. The approach evaluates steady-state performance measures through a cumulative reward structure and relies strongly on the analytical estimation of so-called bias terms. To this end, in contrast with the standard stochastic comparison approach, a Markov reward approach is presented, based upon a discrete-time transformation and one-step Markov reward (dynamic programming) steps. In more detail, the essential ingredients of this approach are:
- to analyze steady-state performance measures via expected average rewards;
- to use a discrete-time Markov transition structure and to compare the two systems through the difference in their one-step transition structures;
- to use inductive arguments to estimate or bound the so-called bias terms for one of the two systems.
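The three ingredients above can be sketched on a toy model. The following is a minimal illustration, not the chapter's own construction: an M/M/1/K queue is uniformized into a discrete-time chain, t-step cumulative rewards yield the expected average reward, and their differences yield the bias terms. The model, parameters, and function names are assumptions made here for illustration.

```python
def uniformized_chain(lam, mu, K):
    """One-step transition probabilities of an M/M/1/K queue after
    uniformization at rate B = lam + mu (the discrete-time transformation)."""
    B = lam + mu
    P = [[0.0] * (K + 1) for _ in range(K + 1)]
    for n in range(K + 1):
        up = lam / B if n < K else 0.0    # accepted arrival
        down = mu / B if n > 0 else 0.0   # service completion
        if n < K:
            P[n][n + 1] += up
        if n > 0:
            P[n][n - 1] += down
        P[n][n] += 1.0 - up - down        # dummy (self-loop) transition
    return P

def cumulative_rewards(P, r, t):
    """t-step expected cumulative rewards V_t(n); V_{t+1}(0) - V_t(0)
    converges to the average reward g, and V_t(n) - V_t(0) to the bias terms."""
    V = [0.0] * len(r)
    for _ in range(t):
        V = [r[n] + sum(p * V[m] for m, p in enumerate(P[n]))
             for n in range(len(r))]
    return V

lam, mu, K, T = 0.5, 1.0, 10, 5000        # illustrative parameters
P = uniformized_chain(lam, mu, K)
r = list(range(K + 1))                    # reward = queue length
V = cumulative_rewards(P, r, T)
# one more reward step at state 0 estimates the average reward g
g = (r[0] + sum(p * V[m] for m, p in enumerate(P[0]))) - V[0]
bias = [V[n] - V[0] for n in range(K + 1)]  # bias (relative value) terms
```

Given a second, modified chain with one-step matrix Q over the same states, the one-step comparison then bounds the difference in average rewards by max over n of |sum over m of (Q[n][m] - P[n][m]) * bias[m]|, which is exactly where the estimated bias terms enter.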
ISBN: 0821835831

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11581/353990
