0$.
",daniel paulusma,,2018.0,,arXiv,Hof2018,True,,arXiv,Not available,"Simple Games versus Weighted Voting Games: Bounding the Critical
Threshold Value",44acdb77e3e0a8646e538953184efb2a,http://arxiv.org/abs/1810.08841v1
16058," The semigroup game is a two-person zero-sum game defined on a semigroup S as
follows: Players 1 and 2 choose elements x and y in S, respectively, and player
1 receives a payoff f(xy) defined by a function f from S to [-1,1]. If the
semigroup is amenable in the sense of Day and von Neumann, one can extend the
set of classical strategies, namely countably additive probability measures on
S, to include some finitely additive measures in a natural way. This extended
game has a value and the players have optimal strategies. This theorem extends
previous results for the multiplication game on a compact group or on the
positive integers with a specific payoff. We also prove that the procedure of
extending the set of allowed strategies preserves classical solutions: if a
semigroup game has a classical solution, this solution also solves the extended
game.
",valerio capraro,,2011.0,10.1007/s00182-012-0345-7,International Journal of Game Theory 42 (2013) 917-929,Capraro2011,True,,arXiv,Not available,Optimal strategies for a game on amenable semigroups,616e95546f65ff2faa01af422b872879,http://arxiv.org/abs/1104.3098v3
16059," The semigroup game is a two-person zero-sum game defined on a semigroup S as
follows: Players 1 and 2 choose elements x and y in S, respectively, and player
1 receives a payoff f(xy) defined by a function f from S to [-1,1]. If the
semigroup is amenable in the sense of Day and von Neumann, one can extend the
set of classical strategies, namely countably additive probability measures on
S, to include some finitely additive measures in a natural way. This extended
game has a value and the players have optimal strategies. This theorem extends
previous results for the multiplication game on a compact group or on the
positive integers with a specific payoff. We also prove that the procedure of
extending the set of allowed strategies preserves classical solutions: if a
semigroup game has a classical solution, this solution also solves the extended
game.
",kent morrison,,2011.0,10.1007/s00182-012-0345-7,International Journal of Game Theory 42 (2013) 917-929,Capraro2011,True,,arXiv,Not available,Optimal strategies for a game on amenable semigroups,616e95546f65ff2faa01af422b872879,http://arxiv.org/abs/1104.3098v3
16060," A network of cognitive transmitters is considered. Each transmitter has to
decide his power control policy in order to maximize energy-efficiency of his
transmission. For this, a transmitter has two actions to take. He has to decide
whether to sense the power levels of the others or not (which corresponds to a
finite sensing game), and to choose his transmit power level for each block
(which corresponds to a compact power control game). The sensing game is shown
to be a weighted potential game and its set of correlated equilibria is
studied. Interestingly, it is shown that the general hybrid game where each
transmitter can jointly choose the hybrid pair of actions (to sense or not to
sense, transmit power level) leads to an outcome which is worse than the one
obtained by playing the sensing game first, and then playing the power control
game. This is an interesting Braess-type paradox to be aware of for
energy-efficient power control in cognitive networks.
",mael treust,,2012.0,,arXiv,Treust2012,True,,arXiv,Not available,"""To sense"" or ""not to sense"" in energy-efficient power control games",8a19ec24abc7b9dfa2480533a6158d86,http://arxiv.org/abs/1210.6370v1
16061," A network of cognitive transmitters is considered. Each transmitter has to
decide his power control policy in order to maximize energy-efficiency of his
transmission. For this, a transmitter has two actions to take. He has to decide
whether to sense the power levels of the others or not (which corresponds to a
finite sensing game), and to choose his transmit power level for each block
(which corresponds to a compact power control game). The sensing game is shown
to be a weighted potential game and its set of correlated equilibria is
studied. Interestingly, it is shown that the general hybrid game where each
transmitter can jointly choose the hybrid pair of actions (to sense or not to
sense, transmit power level) leads to an outcome which is worse than the one
obtained by playing the sensing game first, and then playing the power control
game. This is an interesting Braess-type paradox to be aware of for
energy-efficient power control in cognitive networks.
",yezekael hayel,,2012.0,,arXiv,Treust2012,True,,arXiv,Not available,"""To sense"" or ""not to sense"" in energy-efficient power control games",8a19ec24abc7b9dfa2480533a6158d86,http://arxiv.org/abs/1210.6370v1
16062," A network of cognitive transmitters is considered. Each transmitter has to
decide his power control policy in order to maximize energy-efficiency of his
transmission. For this, a transmitter has two actions to take. He has to decide
whether to sense the power levels of the others or not (which corresponds to a
finite sensing game), and to choose his transmit power level for each block
(which corresponds to a compact power control game). The sensing game is shown
to be a weighted potential game and its set of correlated equilibria is
studied. Interestingly, it is shown that the general hybrid game where each
transmitter can jointly choose the hybrid pair of actions (to sense or not to
sense, transmit power level) leads to an outcome which is worse than the one
obtained by playing the sensing game first, and then playing the power control
game. This is an interesting Braess-type paradox to be aware of for
energy-efficient power control in cognitive networks.
",samson lasaulce,,2012.0,,arXiv,Treust2012,True,,arXiv,Not available,"""To sense"" or ""not to sense"" in energy-efficient power control games",8a19ec24abc7b9dfa2480533a6158d86,http://arxiv.org/abs/1210.6370v1
16063," There has been significant recent interest in game-theoretic approaches to
security, with much of the recent research focused on utilizing the
leader-follower Stackelberg game model. Among the major applications are the
ARMOR program deployed at LAX Airport and the IRIS program in use by the US
Federal Air Marshals (FAMS). The foundational assumption for using Stackelberg
games is that security forces (leaders), acting first, commit to a randomized
strategy; while their adversaries (followers) choose their best response after
surveillance of this randomized strategy. Yet, in many situations, a leader may
face uncertainty about the follower's surveillance capability. Previous work
fails to address how a leader should compute her strategy given such
uncertainty. We provide five contributions in the context of a general class of
security games. First, we show that the Nash equilibria in security games are
interchangeable, thus alleviating the equilibrium selection problem. Second,
under a natural restriction on security games, any Stackelberg strategy is also
a Nash equilibrium strategy; and furthermore, the solution is unique in a class
of security games of which ARMOR is a key exemplar. Third, when faced with a
follower that can attack multiple targets, many of these properties no longer
hold. Fourth, we show experimentally that in most (but not all) games where the
restriction does not hold, the Stackelberg strategy is still a Nash equilibrium
strategy, but this is no longer true when the attacker can attack multiple
targets. Finally, as a possible direction for future research, we propose an
extensive-form game model that makes the defender's uncertainty about the
attacker's ability to observe explicit.
",christopher kiekintveld,,2014.0,10.1613/jair.3269,"Journal Of Artificial Intelligence Research, Volume 41, pages
297-327, 2011",Korzhyk2014,True,,arXiv,Not available,"Stackelberg vs. Nash in Security Games: An Extended Investigation of
Interchangeability, Equivalence, and Uniqueness",a789c67c90ecbcf51f9809205fa22421,http://arxiv.org/abs/1401.3888v1
16064," A network of cognitive transmitters is considered. Each transmitter has to
decide his power control policy in order to maximize energy-efficiency of his
transmission. For this, a transmitter has two actions to take. He has to decide
whether to sense the power levels of the others or not (which corresponds to a
finite sensing game), and to choose his transmit power level for each block
(which corresponds to a compact power control game). The sensing game is shown
to be a weighted potential game and its set of correlated equilibria is
studied. Interestingly, it is shown that the general hybrid game where each
transmitter can jointly choose the hybrid pair of actions (to sense or not to
sense, transmit power level) leads to an outcome which is worse than the one
obtained by playing the sensing game first, and then playing the power control
game. This is an interesting Braess-type paradox to be aware of for
energy-efficient power control in cognitive networks.
",merouane debbah,,2012.0,,arXiv,Treust2012,True,,arXiv,Not available,"""To sense"" or ""not to sense"" in energy-efficient power control games",8a19ec24abc7b9dfa2480533a6158d86,http://arxiv.org/abs/1210.6370v1
16065," We investigate determinacy of delay games with Borel winning conditions,
infinite-duration two-player games in which one player may delay her moves to
obtain a lookahead on her opponent's moves.
First, we prove determinacy of such games with respect to a fixed evolution
of the lookahead. However, strategies in such games may depend on information
about the evolution. Thus, we introduce different notions of universal
strategies for both players, which are evolution-independent, and determine the
exact amount of information a universal strategy needs about the history of a
play and the evolution of the lookahead to be winning. In particular, we show
that delay games with Borel winning conditions are determined with respect to
universal strategies. Finally, we consider decidability problems, e.g., ""Does a
player have a universal winning strategy for delay games with a given winning
condition?"", for omega-regular and omega-context-free winning conditions.
",felix klein,,2015.0,,arXiv,Klein2015,True,,arXiv,Not available,"What are Strategies in Delay Games? Borel Determinacy for Games with
Lookahead",642715bf97320cde0492a400adccd62f,http://arxiv.org/abs/1504.02627v1
16066," We investigate determinacy of delay games with Borel winning conditions,
infinite-duration two-player games in which one player may delay her moves to
obtain a lookahead on her opponent's moves.
First, we prove determinacy of such games with respect to a fixed evolution
of the lookahead. However, strategies in such games may depend on information
about the evolution. Thus, we introduce different notions of universal
strategies for both players, which are evolution-independent, and determine the
exact amount of information a universal strategy needs about the history of a
play and the evolution of the lookahead to be winning. In particular, we show
that delay games with Borel winning conditions are determined with respect to
universal strategies. Finally, we consider decidability problems, e.g., ""Does a
player have a universal winning strategy for delay games with a given winning
condition?"", for omega-regular and omega-context-free winning conditions.
",martin zimmermann,,2015.0,,arXiv,Klein2015,True,,arXiv,Not available,"What are Strategies in Delay Games? Borel Determinacy for Games with
Lookahead",642715bf97320cde0492a400adccd62f,http://arxiv.org/abs/1504.02627v1
16067," Gvozdeva, Hemaspaandra, and Slinko (2011) have introduced three hierarchies
for simple games in order to measure the distance of a given simple game to the
class of (roughly) weighted voting games. Their third class
$\mathcal{C}_\alpha$ consists of all simple games permitting a weighted
representation such that each winning coalition has a weight of at least 1 and
each losing coalition a weight of at most $\alpha$. For a given game the
minimal possible value of $\alpha$ is called its critical threshold value. We
continue the work on the critical threshold value, initiated by Gvozdeva et
al., and contribute some new results on the possible values for a given number
of voters as well as some general bounds for restricted subclasses of games. A
strong relation between this concept and the cost of stability, i.e., the minimum
amount of external payment to ensure stability in a coalitional game, is
uncovered.
",josep freixas,,2011.0,,arXiv,Freixas2011,True,,arXiv,Not available,On $α$-roughly weighted games,104bd6a6bb73bdb7342723e678f49707,http://arxiv.org/abs/1112.2861v2
16068," Gvozdeva, Hemaspaandra, and Slinko (2011) have introduced three hierarchies
for simple games in order to measure the distance of a given simple game to the
class of (roughly) weighted voting games. Their third class
$\mathcal{C}_\alpha$ consists of all simple games permitting a weighted
representation such that each winning coalition has a weight of at least 1 and
each losing coalition a weight of at most $\alpha$. For a given game the
minimal possible value of $\alpha$ is called its critical threshold value. We
continue the work on the critical threshold value, initiated by Gvozdeva et
al., and contribute some new results on the possible values for a given number
of voters as well as some general bounds for restricted subclasses of games. A
strong relation between this concept and the cost of stability, i.e., the minimum
amount of external payment to ensure stability in a coalitional game, is
uncovered.
",sascha kurz,,2011.0,,arXiv,Freixas2011,True,,arXiv,Not available,On $α$-roughly weighted games,104bd6a6bb73bdb7342723e678f49707,http://arxiv.org/abs/1112.2861v2
16069," Algorithms for computing game-theoretic solutions have recently been applied
to a number of security domains. However, many of the techniques developed for
compact representations of security games do not extend to {\em Bayesian}
security games, which allow us to model uncertainty about the attacker's type.
In this paper, we introduce a general framework of {\em catcher-evader} games
that can capture Bayesian security games as well as other game families of
interest. We show that computing Stackelberg strategies is NP-hard, but give an
algorithm for computing a Nash equilibrium that performs well in experiments.
We also prove that the Nash equilibria of these games satisfy the {\em
interchangeability} property, so that equilibrium selection is not an issue.
",yuqian li,,2016.0,,arXiv,Li2016,True,,arXiv,Not available,Catcher-Evader Games,dfcfa20897fea5ef39562b585c869882,http://arxiv.org/abs/1602.01896v2
16070," Algorithms for computing game-theoretic solutions have recently been applied
to a number of security domains. However, many of the techniques developed for
compact representations of security games do not extend to {\em Bayesian}
security games, which allow us to model uncertainty about the attacker's type.
In this paper, we introduce a general framework of {\em catcher-evader} games
that can capture Bayesian security games as well as other game families of
interest. We show that computing Stackelberg strategies is NP-hard, but give an
algorithm for computing a Nash equilibrium that performs well in experiments.
We also prove that the Nash equilibria of these games satisfy the {\em
interchangeability} property, so that equilibrium selection is not an issue.
",vincent conitzer,,2016.0,,arXiv,Li2016,True,,arXiv,Not available,Catcher-Evader Games,dfcfa20897fea5ef39562b585c869882,http://arxiv.org/abs/1602.01896v2
16071," Algorithms for computing game-theoretic solutions have recently been applied
to a number of security domains. However, many of the techniques developed for
compact representations of security games do not extend to {\em Bayesian}
security games, which allow us to model uncertainty about the attacker's type.
In this paper, we introduce a general framework of {\em catcher-evader} games
that can capture Bayesian security games as well as other game families of
interest. We show that computing Stackelberg strategies is NP-hard, but give an
algorithm for computing a Nash equilibrium that performs well in experiments.
We also prove that the Nash equilibria of these games satisfy the {\em
interchangeability} property, so that equilibrium selection is not an issue.
",dmytro korzhyk,,2016.0,,arXiv,Li2016,True,,arXiv,Not available,Catcher-Evader Games,dfcfa20897fea5ef39562b585c869882,http://arxiv.org/abs/1602.01896v2
16072," In mean-payoff games, the objective of the protagonist is to ensure that the
limit average of an infinite sequence of numeric weights is nonnegative. In
energy games, the objective is to ensure that the running sum of weights is
always nonnegative. Generalized mean-payoff and energy games replace individual
weights by tuples, and the limit average (resp. running sum) of each coordinate
must be (resp. remain) nonnegative. These games have applications in the
synthesis of resource-bounded processes with multiple resources.
We prove the finite-memory determinacy of generalized energy games and show
the inter-reducibility of generalized mean-payoff and energy games for
finite-memory strategies. We also improve the computational complexity for
solving both classes of games with finite-memory strategies: while the
previously best known upper bound was EXPSPACE, and no lower bound was known,
we give an optimal coNP-complete bound. For memoryless strategies, we show that
the problem of deciding the existence of a winning strategy for the protagonist
is NP-complete.
",krishnendu chatterjee,,2010.0,,arXiv,Chatterjee2010,True,,arXiv,Not available,Generalized Mean-payoff and Energy Games,1c79159d028be52d91d3ef2f089180b2,http://arxiv.org/abs/1007.1669v4
16073," In mean-payoff games, the objective of the protagonist is to ensure that the
limit average of an infinite sequence of numeric weights is nonnegative. In
energy games, the objective is to ensure that the running sum of weights is
always nonnegative. Generalized mean-payoff and energy games replace individual
weights by tuples, and the limit average (resp. running sum) of each coordinate
must be (resp. remain) nonnegative. These games have applications in the
synthesis of resource-bounded processes with multiple resources.
We prove the finite-memory determinacy of generalized energy games and show
the inter-reducibility of generalized mean-payoff and energy games for
finite-memory strategies. We also improve the computational complexity for
solving both classes of games with finite-memory strategies: while the
previously best known upper bound was EXPSPACE, and no lower bound was known,
we give an optimal coNP-complete bound. For memoryless strategies, we show that
the problem of deciding the existence of a winning strategy for the protagonist
is NP-complete.
",laurent doyen,,2010.0,,arXiv,Chatterjee2010,True,,arXiv,Not available,Generalized Mean-payoff and Energy Games,1c79159d028be52d91d3ef2f089180b2,http://arxiv.org/abs/1007.1669v4
16074," There has been significant recent interest in game-theoretic approaches to
security, with much of the recent research focused on utilizing the
leader-follower Stackelberg game model. Among the major applications are the
ARMOR program deployed at LAX Airport and the IRIS program in use by the US
Federal Air Marshals (FAMS). The foundational assumption for using Stackelberg
games is that security forces (leaders), acting first, commit to a randomized
strategy; while their adversaries (followers) choose their best response after
surveillance of this randomized strategy. Yet, in many situations, a leader may
face uncertainty about the follower's surveillance capability. Previous work
fails to address how a leader should compute her strategy given such
uncertainty. We provide five contributions in the context of a general class of
security games. First, we show that the Nash equilibria in security games are
interchangeable, thus alleviating the equilibrium selection problem. Second,
under a natural restriction on security games, any Stackelberg strategy is also
a Nash equilibrium strategy; and furthermore, the solution is unique in a class
of security games of which ARMOR is a key exemplar. Third, when faced with a
follower that can attack multiple targets, many of these properties no longer
hold. Fourth, we show experimentally that in most (but not all) games where the
restriction does not hold, the Stackelberg strategy is still a Nash equilibrium
strategy, but this is no longer true when the attacker can attack multiple
targets. Finally, as a possible direction for future research, we propose an
extensive-form game model that makes the defender's uncertainty about the
attacker's ability to observe explicit.
",vincent conitzer,,2014.0,10.1613/jair.3269,"Journal Of Artificial Intelligence Research, Volume 41, pages
297-327, 2011",Korzhyk2014,True,,arXiv,Not available,"Stackelberg vs. Nash in Security Games: An Extended Investigation of
Interchangeability, Equivalence, and Uniqueness",a789c67c90ecbcf51f9809205fa22421,http://arxiv.org/abs/1401.3888v1
16075," In mean-payoff games, the objective of the protagonist is to ensure that the
limit average of an infinite sequence of numeric weights is nonnegative. In
energy games, the objective is to ensure that the running sum of weights is
always nonnegative. Generalized mean-payoff and energy games replace individual
weights by tuples, and the limit average (resp. running sum) of each coordinate
must be (resp. remain) nonnegative. These games have applications in the
synthesis of resource-bounded processes with multiple resources.
We prove the finite-memory determinacy of generalized energy games and show
the inter-reducibility of generalized mean-payoff and energy games for
finite-memory strategies. We also improve the computational complexity for
solving both classes of games with finite-memory strategies: while the
previously best known upper bound was EXPSPACE, and no lower bound was known,
we give an optimal coNP-complete bound. For memoryless strategies, we show that
the problem of deciding the existence of a winning strategy for the protagonist
is NP-complete.
",thomas henzinger,,2010.0,,arXiv,Chatterjee2010,True,,arXiv,Not available,Generalized Mean-payoff and Energy Games,1c79159d028be52d91d3ef2f089180b2,http://arxiv.org/abs/1007.1669v4
16076," In mean-payoff games, the objective of the protagonist is to ensure that the
limit average of an infinite sequence of numeric weights is nonnegative. In
energy games, the objective is to ensure that the running sum of weights is
always nonnegative. Generalized mean-payoff and energy games replace individual
weights by tuples, and the limit average (resp. running sum) of each coordinate
must be (resp. remain) nonnegative. These games have applications in the
synthesis of resource-bounded processes with multiple resources.
We prove the finite-memory determinacy of generalized energy games and show
the inter-reducibility of generalized mean-payoff and energy games for
finite-memory strategies. We also improve the computational complexity for
solving both classes of games with finite-memory strategies: while the
previously best known upper bound was EXPSPACE, and no lower bound was known,
we give an optimal coNP-complete bound. For memoryless strategies, we show that
the problem of deciding the existence of a winning strategy for the protagonist
is NP-complete.
",jean-francois raskin,,2010.0,,arXiv,Chatterjee2010,True,,arXiv,Not available,Generalized Mean-payoff and Energy Games,1c79159d028be52d91d3ef2f089180b2,http://arxiv.org/abs/1007.1669v4
16077," A poset game is a two-player game played over a partially ordered set (poset)
in which the players alternate choosing an element of the poset, removing it
and all elements greater than it. The first player unable to select an element
of the poset loses. Polynomial time algorithms exist for certain restricted
classes of poset games, such as the game of Nim. However, until recently the
complexity of arbitrary finite poset games was only known to exist somewhere
between NC^1 and PSPACE. We resolve this discrepancy by showing that deciding
the winner of an arbitrary finite poset game is PSPACE-complete. To this end,
we give an explicit reduction from Node Kayles, a PSPACE-complete game in which
players vie to choose an independent set in a graph.
",daniel grier,,2012.0,10.1007/978-3-642-39206-1_42,"ICALP 2013, Part I, LNCS 7965, 2013, pp 497-503",Grier2012,True,,arXiv,Not available,Deciding the Winner of an Arbitrary Finite Poset Game is PSPACE-Complete,0a6a8bdf70c3f671c0397cc96c88b3fe,http://arxiv.org/abs/1209.1750v2
16078," We study the problem of finding robust equilibria in multiplayer concurrent
games with mean payoff objectives. A $(k,t)$-robust equilibrium is a strategy
profile such that no coalition of size $k$ can improve the payoff of one of its
members by deviating, and no coalition of size $t$ can decrease the payoff of
other players. We are interested in pure equilibria, that is, solutions that
can be implemented using non-randomized strategies. We suggest a general
transformation from multiplayer games to two-player games such that pure
equilibria in the first game correspond to winning strategies in the second
one. We then devise from this transformation, an algorithm which computes
equilibria in mean-payoff games. Robust equilibria in mean-payoff games reduce
to winning strategies in multidimensional mean-payoff games for some threshold
satisfying some constraints. We then show that the existence of such equilibria
can be decided in polynomial space, and that the decision problem is
PSPACE-complete.
",romain brenguier,,2013.0,,arXiv,Brenguier2013,True,,arXiv,Not available,Robust Equilibria in Concurrent Games,2516108e039f96b6edd4ac96ed1cbe0c,http://arxiv.org/abs/1311.7683v7
16079," We study infinitely repeated games in settings of imperfect monitoring. We
first prove a family of theorems that show that when the signals observed by
the players satisfy a condition known as $(\epsilon, \gamma)$-differential
privacy, the folk theorem has little bite: for values of $\epsilon$ and
$\gamma$ sufficiently small, for a fixed discount factor, any equilibrium of
the repeated game involves players playing approximate equilibria of the stage
game in every period. Next, we argue that in large games ($n$ player games in
which unilateral deviations by single players have only a small impact on the
utility of other players), many monitoring settings naturally lead to signals
that satisfy $(\epsilon,\gamma)$-differential privacy, for $\epsilon$ and
$\gamma$ tending to zero as the number of players $n$ grows large. We conclude
that in such settings, the set of equilibria of the repeated game collapses to
the set of equilibria of the stage game.
",mallesh pai,,2014.0,,arXiv,Pai2014,True,,arXiv,Not available,An Anti-Folk Theorem for Large Repeated Games with Imperfect Monitoring,683187401e45dc08474e144a68e9a2fb,http://arxiv.org/abs/1402.2801v2
16080," We study infinitely repeated games in settings of imperfect monitoring. We
first prove a family of theorems that show that when the signals observed by
the players satisfy a condition known as $(\epsilon, \gamma)$-differential
privacy, the folk theorem has little bite: for values of $\epsilon$ and
$\gamma$ sufficiently small, for a fixed discount factor, any equilibrium of
the repeated game involves players playing approximate equilibria of the stage
game in every period. Next, we argue that in large games ($n$ player games in
which unilateral deviations by single players have only a small impact on the
utility of other players), many monitoring settings naturally lead to signals
that satisfy $(\epsilon,\gamma)$-differential privacy, for $\epsilon$ and
$\gamma$ tending to zero as the number of players $n$ grows large. We conclude
that in such settings, the set of equilibria of the repeated game collapses to
the set of equilibria of the stage game.
",aaron roth,,2014.0,,arXiv,Pai2014,True,,arXiv,Not available,An Anti-Folk Theorem for Large Repeated Games with Imperfect Monitoring,683187401e45dc08474e144a68e9a2fb,http://arxiv.org/abs/1402.2801v2
16081," We study infinitely repeated games in settings of imperfect monitoring. We
first prove a family of theorems that show that when the signals observed by
the players satisfy a condition known as $(\epsilon, \gamma)$-differential
privacy, the folk theorem has little bite: for values of $\epsilon$ and
$\gamma$ sufficiently small, for a fixed discount factor, any equilibrium of
the repeated game involves players playing approximate equilibria of the stage
game in every period. Next, we argue that in large games ($n$ player games in
which unilateral deviations by single players have only a small impact on the
utility of other players), many monitoring settings naturally lead to signals
that satisfy $(\epsilon,\gamma)$-differential privacy, for $\epsilon$ and
$\gamma$ tending to zero as the number of players $n$ grows large. We conclude
that in such settings, the set of equilibria of the repeated game collapses to
the set of equilibria of the stage game.
",jonathan ullman,,2014.0,,arXiv,Pai2014,True,,arXiv,Not available,An Anti-Folk Theorem for Large Repeated Games with Imperfect Monitoring,683187401e45dc08474e144a68e9a2fb,http://arxiv.org/abs/1402.2801v2
16082," A key feature of wireless communications is spatial reuse. However, the
spatial aspect is not yet well understood for the purpose of designing
efficient spectrum sharing mechanisms. In this paper, we propose a framework of
spatial spectrum access games on directed interference graphs, which can model
quite general interference relationship with spatial reuse in wireless
networks. We show that a pure Nash equilibrium exists for the two classes of
games: (1) any spatial spectrum access games on directed acyclic graphs, and
(2) any games satisfying the congestion property on directed trees and directed
forests. Under mild technical conditions, the spatial spectrum access games
with random backoff and Aloha channel contention mechanisms on undirected
graphs also have a pure Nash equilibrium. We also quantify the price of anarchy
of the spatial spectrum access game. We then propose a distributed learning
algorithm, which only utilizes users' local observations to adaptively adjust
the spectrum access strategies. We show that the distributed learning algorithm
can converge to an approximate mixed-strategy Nash equilibrium for any spatial
spectrum access games. Numerical results demonstrate that the distributed
learning algorithm achieves superior performance over a
random access algorithm.
",xu chen,,2014.0,,arXiv,Chen2014,True,,arXiv,Not available,Spatial Spectrum Access Game,a83d0ca29b32d30ccedfc12e5bfc4a54,http://arxiv.org/abs/1405.3860v1
16083," A key feature of wireless communications is spatial reuse. However, the
spatial aspect is not yet well understood for the purpose of designing
efficient spectrum sharing mechanisms. In this paper, we propose a framework of
spatial spectrum access games on directed interference graphs, which can model
quite general interference relationship with spatial reuse in wireless
networks. We show that a pure Nash equilibrium exists for the two classes of
games: (1) any spatial spectrum access games on directed acyclic graphs, and
(2) any games satisfying the congestion property on directed trees and directed
forests. Under mild technical conditions, the spatial spectrum access games
with random backoff and Aloha channel contention mechanisms on undirected
graphs also have a pure Nash equilibrium. We also quantify the price of anarchy
of the spatial spectrum access game. We then propose a distributed learning
algorithm, which only utilizes users' local observations to adaptively adjust
the spectrum access strategies. We show that the distributed learning algorithm
can converge to an approximate mixed-strategy Nash equilibrium for any spatial
spectrum access games. Numerical results demonstrate that the distributed
learning algorithm achieves superior performance over a
random access algorithm.
",jianwei huang,,2014.0,,arXiv,Chen2014,True,,arXiv,Not available,Spatial Spectrum Access Game,a83d0ca29b32d30ccedfc12e5bfc4a54,http://arxiv.org/abs/1405.3860v1
16084," We study two impartial games introduced by Anderson and Harary and further
developed by Barnes. Both games are played by two players who alternately
select previously unselected elements of a finite group. The first player who
builds a generating set from the jointly selected elements wins the first game.
The first player who cannot select an element without building a generating set
loses the second game. After the development of some general results, we
determine the nim-numbers of these games for abelian and dihedral groups. We
also present some conjectures based on computer calculations. Our main
computational and theoretical tool is the structure diagram of a game, which is
a type of identification digraph of the game digraph that is compatible with
the nim-numbers of the positions. Structure diagrams also provide simple yet
intuitive visualizations of these games that capture the complexity of the
positions.
",dana ernst,,2014.0,,arXiv,Ernst2014,True,,arXiv,Not available,Impartial achievement and avoidance games for generating finite groups,3a5fd7e706b6c960ab58617285f48e32,http://arxiv.org/abs/1407.0784v2
16085," There has been significant recent interest in game-theoretic approaches to
security, with much of the recent research focused on utilizing the
leader-follower Stackelberg game model. Among the major applications are the
ARMOR program deployed at LAX Airport and the IRIS program in use by the US
Federal Air Marshals (FAMS). The foundational assumption for using Stackelberg
games is that security forces (leaders), acting first, commit to a randomized
strategy; while their adversaries (followers) choose their best response after
surveillance of this randomized strategy. Yet, in many situations, a leader may
face uncertainty about the follower's surveillance capability. Previous work
fails to address how a leader should compute her strategy given such
uncertainty. We provide five contributions in the context of a general class of
security games. First, we show that the Nash equilibria in security games are
interchangeable, thus alleviating the equilibrium selection problem. Second,
under a natural restriction on security games, any Stackelberg strategy is also
a Nash equilibrium strategy; and furthermore, the solution is unique in a class
of security games of which ARMOR is a key exemplar. Third, when faced with a
follower that can attack multiple targets, many of these properties no longer
hold. Fourth, we show experimentally that in most (but not all) games where the
restriction does not hold, the Stackelberg strategy is still a Nash equilibrium
strategy, but this is no longer true when the attacker can attack multiple
targets. Finally, as a possible direction for future research, we propose an
extensive-form game model that makes the defender's uncertainty about the
attacker's ability to observe explicit.
",milind tambe,,2014.0,10.1613/jair.3269,"Journal Of Artificial Intelligence Research, Volume 41, pages
297-327, 2011",Korzhyk2014,True,,arXiv,Not available,"Stackelberg vs. Nash in Security Games: An Extended Investigation of
Interchangeability, Equivalence, and Uniqueness",a789c67c90ecbcf51f9809205fa22421,http://arxiv.org/abs/1401.3888v1
16086," We study two impartial games introduced by Anderson and Harary and further
developed by Barnes. Both games are played by two players who alternately
select previously unselected elements of a finite group. The first player who
builds a generating set from the jointly selected elements wins the first game.
The first player who cannot select an element without building a generating set
loses the second game. After the development of some general results, we
determine the nim-numbers of these games for abelian and dihedral groups. We
also present some conjectures based on computer calculations. Our main
computational and theoretical tool is the structure diagram of a game, which is
a type of identification digraph of the game digraph that is compatible with
the nim-numbers of the positions. Structure diagrams also provide simple yet
intuitive visualizations of these games that capture the complexity of the
positions.
",nandor sieben,,2014.0,,arXiv,Ernst2014,True,,arXiv,Not available,Impartial achievement and avoidance games for generating finite groups,3a5fd7e706b6c960ab58617285f48e32,http://arxiv.org/abs/1407.0784v2
16087," A real-valued game has the finite improvement property (FIP), if starting
from an arbitrary strategy profile and letting the players change strategies to
increase their individual payoffs in a sequential but non-deterministic order
always reaches a Nash equilibrium. E.g., potential games have the FIP. Many of
them have the FIP by chance nonetheless, since modifying even a single payoff
may ruin the property. This article characterises (in quadratic time) the class
of the finite games where FIP not only holds but is also preserved when
modifying all the occurrences of an arbitrary payoff. The characterisation
relies on a pattern-matching sufficient condition for games (finite or
infinite) to enjoy the FIP, and is followed by an inductive description of this
class.
A real-valued game is weakly acyclic if the improvement described above can
reach a Nash equilibrium. This article characterises the finite such games
using Markov chains and almost sure convergence to equilibrium. It also gives
an inductive description of the two-player such games.
",stephane roux,,2014.0,,arXiv,Roux2014,True,,arXiv,Not available,On terminating improvement in two-player games,5b6fb078b83b91be9c0a4b7b80cd8624,http://arxiv.org/abs/1409.6489v2
16088," We investigate market forces that would lead to the emergence of new classes
of players in the sponsored search market. We report a 3-fold diversification
triggered by two inherent features of the sponsored search market, namely,
capacity constraints and collusion-vulnerability of current mechanisms. In the
first scenario, we present a comparative study of two models motivated by
capacity constraints - one where the additional capacity is provided by
for-profit agents, who compete for slots in the original auction, draw traffic,
and run their own sub-auctions, and the other, where the additional capacity is
provided by the auctioneer herself, by essentially acting as a mediator and
running a single combined auction. This study was initiated by us in
\cite{SRGR07}, where the mediator-based model was studied. In the present work,
we study the auctioneer-based model and show that this model seems inferior to
the mediator-based model in terms of revenue or efficiency guarantee due to
added capacity. In the second scenario, we initiate a game theoretic study of
current sponsored search auctions, involving incentive-driven mediators who
exploit the fact that these mechanisms are not collusion-resistant. In
particular, we show that advertisers can improve their payoffs by using the
services of the mediator compared to directly participating in the auction, and
that the mediator can also obtain monetary benefit, without violating incentive
constraints from the advertisers who do not use its services. We also point out
that the auctioneer cannot do very much via mechanism design to avoid such
for-profit mediation without losing badly in terms of revenue, and therefore,
the mediators are likely to prevail.
",sudhir singh,,2007.0,,arXiv,Singh2007,True,,arXiv,Not available,Diversification in the Internet Economy:The Role of For-Profit Mediators,b8d73da59ca186ce4d3aae68ebdb841e,http://arxiv.org/abs/0711.0259v1
16089," We investigate market forces that would lead to the emergence of new classes
of players in the sponsored search market. We report a 3-fold diversification
triggered by two inherent features of the sponsored search market, namely,
capacity constraints and collusion-vulnerability of current mechanisms. In the
first scenario, we present a comparative study of two models motivated by
capacity constraints - one where the additional capacity is provided by
for-profit agents, who compete for slots in the original auction, draw traffic,
and run their own sub-auctions, and the other, where the additional capacity is
provided by the auctioneer herself, by essentially acting as a mediator and
running a single combined auction. This study was initiated by us in
\cite{SRGR07}, where the mediator-based model was studied. In the present work,
we study the auctioneer-based model and show that this model seems inferior to
the mediator-based model in terms of revenue or efficiency guarantee due to
added capacity. In the second scenario, we initiate a game theoretic study of
current sponsored search auctions, involving incentive-driven mediators who
exploit the fact that these mechanisms are not collusion-resistant. In
particular, we show that advertisers can improve their payoffs by using the
services of the mediator compared to directly participating in the auction, and
that the mediator can also obtain monetary benefit, without violating incentive
constraints from the advertisers who do not use its services. We also point out
that the auctioneer cannot do very much via mechanism design to avoid such
for-profit mediation without losing badly in terms of revenue, and therefore,
the mediators are likely to prevail.
",vwani roychowdhury,,2007.0,,arXiv,Singh2007,True,,arXiv,Not available,Diversification in the Internet Economy:The Role of For-Profit Mediators,b8d73da59ca186ce4d3aae68ebdb841e,http://arxiv.org/abs/0711.0259v1
16090," We investigate market forces that would lead to the emergence of new classes
of players in the sponsored search market. We report a 3-fold diversification
triggered by two inherent features of the sponsored search market, namely,
capacity constraints and collusion-vulnerability of current mechanisms. In the
first scenario, we present a comparative study of two models motivated by
capacity constraints - one where the additional capacity is provided by
for-profit agents, who compete for slots in the original auction, draw traffic,
and run their own sub-auctions, and the other, where the additional capacity is
provided by the auctioneer herself, by essentially acting as a mediator and
running a single combined auction. This study was initiated by us in
\cite{SRGR07}, where the mediator-based model was studied. In the present work,
we study the auctioneer-based model and show that this model seems inferior to
the mediator-based model in terms of revenue or efficiency guarantee due to
added capacity. In the second scenario, we initiate a game theoretic study of
current sponsored search auctions, involving incentive-driven mediators who
exploit the fact that these mechanisms are not collusion-resistant. In
particular, we show that advertisers can improve their payoffs by using the
services of the mediator compared to directly participating in the auction, and
that the mediator can also obtain monetary benefit, without violating incentive
constraints from the advertisers who do not use its services. We also point out
that the auctioneer cannot do very much via mechanism design to avoid such
for-profit mediation without losing badly in terms of revenue, and therefore,
the mediators are likely to prevail.
",himawan gunadhi,,2007.0,,arXiv,Singh2007,True,,arXiv,Not available,Diversification in the Internet Economy:The Role of For-Profit Mediators,b8d73da59ca186ce4d3aae68ebdb841e,http://arxiv.org/abs/0711.0259v1
16091," We investigate market forces that would lead to the emergence of new classes
of players in the sponsored search market. We report a 3-fold diversification
triggered by two inherent features of the sponsored search market, namely,
capacity constraints and collusion-vulnerability of current mechanisms. In the
first scenario, we present a comparative study of two models motivated by
capacity constraints - one where the additional capacity is provided by
for-profit agents, who compete for slots in the original auction, draw traffic,
and run their own sub-auctions, and the other, where the additional capacity is
provided by the auctioneer herself, by essentially acting as a mediator and
running a single combined auction. This study was initiated by us in
\cite{SRGR07}, where the mediator-based model was studied. In the present work,
we study the auctioneer-based model and show that this model seems inferior to
the mediator-based model in terms of revenue or efficiency guarantee due to
added capacity. In the second scenario, we initiate a game theoretic study of
current sponsored search auctions, involving incentive-driven mediators who
exploit the fact that these mechanisms are not collusion-resistant. In
particular, we show that advertisers can improve their payoffs by using the
services of the mediator compared to directly participating in the auction, and
that the mediator can also obtain monetary benefit, without violating incentive
constraints from the advertisers who do not use its services. We also point out
that the auctioneer cannot do very much via mechanism design to avoid such
for-profit mediation without losing badly in terms of revenue, and therefore,
the mediators are likely to prevail.
",behnam rezaei,,2007.0,,arXiv,Singh2007,True,,arXiv,Not available,Diversification in the Internet Economy:The Role of For-Profit Mediators,b8d73da59ca186ce4d3aae68ebdb841e,http://arxiv.org/abs/0711.0259v1
16092," A line of recent work provides welfare guarantees of simple combinatorial
auction formats, such as selling m items via simultaneous second price auctions
(SiSPAs) (Christodoulou et al. 2008, Bhawalkar and Roughgarden 2011, Feldman et
al. 2013). These guarantees hold even when the auctions are repeatedly executed
and players use no-regret learning algorithms. Unfortunately, off-the-shelf
no-regret algorithms for these auctions are computationally inefficient as the
number of actions is exponential. We show that this obstacle is insurmountable:
there are no polynomial-time no-regret algorithms for SiSPAs, unless
RP$\supseteq$ NP, even when the bidders are unit-demand. Our lower bound raises
the question of how good outcomes polynomially-bounded bidders may discover in
such auctions.
To answer this question, we propose a novel concept of learning in auctions,
termed ""no-envy learning."" This notion is founded upon Walrasian equilibrium,
and we show that it is both efficiently implementable and results in
approximately optimal welfare, even when the bidders have fractionally
subadditive (XOS) valuations (assuming demand oracles) or coverage valuations
(without demand oracles). No-envy learning outcomes are a relaxation of
no-regret outcomes, which maintain their approximate welfare optimality while
endowing them with computational tractability. Our results extend to other
auction formats that have been studied in the literature via the smoothness
paradigm.
Our results for XOS valuations are enabled by a novel
Follow-The-Perturbed-Leader algorithm for settings where the number of experts
is infinite, and the payoff function of the learner is non-linear. This
algorithm has applications outside of auction settings, such as in security
games. Our result for coverage valuations is based on a novel use of convex
rounding schemes and a reduction to online convex optimization.
",constantinos daskalakis,,2015.0,,arXiv,Daskalakis2015,True,,arXiv,Not available,"Learning in Auctions: Regret is Hard, Envy is Easy",d9dfbdbab7db90e5b3a8ab9fe308680d,http://arxiv.org/abs/1511.01411v6
16093," A line of recent work provides welfare guarantees of simple combinatorial
auction formats, such as selling m items via simultaneous second price auctions
(SiSPAs) (Christodoulou et al. 2008, Bhawalkar and Roughgarden 2011, Feldman et
al. 2013). These guarantees hold even when the auctions are repeatedly executed
and players use no-regret learning algorithms. Unfortunately, off-the-shelf
no-regret algorithms for these auctions are computationally inefficient as the
number of actions is exponential. We show that this obstacle is insurmountable:
there are no polynomial-time no-regret algorithms for SiSPAs, unless
RP$\supseteq$ NP, even when the bidders are unit-demand. Our lower bound raises
the question of how good outcomes polynomially-bounded bidders may discover in
such auctions.
To answer this question, we propose a novel concept of learning in auctions,
termed ""no-envy learning."" This notion is founded upon Walrasian equilibrium,
and we show that it is both efficiently implementable and results in
approximately optimal welfare, even when the bidders have fractionally
subadditive (XOS) valuations (assuming demand oracles) or coverage valuations
(without demand oracles). No-envy learning outcomes are a relaxation of
no-regret outcomes, which maintain their approximate welfare optimality while
endowing them with computational tractability. Our results extend to other
auction formats that have been studied in the literature via the smoothness
paradigm.
Our results for XOS valuations are enabled by a novel
Follow-The-Perturbed-Leader algorithm for settings where the number of experts
is infinite, and the payoff function of the learner is non-linear. This
algorithm has applications outside of auction settings, such as in security
games. Our result for coverage valuations is based on a novel use of convex
rounding schemes and a reduction to online convex optimization.
",vasilis syrgkanis,,2015.0,,arXiv,Daskalakis2015,True,,arXiv,Not available,"Learning in Auctions: Regret is Hard, Envy is Easy",d9dfbdbab7db90e5b3a8ab9fe308680d,http://arxiv.org/abs/1511.01411v6
16094," Modern commercial Internet search engines display advertisements alongside
the search results in response to user queries. Such sponsored search relies on
market mechanisms to elicit prices for these advertisements, making use of an
auction among advertisers who bid in order to have their ads shown for specific
keywords. We present an overview of the current systems for such auctions and
also describe the underlying game-theoretic aspects. The game involves three
parties--advertisers, the search engine, and search users--and we present
example research directions that emphasize the role of each. The algorithms for
bidding and pricing in these games use techniques from three mathematical
areas: mechanism design, optimization, and statistical estimation. Finally, we
present some challenges in sponsored search advertising.
",jon feldman,,2008.0,,arXiv,Feldman2008,True,,arXiv,Not available,Algorithmic Methods for Sponsored Search Advertising,21df29ce558c708e11fa9546f4dae141,http://arxiv.org/abs/0805.1759v1
16095," Modern commercial Internet search engines display advertisements alongside
the search results in response to user queries. Such sponsored search relies on
market mechanisms to elicit prices for these advertisements, making use of an
auction among advertisers who bid in order to have their ads shown for specific
keywords. We present an overview of the current systems for such auctions and
also describe the underlying game-theoretic aspects. The game involves three
parties--advertisers, the search engine, and search users--and we present
example research directions that emphasize the role of each. The algorithms for
bidding and pricing in these games use techniques from three mathematical
areas: mechanism design, optimization, and statistical estimation. Finally, we
present some challenges in sponsored search advertising.
",s. muthukrishnan,,2008.0,,arXiv,Feldman2008,True,,arXiv,Not available,Algorithmic Methods for Sponsored Search Advertising,21df29ce558c708e11fa9546f4dae141,http://arxiv.org/abs/0805.1759v1
16096," Simple games cover voting systems in which a single alternative, such as a
bill or an amendment, is pitted against the status quo. A simple game or a
yes-no voting system is a set of rules that specifies exactly which collections
of ``yea'' votes yield passage of the issue at hand. A collection of ``yea''
voters forms a winning coalition.
We are interested in performing a complexity analysis of problems on such
games depending on the game representation. We consider four natural explicit
representations: winning, losing, minimal winning, and maximal losing. We
first analyze the computational complexity of obtaining a particular
representation of a simple game from a different one. We show that in some
cases this transformation can be done in polynomial time while the others require
exponential time. The second question is classifying the complexity for testing
whether a game is simple or weighted. We show that for the four types of
representation both problems can be solved in polynomial time. Finally, we
provide results on the complexity of testing whether a simple game or a
weighted game is of a special type. In this way, we analyze strongness,
properness, decisiveness and homogeneity, which are desirable properties to be
fulfilled for a simple game.
",josep freixas,,2008.0,,arXiv,Freixas2008,True,,arXiv,Not available,The Complexity of Testing Properties of Simple Games,c0de48daa862d8912666b4bc38f0c827,http://arxiv.org/abs/0803.0404v1
16097," We describe an algorithm for computing best response strategies in a class of
two-player infinite games of incomplete information, defined by payoffs
piecewise linear in agents' types and actions, conditional on linear
comparisons of agents' actions. We show that this class includes many
well-known games including a variety of auctions and a novel allocation game.
In some cases, the best-response algorithm can be iterated to compute
Bayes-Nash equilibria. We demonstrate the efficiency of our approach on
existing and new games.
",daniel reeves,,2012.0,,arXiv,Reeves2012,True,,arXiv,Not available,"Computing Best-Response Strategies in Infinite Games of Incomplete
Information",067fb14ef316953fb2525cc774bd6388,http://arxiv.org/abs/1207.4171v1
16098," We describe an algorithm for computing best response strategies in a class of
two-player infinite games of incomplete information, defined by payoffs
piecewise linear in agents' types and actions, conditional on linear
comparisons of agents' actions. We show that this class includes many
well-known games including a variety of auctions and a novel allocation game.
In some cases, the best-response algorithm can be iterated to compute
Bayes-Nash equilibria. We demonstrate the efficiency of our approach on
existing and new games.
",michael wellman,,2012.0,,arXiv,Reeves2012,True,,arXiv,Not available,"Computing Best-Response Strategies in Infinite Games of Incomplete
Information",067fb14ef316953fb2525cc774bd6388,http://arxiv.org/abs/1207.4171v1
16099," Escalation is the fact that in a game (for instance an auction), the agents
play forever. It is not necessary to consider complex examples to establish its
rationality. In particular, the $0,1$-game is an extremely simple infinite game
in which escalation arises naturally and rationally. In some sense, it can be
considered as the paradigm of escalation. Through an example of economic games,
we show the benefit economics can take of coinduction.
",pierre lescanne,,2013.0,,"In CALCO 2013 - 5th Conference on Algebra and Coalgebra in
Computer Science, CALCO 2013, Warsaw, Poland (2013)",Lescanne2013,True,,arXiv,Not available,A simple case of rationality of escalation,f5031be006a4e4f9f06ef0bb27e7aea2,http://arxiv.org/abs/1306.2284v1
16100," Some important classical mechanisms considered in Microeconomics and Game
Theory require the solution of a difficult optimization problem. This is true
of mechanisms for combinatorial auctions, which have in recent years assumed
practical importance, and in particular of the gold standard for combinatorial
auctions, the Generalized Vickrey Auction (GVA). Traditional analysis of these
mechanisms - in particular, their truth revelation properties - assumes that
the optimization problems are solved precisely. In reality, these optimization
problems can usually be solved only in an approximate fashion. We investigate
the impact on such mechanisms of replacing exact solutions by approximate ones.
Specifically, we look at a particular greedy optimization method. We show that
the GVA payment scheme does not provide for a truth revealing mechanism. We
introduce another scheme that does guarantee truthfulness for a restricted
class of players. We demonstrate the latter property by identifying natural
properties for combinatorial auctions and showing that, for our restricted
class of players, they imply that truthful strategies are dominant. Those
properties have applicability beyond the specific auction studied.
",daniel lehmann,,2002.0,,"Journal of the ACM Vol. 49, No. 5, September 2002, pp. 577-602",Lehmann2002,True,,arXiv,Not available,Truth Revelation in Approximately Efficient Combinatorial Auctions,e919dc771a244231a5e939befd5f4e38,http://arxiv.org/abs/cs/0202017v1
16101," Some important classical mechanisms considered in Microeconomics and Game
Theory require the solution of a difficult optimization problem. This is true
of mechanisms for combinatorial auctions, which have in recent years assumed
practical importance, and in particular of the gold standard for combinatorial
auctions, the Generalized Vickrey Auction (GVA). Traditional analysis of these
mechanisms - in particular, their truth revelation properties - assumes that
the optimization problems are solved precisely. In reality, these optimization
problems can usually be solved only in an approximate fashion. We investigate
the impact on such mechanisms of replacing exact solutions by approximate ones.
Specifically, we look at a particular greedy optimization method. We show that
the GVA payment scheme does not provide for a truth revealing mechanism. We
introduce another scheme that does guarantee truthfulness for a restricted
class of players. We demonstrate the latter property by identifying natural
properties for combinatorial auctions and showing that, for our restricted
class of players, they imply that truthful strategies are dominant. Those
properties have applicability beyond the specific auction studied.
",liadan o'callaghan,,2002.0,,"Journal of the ACM Vol. 49, No. 5, September 2002, pp. 577-602",Lehmann2002,True,,arXiv,Not available,Truth Revelation in Approximately Efficient Combinatorial Auctions,e919dc771a244231a5e939befd5f4e38,http://arxiv.org/abs/cs/0202017v1
16102," Some important classical mechanisms considered in Microeconomics and Game
Theory require the solution of a difficult optimization problem. This is true
of mechanisms for combinatorial auctions, which have in recent years assumed
practical importance, and in particular of the gold standard for combinatorial
auctions, the Generalized Vickrey Auction (GVA). Traditional analysis of these
mechanisms - in particular, their truth revelation properties - assumes that
the optimization problems are solved precisely. In reality, these optimization
problems can usually be solved only in an approximate fashion. We investigate
the impact on such mechanisms of replacing exact solutions by approximate ones.
Specifically, we look at a particular greedy optimization method. We show that
the GVA payment scheme does not provide for a truth revealing mechanism. We
introduce another scheme that does guarantee truthfulness for a restricted
class of players. We demonstrate the latter property by identifying natural
properties for combinatorial auctions and showing that, for our restricted
class of players, they imply that truthful strategies are dominant. Those
properties have applicability beyond the specific auction studied.
",yoav shoham,,2002.0,,"Journal of the ACM Vol. 49, No. 5, September 2002, pp. 577-602",Lehmann2002,True,,arXiv,Not available,Truth Revelation in Approximately Efficient Combinatorial Auctions,e919dc771a244231a5e939befd5f4e38,http://arxiv.org/abs/cs/0202017v1
16106," We design approximate weakly group strategy-proof mechanisms for resource
reallocation problems using Milgrom and Segal's deferred acceptance auction
framework: the radio spectrum and network bandwidth reallocation problems in
the procurement auction setting and the cost minimization problem with set
cover constraints in the selling auction setting. Our deferred acceptance
auctions are derived from simple greedy algorithms for the underlying
optimization problems and guarantee approximately optimal social welfare (cost)
of the agents retaining their rights (contracts). In the reallocation problems,
we design procurement auctions to purchase agents' broadcast/access rights to
free up some of the resources such that the unpurchased rights can still be
exercised with respect to the remaining resources. In the cost minimization
problem, we design a selling auction to sell early termination rights to agents
with existing contracts such that some minimal constraints are still satisfied
with remaining contracts. In these problems, while the ""allocated"" agents
transact, exchanging rights and payments, the objective and feasibility
constraints are on the ""rejected"" agents.
",anthony kim,,2015.0,,arXiv,Kim2015,True,,arXiv,Not available,"Welfare Maximization with Deferred Acceptance Auctions in Reallocation
Problems",7aead1c2c81a7153c1e1ec9bab0aa4b5,http://arxiv.org/abs/1507.01353v3
16107," Simple games cover voting systems in which a single alternative, such as a
bill or an amendment, is pitted against the status quo. A simple game or a
yes-no voting system is a set of rules that specifies exactly which collections
of ``yea'' votes yield passage of the issue at hand. A collection of ``yea''
voters forms a winning coalition.
We are interested in performing a complexity analysis of problems on such
games depending on the game representation. We consider four natural explicit
representations: winning, losing, minimal winning, and maximal losing. We
first analyze the computational complexity of obtaining a particular
representation of a simple game from a different one. We show that in some
cases this transformation can be done in polynomial time while the others require
exponential time. The second question is classifying the complexity for testing
whether a game is simple or weighted. We show that for the four types of
representation both problems can be solved in polynomial time. Finally, we
provide results on the complexity of testing whether a simple game or a
weighted game is of a special type. In this way, we analyze strongness,
properness, decisiveness and homogeneity, which are desirable properties to be
fulfilled for a simple game.
",xavier molinero,,2008.0,,arXiv,Freixas2008,True,,arXiv,Not available,The Complexity of Testing Properties of Simple Games,c0de48daa862d8912666b4bc38f0c827,http://arxiv.org/abs/0803.0404v1
16108," In this paper, we study online double auctions, where multiple sellers and
multiple buyers arrive and depart dynamically to exchange one commodity. We
show that there is no deterministic online double auction that is truthful and
competitive for maximising social welfare in an adversarial model. However,
given the prior information that sellers are patient and the demand is not more
than the supply, a deterministic and truthful greedy mechanism is actually
2-competitive, i.e. it guarantees that the social welfare of its allocation is
at least half of the optimal one achievable offline. Moreover, if the number of
incoming buyers is predictable, we demonstrate that an online double auction
can be reduced to an online one-sided auction, and the truthfulness and
competitiveness of the reduced online double auction follow from those of the online
one-sided auction. Notably, by using the reduction, we find a truthful
mechanism that is almost 1-competitive, when buyers arrive randomly. Finally,
we argue that these mechanisms also have a promising applicability in more
general settings without assuming that sellers are patient, by decomposing a
market into multiple sub-markets.
",dengji zhao,,2013.0,,arXiv,Zhao2013,True,,arXiv,Not available,Decomposing Truthful and Competitive Online Double Auctions,152bfc45e8a40537cc0bc683a0781adf,http://arxiv.org/abs/1311.0198v1
16109," In this paper, we study online double auctions, where multiple sellers and
multiple buyers arrive and depart dynamically to exchange one commodity. We
show that there is no deterministic online double auction that is truthful and
competitive for maximising social welfare in an adversarial model. However,
given the prior information that sellers are patient and the demand is not more
than the supply, a deterministic and truthful greedy mechanism is actually
2-competitive, i.e. it guarantees that the social welfare of its allocation is
at least half of the optimal one achievable offline. Moreover, if the number of
incoming buyers is predictable, we demonstrate that an online double auction
can be reduced to an online one-sided auction, and the truthfulness and
competitiveness of the reduced online double auction follow from those of the online
one-sided auction. Notably, by using the reduction, we find a truthful
mechanism that is almost 1-competitive, when buyers arrive randomly. Finally,
we argue that these mechanisms also have a promising applicability in more
general settings without assuming that sellers are patient, by decomposing a
market into multiple sub-markets.
",dongmo zhang,,2013.0,,arXiv,Zhao2013,True,,arXiv,Not available,Decomposing Truthful and Competitive Online Double Auctions,152bfc45e8a40537cc0bc683a0781adf,http://arxiv.org/abs/1311.0198v1
16110," In this paper, we study online double auctions, where multiple sellers and
multiple buyers arrive and depart dynamically to exchange one commodity. We
show that there is no deterministic online double auction that is truthful and
competitive for maximising social welfare in an adversarial model. However,
given the prior information that sellers are patient and the demand is not more
than the supply, a deterministic and truthful greedy mechanism is actually
2-competitive, i.e. it guarantees that the social welfare of its allocation is
at least half of the optimal one achievable offline. Moreover, if the number of
incoming buyers is predictable, we demonstrate that an online double auction
can be reduced to an online one-sided auction, and the truthfulness and
competitiveness of the reduced online double auction follow from those of the online
one-sided auction. Notably, by using the reduction, we find a truthful
mechanism that is almost 1-competitive, when buyers arrive randomly. Finally,
we argue that these mechanisms also have a promising applicability in more
general settings without assuming that sellers are patient, by decomposing a
market into multiple sub-markets.
",laurent perrussel,,2013.0,,arXiv,Zhao2013,True,,arXiv,Not available,Decomposing Truthful and Competitive Online Double Auctions,152bfc45e8a40537cc0bc683a0781adf,http://arxiv.org/abs/1311.0198v1
16111," We consider the problem of an auctioneer who faces the task of selling a good
(drawn from a known distribution) to a set of buyers, when the auctioneer does
not have the capacity to describe to the buyers the exact identity of the good
that he is selling. Instead, he must come up with a constrained signaling
scheme: a (non-injective) mapping from goods to signals that satisfies the
constraints of his setting. For example, the auctioneer may be able to
communicate only a bounded-length message for each good, or he might be legally
constrained in how he can advertise the item being sold. Each candidate
signaling scheme induces an incomplete-information game among the buyers, and
the goal of the auctioneer is to choose the signaling scheme and accompanying
auction format that optimizes welfare. In this paper, we use techniques from
submodular function maximization and no-regret learning to give algorithms for
computing constrained signaling schemes for a variety of constrained signaling
problems.
",shaddin dughmi,,2013.0,,arXiv,Dughmi2013,True,,arXiv,Not available,Constrained Signaling in Auction Design,e8607e2212b03edd029793dafe3bd12f,http://arxiv.org/abs/1302.4713v2
16112," We consider the problem of an auctioneer who faces the task of selling a good
(drawn from a known distribution) to a set of buyers, when the auctioneer does
not have the capacity to describe to the buyers the exact identity of the good
that he is selling. Instead, he must come up with a constrained signaling
scheme: a (non-injective) mapping from goods to signals that satisfies the
constraints of his setting. For example, the auctioneer may be able to
communicate only a bounded-length message for each good, or he might be legally
constrained in how he can advertise the item being sold. Each candidate
signaling scheme induces an incomplete-information game among the buyers, and
the goal of the auctioneer is to choose the signaling scheme and accompanying
auction format that optimizes welfare. In this paper, we use techniques from
submodular function maximization and no-regret learning to give algorithms for
computing constrained signaling schemes for a variety of constrained signaling
problems.
",nicole immorlica,,2013.0,,arXiv,Dughmi2013,True,,arXiv,Not available,Constrained Signaling in Auction Design,e8607e2212b03edd029793dafe3bd12f,http://arxiv.org/abs/1302.4713v2
16113," We consider the problem of an auctioneer who faces the task of selling a good
(drawn from a known distribution) to a set of buyers, when the auctioneer does
not have the capacity to describe to the buyers the exact identity of the good
that he is selling. Instead, he must come up with a constrained signaling
scheme: a (non-injective) mapping from goods to signals that satisfies the
constraints of his setting. For example, the auctioneer may be able to
communicate only a bounded-length message for each good, or he might be legally
constrained in how he can advertise the item being sold. Each candidate
signaling scheme induces an incomplete-information game among the buyers, and
the goal of the auctioneer is to choose the signaling scheme and accompanying
auction format that optimizes welfare. In this paper, we use techniques from
submodular function maximization and no-regret learning to give algorithms for
computing constrained signaling schemes for a variety of constrained signaling
problems.
",aaron roth,,2013.0,,arXiv,Dughmi2013,True,,arXiv,Not available,Constrained Signaling in Auction Design,e8607e2212b03edd029793dafe3bd12f,http://arxiv.org/abs/1302.4713v2
16114," We consider Gillette's two-person zero-sum stochastic games with perfect
information. For each $k \in \mathbb{Z}_+$ we introduce an effective reward function,
called $k$-total. For $k = 0$ and $1$ this function is known as {\it mean
payoff} and {\it total reward}, respectively. We restrict our attention to the
deterministic case. For all $k$, we prove the existence of a saddle point which
can be realized by uniformly optimal pure stationary strategies. We also
demonstrate that $k$-total reward games can be embedded into $(k+1)$-total
reward games.
",endre boros,,2014.0,,arXiv,Boros2014,True,,arXiv,Not available,A Nested Family of $k$-total Effective Rewards for Positional Games,e1aa2c92d6d2fa063f12ab9838deddbe,http://arxiv.org/abs/1412.6072v2
16115," We consider Gillette's two-person zero-sum stochastic games with perfect
information. For each $k \in \mathbb{Z}_+$ we introduce an effective reward function,
called $k$-total. For $k = 0$ and $1$ this function is known as {\it mean
payoff} and {\it total reward}, respectively. We restrict our attention to the
deterministic case. For all $k$, we prove the existence of a saddle point which
can be realized by uniformly optimal pure stationary strategies. We also
demonstrate that $k$-total reward games can be embedded into $(k+1)$-total
reward games.
",khaled elbassioni,,2014.0,,arXiv,Boros2014,True,,arXiv,Not available,A Nested Family of $k$-total Effective Rewards for Positional Games,e1aa2c92d6d2fa063f12ab9838deddbe,http://arxiv.org/abs/1412.6072v2
16116," We consider Gillette's two-person zero-sum stochastic games with perfect
information. For each $k \in \mathbb{Z}_+$ we introduce an effective reward function,
called $k$-total. For $k = 0$ and $1$ this function is known as {\it mean
payoff} and {\it total reward}, respectively. We restrict our attention to the
deterministic case. For all $k$, we prove the existence of a saddle point which
can be realized by uniformly optimal pure stationary strategies. We also
demonstrate that $k$-total reward games can be embedded into $(k+1)$-total
reward games.
",vladimir gurvich,,2014.0,,arXiv,Boros2014,True,,arXiv,Not available,A Nested Family of $k$-total Effective Rewards for Positional Games,e1aa2c92d6d2fa063f12ab9838deddbe,http://arxiv.org/abs/1412.6072v2
16117," We consider Gillette's two-person zero-sum stochastic games with perfect
information. For each $k \in \mathbb{Z}_+$ we introduce an effective reward function,
called $k$-total. For $k = 0$ and $1$ this function is known as {\it mean
payoff} and {\it total reward}, respectively. We restrict our attention to the
deterministic case. For all $k$, we prove the existence of a saddle point which
can be realized by uniformly optimal pure stationary strategies. We also
demonstrate that $k$-total reward games can be embedded into $(k+1)$-total
reward games.
",kazuhisa makino,,2014.0,,arXiv,Boros2014,True,,arXiv,Not available,A Nested Family of $k$-total Effective Rewards for Positional Games,e1aa2c92d6d2fa063f12ab9838deddbe,http://arxiv.org/abs/1412.6072v2
16118," In this work we apply methods from cryptography to enable any number of
mutually distrusting players to implement broad classes of mediated equilibria
of strategic games without the need for trusted mediation.
Our implementation makes use of a (standard) pre-play ""cheap talk"" phase, in
which players engage in free and non-binding communication prior to playing in
the original game. In our cheap talk phase, the players execute a secure
multi-party computation protocol to sample an action profile from an
equilibrium of a ""cryptographically blinded"" version of the original game, in
which actions are encrypted. The essence of our approach is to exploit the
power of encryption to selectively restrict the information available to
players about sampled action profiles, such that these desirable equilibria can
be stably achieved. In contrast to previous applications of cryptography to
game theory, this work is the first to employ the paradigm of using encryption
to allow players to benefit from hiding information \emph{from themselves},
rather than from others; and we stress that rational players would
\emph{choose} to hide the information from themselves.
",sunoo park,,2014.0,,arXiv,Hubáček2014,True,,arXiv,Not available,"Cryptographically Blinded Games: Leveraging Players' Limitations for
Equilibria and Profit",9797bd50ddf014cc1222c111574dc773,http://arxiv.org/abs/1411.3747v1
16119," Simple games cover voting systems in which a single alternative, such as a
bill or an amendment, is pitted against the status quo. A simple game or a
yes-no voting system is a set of rules that specifies exactly which collections
of ``yea'' votes yield passage of the issue at hand. A collection of ``yea''
voters forms a winning coalition.
We are interested in performing a complexity analysis of problems on such
games depending on the game representation. We consider four natural explicit
representations: winning, losing, minimal winning, and maximal losing. We
first analyze the computational complexity of obtaining a particular
representation of a simple game from a different one. We show that in some
cases this transformation can be done in polynomial time, while the others
require exponential time. The second question is classifying the complexity of
testing whether a game is simple or weighted. We show that for the four types
of representation both problems can be solved in polynomial time. Finally, we
provide results on the complexity of testing whether a simple game or a
weighted game is of a special type. In this way, we analyze strongness,
properness, decisiveness and homogeneity, which are desirable properties to be
fulfilled by a simple game.
",martin olsen,,2008.0,,arXiv,Freixas2008,True,,arXiv,Not available,The Complexity of Testing Properties of Simple Games,c0de48daa862d8912666b4bc38f0c827,http://arxiv.org/abs/0803.0404v1
16120," We establish a network formation game for the Internet's Autonomous System
(AS) interconnection topology. The game includes different types of players,
accounting for the heterogeneity of ASs in the Internet. We incorporate
reliability considerations in the player's utility function, and analyze static
properties of the game as well as its dynamic evolution. We provide dynamic
analysis of its topological quantities, and explain the prevalence of some
""network motifs"" in the Internet graph. We assess our predictions with
real-world data.
",eli meirom,,2014.0,,arXiv,Meirom2014,True,,arXiv,Not available,Formation Games of Reliable Networks,4dde21086bd208b15935383f72297ab1,http://arxiv.org/abs/1412.8501v1
16121," We establish a network formation game for the Internet's Autonomous System
(AS) interconnection topology. The game includes different types of players,
accounting for the heterogeneity of ASs in the Internet. We incorporate
reliability considerations in the player's utility function, and analyze static
properties of the game as well as its dynamic evolution. We provide dynamic
analysis of its topological quantities, and explain the prevalence of some
""network motifs"" in the Internet graph. We assess our predictions with
real-world data.
",shie mannor,,2014.0,,arXiv,Meirom2014,True,,arXiv,Not available,Formation Games of Reliable Networks,4dde21086bd208b15935383f72297ab1,http://arxiv.org/abs/1412.8501v1
16122," We establish a network formation game for the Internet's Autonomous System
(AS) interconnection topology. The game includes different types of players,
accounting for the heterogeneity of ASs in the Internet. We incorporate
reliability considerations in the player's utility function, and analyze static
properties of the game as well as its dynamic evolution. We provide dynamic
analysis of its topological quantities, and explain the prevalence of some
""network motifs"" in the Internet graph. We assess our predictions with
real-world data.
",ariel orda,,2014.0,,arXiv,Meirom2014,True,,arXiv,Not available,Formation Games of Reliable Networks,4dde21086bd208b15935383f72297ab1,http://arxiv.org/abs/1412.8501v1
16123," This paper studies strategic decentralization in binary choice composite
network congestion games. A player decentralizes if she lets some autonomous
agents decide independently how to send the different parts of her stock from the
origin to the destination. This paper shows that, with convex, strictly
increasing and differentiable arc cost functions, an atomic splittable player
always has an optimal unilateral decentralization strategy. Besides, unilateral
decentralization gives her the same advantage as being the leader in a
Stackelberg congestion game. Finally, unilateral decentralization of an atomic
player has a negative impact on the social cost and on the costs of the other
players at the equilibrium of the congestion game.
",cheng wan,,2015.0,,arXiv,Wan2015,True,,arXiv,Not available,Strategic decentralization in binary choice composite congestion games,831c8f6ae9b32b72ab69a56bac04990b,http://arxiv.org/abs/1506.03479v2
16124," Among the strategic choices made by today's economic actors are choices about
algorithms and computational resources. Different access to computational
resources may result in a kind of economic asymmetry analogous to information
asymmetry. In order to represent strategic computational choices within a game
theoretic framework, we propose a new game specification, Strategic Bayesian
Networks (SBN). In an SBN, random variables are represented as nodes in a
graph, with edges indicating probabilistic dependence. For some nodes, players
can choose conditional probability distributions as a strategic choice. Using
SBN, we present two games that demonstrate computational asymmetry. These games
are symmetric except for the computational limitations of the actors. We show
that the better computationally endowed player receives greater payoff.
",sebastian benthall,,2012.0,,arXiv,Benthall2012,True,,arXiv,Not available,Computational Asymmetry in Strategic Bayesian Networks,1ad00499a3527fb1e5d221f587521593,http://arxiv.org/abs/1206.2878v1
16125," Among the strategic choices made by today's economic actors are choices about
algorithms and computational resources. Different access to computational
resources may result in a kind of economic asymmetry analogous to information
asymmetry. In order to represent strategic computational choices within a game
theoretic framework, we propose a new game specification, Strategic Bayesian
Networks (SBN). In an SBN, random variables are represented as nodes in a
graph, with edges indicating probabilistic dependence. For some nodes, players
can choose conditional probability distributions as a strategic choice. Using
SBN, we present two games that demonstrate computational asymmetry. These games
are symmetric except for the computational limitations of the actors. We show
that the better computationally endowed player receives greater payoff.
",john chuang,,2012.0,,arXiv,Benthall2012,True,,arXiv,Not available,Computational Asymmetry in Strategic Bayesian Networks,1ad00499a3527fb1e5d221f587521593,http://arxiv.org/abs/1206.2878v1
16126," We present an algorithm that identifies the reasoning patterns of agents in a
game, by iteratively examining the graph structure of its Multi-Agent Influence
Diagram (MAID) representation. If the decision of an agent participates in no
reasoning patterns, then we can effectively ignore that decision for the
purpose of calculating a Nash equilibrium for the game. In some cases, this can
lead to exponential time savings in the process of equilibrium calculation.
Moreover, our algorithm can be used to enumerate the reasoning patterns in a
game, which can be useful for constructing more effective computerized agents
interacting with humans.
",dimitrios antos,,2012.0,,arXiv,Antos2012,True,,arXiv,Not available,Identifying reasoning patterns in games,b00ef19c87b5dc2b983428b79869139b,http://arxiv.org/abs/1206.3235v1
16127," We present an algorithm that identifies the reasoning patterns of agents in a
game, by iteratively examining the graph structure of its Multi-Agent Influence
Diagram (MAID) representation. If the decision of an agent participates in no
reasoning patterns, then we can effectively ignore that decision for the
purpose of calculating a Nash equilibrium for the game. In some cases, this can
lead to exponential time savings in the process of equilibrium calculation.
Moreover, our algorithm can be used to enumerate the reasoning patterns in a
game, which can be useful for constructing more effective computerized agents
interacting with humans.
",avi pfeffer,,2012.0,,arXiv,Antos2012,True,,arXiv,Not available,Identifying reasoning patterns in games,b00ef19c87b5dc2b983428b79869139b,http://arxiv.org/abs/1206.3235v1
16128," We introduce the novel notion of winning cores in parity games and develop a
deterministic polynomial-time under-approximation algorithm for solving parity
games based on winning core approximation. Underlying this algorithm are a
number of properties about winning cores which are interesting in their own
right. In particular, we show that the winning core and the winning region for
a player in a parity game are either both empty or both non-empty. Moreover,
the winning core
contains all fatal attractors but is not necessarily a dominion itself.
Experimental results are very positive both with respect to quality of
approximation and running time. It outperforms existing state-of-the-art
algorithms significantly on most benchmarks.
",steen vester,,2016.0,,arXiv,Vester2016,True,,arXiv,Not available,Winning Cores in Parity Games,d604b53cbd8b722b7f38ca02c09537f9,http://arxiv.org/abs/1602.01963v1
16129," A mean-field-type game is a game in which the instantaneous payoffs and/or
the state dynamics functions involve not only the state and the action profile
but also the joint distributions of state-action pairs. This article presents
some engineering applications of mean-field-type games including road traffic
networks, multi-level building evacuation, millimeter wave wireless
communications, distributed power networks, virus spread over networks, virtual
machine resource management in cloud networks, synchronization of oscillators,
energy-efficient buildings, online meeting and mobile crowdsensing.
",boualem djehiche,,2016.0,,arXiv,Djehiche2016,True,,arXiv,Not available,Mean-Field-Type Games in Engineering,4cbb199cef052b3ed640424f5de34f7b,http://arxiv.org/abs/1605.03281v3
16130," Simple games cover voting systems in which a single alternative, such as a
bill or an amendment, is pitted against the status quo. A simple game or a
yes-no voting system is a set of rules that specifies exactly which collections
of ``yea'' votes yield passage of the issue at hand. A collection of ``yea''
voters forms a winning coalition.
We are interested in performing a complexity analysis of problems on such
games depending on the game representation. We consider four natural explicit
representations: winning, losing, minimal winning, and maximal losing. We
first analyze the computational complexity of obtaining a particular
representation of a simple game from a different one. We show that in some
cases this transformation can be done in polynomial time, while the others
require exponential time. The second question is classifying the complexity of
testing whether a game is simple or weighted. We show that for the four types
of representation both problems can be solved in polynomial time. Finally, we
provide results on the complexity of testing whether a simple game or a
weighted game is of a special type. In this way, we analyze strongness,
properness, decisiveness and homogeneity, which are desirable properties to be
fulfilled by a simple game.
",maria serna,,2008.0,,arXiv,Freixas2008,True,,arXiv,Not available,The Complexity of Testing Properties of Simple Games,c0de48daa862d8912666b4bc38f0c827,http://arxiv.org/abs/0803.0404v1
16131," A mean-field-type game is a game in which the instantaneous payoffs and/or
the state dynamics functions involve not only the state and the action profile
but also the joint distributions of state-action pairs. This article presents
some engineering applications of mean-field-type games including road traffic
networks, multi-level building evacuation, millimeter wave wireless
communications, distributed power networks, virus spread over networks, virtual
machine resource management in cloud networks, synchronization of oscillators,
energy-efficient buildings, online meeting and mobile crowdsensing.
",alain tcheukam,,2016.0,,arXiv,Djehiche2016,True,,arXiv,Not available,Mean-Field-Type Games in Engineering,4cbb199cef052b3ed640424f5de34f7b,http://arxiv.org/abs/1605.03281v3
16132," A mean-field-type game is a game in which the instantaneous payoffs and/or
the state dynamics functions involve not only the state and the action profile
but also the joint distributions of state-action pairs. This article presents
some engineering applications of mean-field-type games including road traffic
networks, multi-level building evacuation, millimeter wave wireless
communications, distributed power networks, virus spread over networks, virtual
machine resource management in cloud networks, synchronization of oscillators,
energy-efficient buildings, online meeting and mobile crowdsensing.
",hamidou tembine,,2016.0,,arXiv,Djehiche2016,True,,arXiv,Not available,Mean-Field-Type Games in Engineering,4cbb199cef052b3ed640424f5de34f7b,http://arxiv.org/abs/1605.03281v3
16133," We propose a generic strategic network resource sharing game between a set of
players representing operators. The players negotiate which sets of players
share given resources, serving users with varying sensitivity to interference.
We prove that the proposed game has a Nash equilibrium, to which a greedily
played game converges. Furthermore, simulation results show that, when applied
to inter-operator spectrum sharing in a small-cell indoor office environment, the
convergence is fast and there is a significant performance improvement for the
operators when compared to the default resource usage configuration.
",sofonias hailu,,2016.0,,arXiv,Hailu2016,True,,arXiv,Not available,Network Resource Sharing Games with Instantaneous Reciprocity,0d158d086f3a80187e5c4c618793264a,http://arxiv.org/abs/1605.09194v1
16134," We propose a generic strategic network resource sharing game between a set of
players representing operators. The players negotiate which sets of players
share given resources, serving users with varying sensitivity to interference.
We prove that the proposed game has a Nash equilibrium, to which a greedily
played game converges. Furthermore, simulation results show that, when applied
to inter-operator spectrum sharing in a small-cell indoor office environment, the
convergence is fast and there is a significant performance improvement for the
operators when compared to the default resource usage configuration.
",ragnar freij-hollanti,,2016.0,,arXiv,Hailu2016,True,,arXiv,Not available,Network Resource Sharing Games with Instantaneous Reciprocity,0d158d086f3a80187e5c4c618793264a,http://arxiv.org/abs/1605.09194v1
16135," We propose a generic strategic network resource sharing game between a set of
players representing operators. The players negotiate which sets of players
share given resources, serving users with varying sensitivity to interference.
We prove that the proposed game has a Nash equilibrium, to which a greedily
played game converges. Furthermore, simulation results show that, when applied
to inter-operator spectrum sharing in a small-cell indoor office environment, the
convergence is fast and there is a significant performance improvement for the
operators when compared to the default resource usage configuration.
",alexis dowhuszko,,2016.0,,arXiv,Hailu2016,True,,arXiv,Not available,Network Resource Sharing Games with Instantaneous Reciprocity,0d158d086f3a80187e5c4c618793264a,http://arxiv.org/abs/1605.09194v1
16136," We propose a generic strategic network resource sharing game between a set of
players representing operators. The players negotiate which sets of players
share given resources, serving users with varying sensitivity to interference.
We prove that the proposed game has a Nash equilibrium, to which a greedily
played game converges. Furthermore, simulation results show that, when applied
to inter-operator spectrum sharing in a small-cell indoor office environment, the
convergence is fast and there is a significant performance improvement for the
operators when compared to the default resource usage configuration.
",olav tirkkonen,,2016.0,,arXiv,Hailu2016,True,,arXiv,Not available,Network Resource Sharing Games with Instantaneous Reciprocity,0d158d086f3a80187e5c4c618793264a,http://arxiv.org/abs/1605.09194v1
16137," EcoTRADE is a multi player network game of a virtual biodiversity credit
market. Each player controls the land use of a certain number of parcels on a
virtual landscape. The biodiversity credits of a particular parcel depend on
neighboring parcels, which may be owned by other players. The game can be used
to study the strategies of players in experiments or classroom games and also
as a communication tool for stakeholders participating in credit markets that
include spatially interdependent credits.
",florian hartig,,2008.0,10.1016/j.envsoft.2009.01.003,"Environmental Modelling & Software, 2010, 25, 1479-1480",Hartig2008,True,,arXiv,Not available,"EcoTRADE - a multi player network game of a tradable permit market for
biodiversity credits",24700e05a940c50b4d23ee0c48ca2d9f,http://arxiv.org/abs/0812.0956v2
16138," EcoTRADE is a multi player network game of a virtual biodiversity credit
market. Each player controls the land use of a certain number of parcels on a
virtual landscape. The biodiversity credits of a particular parcel depend on
neighboring parcels, which may be owned by other players. The game can be used
to study the strategies of players in experiments or classroom games and also
as a communication tool for stakeholders participating in credit markets that
include spatially interdependent credits.
",martin horn,,2008.0,10.1016/j.envsoft.2009.01.003,"Environmental Modelling & Software, 2010, 25, 1479-1480",Hartig2008,True,,arXiv,Not available,"EcoTRADE - a multi player network game of a tradable permit market for
biodiversity credits",24700e05a940c50b4d23ee0c48ca2d9f,http://arxiv.org/abs/0812.0956v2
16139," EcoTRADE is a multi player network game of a virtual biodiversity credit
market. Each player controls the land use of a certain number of parcels on a
virtual landscape. The biodiversity credits of a particular parcel depend on
neighboring parcels, which may be owned by other players. The game can be used
to study the strategies of players in experiments or classroom games and also
as a communication tool for stakeholders participating in credit markets that
include spatially interdependent credits.
",martin drechsler,,2008.0,10.1016/j.envsoft.2009.01.003,"Environmental Modelling & Software, 2010, 25, 1479-1480",Hartig2008,True,,arXiv,Not available,"EcoTRADE - a multi player network game of a tradable permit market for
biodiversity credits",24700e05a940c50b4d23ee0c48ca2d9f,http://arxiv.org/abs/0812.0956v2
16140," In this note we provide a new proof for the results of Lipton et al. on the
existence of an approximate Nash equilibrium with logarithmic support size.
Besides its simplicity, the new proof leads to the following contributions:
1. For n-player games, we improve the bound on the size of the support of an
approximate Nash equilibrium.
2. We generalize the result of Daskalakis and Papadimitriou on small
probability games from the two-player case to the general n-player case.
3. We provide a logarithmic bound on the size of the support of an
approximate Nash equilibrium in the case of graphical games.
",yakov babichenko,,2013.0,,arXiv,Babichenko2013,True,,arXiv,Not available,Small Support Equilibria in Large Games,b48f2380d389837c7436468f6cb8e04a,http://arxiv.org/abs/1305.2432v2
16141," Over the years, numerous experiments have been accumulated to show that
cooperation is not casual and depends on the payoffs of the game. These
findings suggest that humans have attitude to cooperation by nature and the
same person may act more or less cooperatively depending on the particular
payoffs. In other words, people do not act a priori as single agents, but they
forecast how the game would be played if they formed coalitions and then they
play according to their best forecast. In this paper we formalize this idea and
we define a new solution concept for one-shot normal form games. We prove that
this \emph{cooperative equilibrium} exists for all finite games and it explains
a number of different experimental findings, such as (1) the rate of
cooperation in the Prisoner's dilemma depends on the cost-benefit ratio; (2)
the rate of cooperation in the Traveler's dilemma depends on the bonus/penalty;
the rate of cooperation in the Public Goods game depends on the per-capita
marginal return and on the number of players; (4) the rate of cooperation in
the Bertrand competition depends on the number of players; (5) players tend to
be fair in the bargaining problem; (6) players tend to be fair in the Ultimatum
game; (7) players tend to be altruist in the Dictator game; (8) offers in the
Ultimatum game are larger than offers in the Dictator game.
",valerio capraro,,2013.0,,arXiv,Capraro2013,True,,arXiv,Not available,A solution concept for games with altruism and cooperation,7aa265fc41292a3a0cb3af64b9183be3,http://arxiv.org/abs/1302.3988v4
16142," Within the private-values paradigm, we construct a tractable empirical model
of equilibrium behavior at first-price auctions when bidders' valuations are
potentially dependent, but not necessarily affiliated. We develop a test of
affiliation and apply our framework to data from low-price, sealed-bid auctions
held by the Department of Transportation in the State of Michigan to procure
road-resurfacing services: we do not reject the hypothesis of affiliation in
cost signals.
",luciano castro,,2011.0,10.1214/10-AOAS344,"Annals of Applied Statistics 2010, Vol. 4, No. 4, 2073-2098",Castro2011,True,,arXiv,Not available,"Testing affiliation in private-values models of first-price auctions
using grid distributions",d0e03e342c749edae2d00a6f8f045506,http://arxiv.org/abs/1101.1398v1
16143," Within the private-values paradigm, we construct a tractable empirical model
of equilibrium behavior at first-price auctions when bidders' valuations are
potentially dependent, but not necessarily affiliated. We develop a test of
affiliation and apply our framework to data from low-price, sealed-bid auctions
held by the Department of Transportation in the State of Michigan to procure
road-resurfacing services: we do not reject the hypothesis of affiliation in
cost signals.
",harry paarsch,,2011.0,10.1214/10-AOAS344,"Annals of Applied Statistics 2010, Vol. 4, No. 4, 2073-2098",Castro2011,True,,arXiv,Not available,"Testing affiliation in private-values models of first-price auctions
using grid distributions",d0e03e342c749edae2d00a6f8f045506,http://arxiv.org/abs/1101.1398v1
16144," Auctions are becoming an increasingly popular method for transacting
business, especially over the Internet. This article presents a general
approach to building autonomous bidding agents to bid in multiple simultaneous
auctions for interacting goods. A core component of our approach learns a model
of the empirical price dynamics based on past data and uses the model to
analytically calculate, to the greatest extent possible, optimal bids. We
introduce a new and general boosting-based algorithm for conditional density
estimation problems of this kind, i.e., supervised learning problems in which
the goal is to estimate the entire conditional distribution of the real-valued
label. This approach is fully implemented as ATTac-2001, a top-scoring agent in
the second Trading Agent Competition (TAC-01). We present experiments
demonstrating the effectiveness of our boosting-based price predictor relative
to several reasonable alternatives.
",j. csirik,,2011.0,10.1613/jair.1200,"Journal Of Artificial Intelligence Research, Volume 19, pages
209-242, 2003",Csirik2011,True,,arXiv,Not available,"Decision-Theoretic Bidding Based on Learned Density Models in
Simultaneous, Interacting Auctions",28a06efb16850d26b5ca3035e4819d0a,http://arxiv.org/abs/1106.5270v1
16145," Auctions are becoming an increasingly popular method for transacting
business, especially over the Internet. This article presents a general
approach to building autonomous bidding agents to bid in multiple simultaneous
auctions for interacting goods. A core component of our approach learns a model
of the empirical price dynamics based on past data and uses the model to
analytically calculate, to the greatest extent possible, optimal bids. We
introduce a new and general boosting-based algorithm for conditional density
estimation problems of this kind, i.e., supervised learning problems in which
the goal is to estimate the entire conditional distribution of the real-valued
label. This approach is fully implemented as ATTac-2001, a top-scoring agent in
the second Trading Agent Competition (TAC-01). We present experiments
demonstrating the effectiveness of our boosting-based price predictor relative
to several reasonable alternatives.
",m. littman,,2011.0,10.1613/jair.1200,"Journal Of Artificial Intelligence Research, Volume 19, pages
209-242, 2003",Csirik2011,True,,arXiv,Not available,"Decision-Theoretic Bidding Based on Learned Density Models in
Simultaneous, Interacting Auctions",28a06efb16850d26b5ca3035e4819d0a,http://arxiv.org/abs/1106.5270v1
16146," Auctions are becoming an increasingly popular method for transacting
business, especially over the Internet. This article presents a general
approach to building autonomous bidding agents to bid in multiple simultaneous
auctions for interacting goods. A core component of our approach learns a model
of the empirical price dynamics based on past data and uses the model to
analytically calculate, to the greatest extent possible, optimal bids. We
introduce a new and general boosting-based algorithm for conditional density
estimation problems of this kind, i.e., supervised learning problems in which
the goal is to estimate the entire conditional distribution of the real-valued
label. This approach is fully implemented as ATTac-2001, a top-scoring agent in
the second Trading Agent Competition (TAC-01). We present experiments
demonstrating the effectiveness of our boosting-based price predictor relative
to several reasonable alternatives.
",d. mcallester,,2011.0,10.1613/jair.1200,"Journal Of Artificial Intelligence Research, Volume 19, pages
209-242, 2003",Csirik2011,True,,arXiv,Not available,"Decision-Theoretic Bidding Based on Learned Density Models in
Simultaneous, Interacting Auctions",28a06efb16850d26b5ca3035e4819d0a,http://arxiv.org/abs/1106.5270v1
16147," Auctions are becoming an increasingly popular method for transacting
business, especially over the Internet. This article presents a general
approach to building autonomous bidding agents to bid in multiple simultaneous
auctions for interacting goods. A core component of our approach learns a model
of the empirical price dynamics based on past data and uses the model to
analytically calculate, to the greatest extent possible, optimal bids. We
introduce a new and general boosting-based algorithm for conditional density
estimation problems of this kind, i.e., supervised learning problems in which
the goal is to estimate the entire conditional distribution of the real-valued
label. This approach is fully implemented as ATTac-2001, a top-scoring agent in
the second Trading Agent Competition (TAC-01). We present experiments
demonstrating the effectiveness of our boosting-based price predictor relative
to several reasonable alternatives.
",r. schapire,,2011.0,10.1613/jair.1200,"Journal Of Artificial Intelligence Research, Volume 19, pages
209-242, 2003",Csirik2011,True,,arXiv,Not available,"Decision-Theoretic Bidding Based on Learned Density Models in
Simultaneous, Interacting Auctions",28a06efb16850d26b5ca3035e4819d0a,http://arxiv.org/abs/1106.5270v1
16148," Auctions are becoming an increasingly popular method for transacting
business, especially over the Internet. This article presents a general
approach to building autonomous bidding agents to bid in multiple simultaneous
auctions for interacting goods. A core component of our approach learns a model
of the empirical price dynamics based on past data and uses the model to
analytically calculate, to the greatest extent possible, optimal bids. We
introduce a new and general boosting-based algorithm for conditional density
estimation problems of this kind, i.e., supervised learning problems in which
the goal is to estimate the entire conditional distribution of the real-valued
label. This approach is fully implemented as ATTac-2001, a top-scoring agent in
the second Trading Agent Competition (TAC-01). We present experiments
demonstrating the effectiveness of our boosting-based price predictor relative
to several reasonable alternatives.
",p. stone,,2011.0,10.1613/jair.1200,"Journal Of Artificial Intelligence Research, Volume 19, pages
209-242, 2003",Csirik2011,True,,arXiv,Not available,"Decision-Theoretic Bidding Based on Learned Density Models in
Simultaneous, Interacting Auctions",28a06efb16850d26b5ca3035e4819d0a,http://arxiv.org/abs/1106.5270v1
16149," As computational agents are developed for increasingly complicated e-commerce
applications, the complexity of the decisions they face demands advances in
artificial intelligence techniques. For example, an agent representing a seller
in an auction should try to maximize the seller's profit by reasoning about a
variety of possibly uncertain pieces of information, such as the maximum prices
various buyers might be willing to pay, the possible prices being offered by
competing sellers, the rules by which the auction operates, the dynamic arrival
and matching of offers to buy and sell, and so on. A naive application of
multiagent reasoning techniques would require the seller's agent to explicitly
model all of the other agents through an extended time horizon, rendering the
problem intractable for many realistically-sized problems. We have instead
devised a new strategy that an agent can use to determine its bid price based
on a more tractable Markov chain model of the auction process. We have
experimentally identified the conditions under which our new strategy works
well, as well as how well it works in comparison to the optimal performance the
agent could have achieved had it known the future. Our results show that our
new strategy in general performs well, outperforming other tractable heuristic
strategies in a majority of experiments, and is particularly effective in a
'seller's market', where many buy offers are available.
",w. birmingham,,2011.0,10.1613/jair.1466,"Journal Of Artificial Intelligence Research, Volume 22, pages
175-214, 2004",Birmingham2011,True,,arXiv,Not available,"Use of Markov Chains to Design an Agent Bidding Strategy for Continuous
Double Auctions",a3abeb90085a52778958b95a30a94d80,http://arxiv.org/abs/1106.6022v1
16150," As computational agents are developed for increasingly complicated e-commerce
applications, the complexity of the decisions they face demands advances in
artificial intelligence techniques. For example, an agent representing a seller
in an auction should try to maximize the seller's profit by reasoning about a
variety of possibly uncertain pieces of information, such as the maximum prices
various buyers might be willing to pay, the possible prices being offered by
competing sellers, the rules by which the auction operates, the dynamic arrival
and matching of offers to buy and sell, and so on. A naive application of
multiagent reasoning techniques would require the seller's agent to explicitly
model all of the other agents through an extended time horizon, rendering the
problem intractable for many realistically-sized problems. We have instead
devised a new strategy that an agent can use to determine its bid price based
on a more tractable Markov chain model of the auction process. We have
experimentally identified the conditions under which our new strategy works
well, as well as how well it works in comparison to the optimal performance the
agent could have achieved had it known the future. Our results show that our
new strategy in general performs well, outperforming other tractable heuristic
strategies in a majority of experiments, and is particularly effective in a
'seller's market', where many buy offers are available.
",e. durfee,,2011.0,10.1613/jair.1466,"Journal Of Artificial Intelligence Research, Volume 22, pages
175-214, 2004",Birmingham2011,True,,arXiv,Not available,"Use of Markov Chains to Design an Agent Bidding Strategy for Continuous
Double Auctions",a3abeb90085a52778958b95a30a94d80,http://arxiv.org/abs/1106.6022v1
16151," As computational agents are developed for increasingly complicated e-commerce
applications, the complexity of the decisions they face demands advances in
artificial intelligence techniques. For example, an agent representing a seller
in an auction should try to maximize the seller's profit by reasoning about a
variety of possibly uncertain pieces of information, such as the maximum prices
various buyers might be willing to pay, the possible prices being offered by
competing sellers, the rules by which the auction operates, the dynamic arrival
and matching of offers to buy and sell, and so on. A naive application of
multiagent reasoning techniques would require the seller's agent to explicitly
model all of the other agents through an extended time horizon, rendering the
problem intractable for many realistically-sized problems. We have instead
devised a new strategy that an agent can use to determine its bid price based
on a more tractable Markov chain model of the auction process. We have
experimentally identified the conditions under which our new strategy works
well, as well as how well it works in comparison to the optimal performance the
agent could have achieved had it known the future. Our results show that our
new strategy in general performs well, outperforming other tractable heuristic
strategies in a majority of experiments, and is particularly effective in a
'seller's market', where many buy offers are available.
",s. park,,2011.0,10.1613/jair.1466,"Journal Of Artificial Intelligence Research, Volume 22, pages
175-214, 2004",Birmingham2011,True,,arXiv,Not available,"Use of Markov Chains to Design an Agent Bidding Strategy for Continuous
Double Auctions",a3abeb90085a52778958b95a30a94d80,http://arxiv.org/abs/1106.6022v1
16152," In two-player zero-sum games, if both players minimize their average external
regret, then the average of the strategy profiles converges to a Nash
equilibrium. For n-player general-sum games, however, theoretical guarantees
for regret minimization are less understood. Nonetheless, Counterfactual Regret
Minimization (CFR), a popular regret minimization algorithm for extensive-form
games, has generated winning three-player Texas Hold'em agents in the Annual
Computer Poker Competition (ACPC). In this paper, we provide the first set of
theoretical properties for regret minimization algorithms in non-zero-sum games
by proving that solutions eliminate iterative strict domination. We formally
define \emph{dominated actions} in extensive-form games, show that CFR avoids
iteratively strictly dominated actions and strategies, and demonstrate that
removing iteratively dominated actions is enough to win a mock tournament in a
small poker game. In addition, for two-player non-zero-sum games, we bound the
worst case performance and show that in practice, regret minimization can yield
strategies very close to equilibrium. Our theoretical advancements lead us to a
new modification of CFR for games with more than two players that is more
efficient and may be used to generate stronger strategies than previously
possible. Furthermore, we present a new three-player Texas Hold'em poker agent
that was built using CFR and a novel game decomposition method. Our new agent
wins the three-player events of the 2012 ACPC and defeats the winning
three-player programs from previous competitions while requiring less resources
to generate than the 2011 winner. Finally, we show that our CFR modification
computes a strategy of equal quality to our new agent in a quarter of the time
of standard CFR using half the memory.
",richard gibson,,2013.0,,arXiv,Gibson2013,True,,arXiv,Not available,"Regret Minimization in Non-Zero-Sum Games with Applications to Building
Champion Multiplayer Computer Poker Agents",e4ccce935b0511007766e93e11f08f04,http://arxiv.org/abs/1305.0034v1
16153," The success of online auctions has given buyers access to greater product
diversity with potentially lower prices. It has provided sellers with access to
large numbers of potential buyers and reduced transaction costs by enabling
auctions to take place without regard to time or place. However, it is difficult
for a participant to spend extended time with the system and closely monitor the
auction until winning the bid or until the auction closes. Determining which
items to bid on or what may be the recommended bid and when to bid it are
difficult questions to answer for online auction participants. The multi agent
auction advisor system JADE and TRACE, which is connected with decision support
system, gives the recommended bid to buyers for online auctions. The auction
advisor system relies on intelligent agents both for the retrieval of relevant
auction data and for the processing of that data to enable meaningful
recommendations, statistical reports and market prediction reports to be made to
auction participants.
",a. martin,,2011.0,10.4156/jcit.vol4.issue2.martin,"Journal of Convergence Information Technology Volume 4, Number 2,
June 2009, 154-163",Martin2011,True,,arXiv,Not available,"Multi Agent Communication System for Online Auction with Decision
Support System by JADE and TRACE",4ffffcdb43bccd2ae22bf88755a2b5f9,http://arxiv.org/abs/1109.1093v1
16154," The success of online auctions has given buyers access to greater product
diversity with potentially lower prices. It has provided sellers with access to
large numbers of potential buyers and reduced transaction costs by enabling
auctions to take place without regard to time or place. However, it is difficult
for a participant to spend extended time with the system and closely monitor the
auction until winning the bid or until the auction closes. Determining which
items to bid on or what may be the recommended bid and when to bid it are
difficult questions to answer for online auction participants. The multi agent
auction advisor system JADE and TRACE, which is connected with decision support
system, gives the recommended bid to buyers for online auctions. The auction
advisor system relies on intelligent agents both for the retrieval of relevant
auction data and for the processing of that data to enable meaningful
recommendations, statistical reports and market prediction reports to be made to
auction participants.
",t. lakshmi,,2011.0,10.4156/jcit.vol4.issue2.martin,"Journal of Convergence Information Technology Volume 4, Number 2,
June 2009, 154-163",Martin2011,True,,arXiv,Not available,"Multi Agent Communication System for Online Auction with Decision
Support System by JADE and TRACE",4ffffcdb43bccd2ae22bf88755a2b5f9,http://arxiv.org/abs/1109.1093v1
16155," The success of online auctions has given buyers access to greater product
diversity with potentially lower prices. It has provided sellers with access to
large numbers of potential buyers and reduced transaction costs by enabling
auctions to take place without regard to time or place. However, it is difficult
for a participant to spend extended time with the system and closely monitor the
auction until winning the bid or until the auction closes. Determining which
items to bid on or what may be the recommended bid and when to bid it are
difficult questions to answer for online auction participants. The multi agent
auction advisor system JADE and TRACE, which is connected with decision support
system, gives the recommended bid to buyers for online auctions. The auction
advisor system relies on intelligent agents both for the retrieval of relevant
auction data and for the processing of that data to enable meaningful
recommendations, statistical reports and market prediction reports to be made to
auction participants.
",j. madhusudanan,,2011.0,10.4156/jcit.vol4.issue2.martin,"Journal of Convergence Information Technology Volume 4, Number 2,
June 2009, 154-163",Martin2011,True,,arXiv,Not available,"Multi Agent Communication System for Online Auction with Decision
Support System by JADE and TRACE",4ffffcdb43bccd2ae22bf88755a2b5f9,http://arxiv.org/abs/1109.1093v1
16156," We introduce and treat rigorously a new multi-agent model of the continuous
double auction or in other words the order book (OB). It is designed to explain
collective behaviour of the market when new information affecting the market
arrives. The novel feature of the model is two additional slowly changing
parameters, the so-called sentiment functions. These sentiment functions
measure the conception of the fair price of two groups of investors, namely,
bulls and bears. Our model specifies differential equations for the time
evolution of sentiment functions and constitutes a nonlinear Markov process
which exhibits long term correlations. We explain the intuition behind
equations for sentiment functions and present numerical simulations which show
that the behaviour of our model is similar to the behaviour of the real market.
We also obtain a diffusion limit of the model, the Ornstein-Uhlenbeck type
process with variable volatility. The volatility is proportional to the
difference of opinions of bulls and bears about the fair price of a security.
The paper is complementary to our previous work where mathematical proofs are
presented.
",a. lykov,,2012.0,,arXiv,Lykov2012,True,,arXiv,Not available,"Investor's sentiment in multi-agent model of the continuous double
auction",3fd7641411c647e0b9ae192aefc12833,http://arxiv.org/abs/1208.3083v4
16157," We introduce and treat rigorously a new multi-agent model of the continuous
double auction or in other words the order book (OB). It is designed to explain
collective behaviour of the market when new information affecting the market
arrives. The novel feature of the model is two additional slowly changing
parameters, the so-called sentiment functions. These sentiment functions
measure the conception of the fair price of two groups of investors, namely,
bulls and bears. Our model specifies differential equations for the time
evolution of sentiment functions and constitutes a nonlinear Markov process
which exhibits long term correlations. We explain the intuition behind
equations for sentiment functions and present numerical simulations which show
that the behaviour of our model is similar to the behaviour of the real market.
We also obtain a diffusion limit of the model, the Ornstein-Uhlenbeck type
process with variable volatility. The volatility is proportional to the
difference of opinions of bulls and bears about the fair price of a security.
The paper is complementary to our previous work where mathematical proofs are
presented.
",s. muzychka,,2012.0,,arXiv,Lykov2012,True,,arXiv,Not available,"Investor's sentiment in multi-agent model of the continuous double
auction",3fd7641411c647e0b9ae192aefc12833,http://arxiv.org/abs/1208.3083v4
16158," We introduce and treat rigorously a new multi-agent model of the continuous
double auction or in other words the order book (OB). It is designed to explain
collective behaviour of the market when new information affecting the market
arrives. The novel feature of the model is two additional slowly changing
parameters, the so-called sentiment functions. These sentiment functions
measure the conception of the fair price of two groups of investors, namely,
bulls and bears. Our model specifies differential equations for the time
evolution of sentiment functions and constitutes a nonlinear Markov process
which exhibits long term correlations. We explain the intuition behind
equations for sentiment functions and present numerical simulations which show
that the behaviour of our model is similar to the behaviour of the real market.
We also obtain a diffusion limit of the model, the Ornstein-Uhlenbeck type
process with variable volatility. The volatility is proportional to the
difference of opinions of bulls and bears about the fair price of a security.
The paper is complementary to our previous work where mathematical proofs are
presented.
",k. vaninsky,,2012.0,,arXiv,Lykov2012,True,,arXiv,Not available,"Investor's sentiment in multi-agent model of the continuous double
auction",3fd7641411c647e0b9ae192aefc12833,http://arxiv.org/abs/1208.3083v4
16159," In the paper, a statistical procedure for estimating the parameters of zero
intelligence models by means of tick-by-tick quote (L1) data is proposed. A
large class of existing zero intelligence models is reviewed. It is shown that
all those models fail to describe the actual behavior of limit order books
close to the ask price. A generalized model, accommodating the discrepancies
found, is proposed and shown to give significant results for L1 data from three
US electronic markets. It is also demonstrated that the generalized model
performs significantly better than the reviewed models.
",martin smid,,2013.0,,arXiv,Šmíd2013,True,,arXiv,Not available,"Zero Intelligence Models of the Continuous Double Auction: Econometrics,
Empirical Evidence and Generalization",90dd66a4ecf6f2d3c8c78aa3cac935df,http://arxiv.org/abs/1303.6765v4
16160," We consider a simplified model of the continuous double auction where prices
are integers varying from $1$ to $N$ with limit orders and market orders, but
quantity per order limited to a single share. For this model, the order process
is equivalent to two $M/M/1$ queues. We study the behaviour of the auction in
the low-traffic limit where limit orders are immediately transformed into
market orders. In this limit, the distribution of prices can be computed
exactly and gives a reasonable approximation of the price distribution when the
ratio between the rate of order arrivals and the rate of order executions is
below $1/2$. This is further confirmed by the analysis of the first passage
time in $1$ or $N$.
",enrico scalas,,2016.0,10.1016/j.physa.2017.05.020,arXiv,Scalas2016,True,,arXiv,Not available,"Low-traffic limit and first-passage times for a simple model of the
continuous double auction",cc164df41a9f7e525a85ebdd4740af14,http://arxiv.org/abs/1603.09666v1
16161," We consider a simplified model of the continuous double auction where prices
are integers varying from $1$ to $N$ with limit orders and market orders, but
quantity per order limited to a single share. For this model, the order process
is equivalent to two $M/M/1$ queues. We study the behaviour of the auction in
the low-traffic limit where limit orders are immediately transformed into
market orders. In this limit, the distribution of prices can be computed
exactly and gives a reasonable approximation of the price distribution when the
ratio between the rate of order arrivals and the rate of order executions is
below $1/2$. This is further confirmed by the analysis of the first passage
time in $1$ or $N$.
",fabio rapallo,,2016.0,10.1016/j.physa.2017.05.020,arXiv,Scalas2016,True,,arXiv,Not available,"Low-traffic limit and first-passage times for a simple model of the
continuous double auction",cc164df41a9f7e525a85ebdd4740af14,http://arxiv.org/abs/1603.09666v1
16162," We consider a simplified model of the continuous double auction where prices
are integers varying from $1$ to $N$ with limit orders and market orders, but
quantity per order limited to a single share. For this model, the order process
is equivalent to two $M/M/1$ queues. We study the behaviour of the auction in
the low-traffic limit where limit orders are immediately transformed into
market orders. In this limit, the distribution of prices can be computed
exactly and gives a reasonable approximation of the price distribution when the
ratio between the rate of order arrivals and the rate of order executions is
below $1/2$. This is further confirmed by the analysis of the first passage
time in $1$ or $N$.
",tijana radivojevic,,2016.0,10.1016/j.physa.2017.05.020,arXiv,Scalas2016,True,,arXiv,Not available,"Low-traffic limit and first-passage times for a simple model of the
continuous double auction",cc164df41a9f7e525a85ebdd4740af14,http://arxiv.org/abs/1603.09666v1
16163," In an $\epsilon$-Nash equilibrium, a player can gain at most $\epsilon$ by
unilaterally changing his behaviour. For two-player (bimatrix) games with
payoffs in $[0,1]$, the best-known $\epsilon$ achievable in polynomial time is
0.3393. In general, for $n$-player games an $\epsilon$-Nash equilibrium can be
computed in polynomial time for an $\epsilon$ that is an increasing function of
$n$ but does not depend on the number of strategies of the players. For
three-player and four-player games the corresponding values of $\epsilon$ are
0.6022 and 0.7153, respectively. Polymatrix games are a restriction of general
$n$-player games where a player's payoff is the sum of payoffs from a number of
bimatrix games. There exists a very small but constant $\epsilon$ such that
computing an $\epsilon$-Nash equilibrium of a polymatrix game is \PPAD-hard.
Our main result is that a $(0.5+\delta)$-Nash equilibrium of an $n$-player
polymatrix game can be computed in time polynomial in the input size and
$\frac{1}{\delta}$. Inspired by the algorithm of Tsaknakis and Spirakis, our
algorithm uses gradient descent on the maximum regret of the players. We also
show that this algorithm can be applied to efficiently find a
$(0.5+\delta)$-Nash equilibrium in a two-player Bayesian game.
",argyrios deligkas,,2014.0,,arXiv,Deligkas2014,True,,arXiv,Not available,Computing Approximate Nash Equilibria in Polymatrix Games,deee9e5596b2ba0e9ce19db9b54313f7,http://arxiv.org/abs/1409.3741v2
16164," We study a phenomenological model for the continuous double auction,
equivalent to two independent $M/M/1$ queues. The continuous double auction
defines a continuous-time random walk for trade prices. The conditions for
ergodicity of the auction are derived and, as a consequence, three possible
regimes in the behavior of prices and logarithmic returns are observed. In the
ergodic regime, prices are unstable and one can observe an intermittent
behavior in the logarithmic returns. On the contrary, non-ergodicity triggers
stability of prices, even if two different regimes can be seen.
",tijana radivojevic,,2013.0,10.1371/journal.pone.0088095,arXiv,Radivojević2013,True,,arXiv,Not available,Ergodic transition in a simple model of the continuous double auction,a881476e5f6634320480e67f42b2de2d,http://arxiv.org/abs/1305.2716v1
16165," We study a phenomenological model for the continuous double auction,
equivalent to two independent $M/M/1$ queues. The continuous double auction
defines a continuous-time random walk for trade prices. The conditions for
ergodicity of the auction are derived and, as a consequence, three possible
regimes in the behavior of prices and logarithmic returns are observed. In the
ergodic regime, prices are unstable and one can observe an intermittent
behavior in the logarithmic returns. On the contrary, non-ergodicity triggers
stability of prices, even if two different regimes can be seen.
",jonatha anselmi,,2013.0,10.1371/journal.pone.0088095,arXiv,Radivojević2013,True,,arXiv,Not available,Ergodic transition in a simple model of the continuous double auction,a881476e5f6634320480e67f42b2de2d,http://arxiv.org/abs/1305.2716v1
16166," We study a phenomenological model for the continuous double auction,
equivalent to two independent $M/M/1$ queues. The continuous double auction
defines a continuous-time random walk for trade prices. The conditions for
ergodicity of the auction are derived and, as a consequence, three possible
regimes in the behavior of prices and logarithmic returns are observed. In the
ergodic regime, prices are unstable and one can observe an intermittent
behavior in the logarithmic returns. On the contrary, non-ergodicity triggers
stability of prices, even if two different regimes can be seen.
",enrico scalas,,2013.0,10.1371/journal.pone.0088095,arXiv,Radivojević2013,True,,arXiv,Not available,Ergodic transition in a simple model of the continuous double auction,a881476e5f6634320480e67f42b2de2d,http://arxiv.org/abs/1305.2716v1
16167," The idea of having the geolocation database monitor the secondary use of TV
white space (TVWS) spectrum and assist in coordinating the secondary usage is
gaining ground. Considering the home networking use case, we leverage the
geolocation database for interference-aware coordinated TVWS sharing among
secondary users (home networks) using {\em short-term auctions}, thereby
realize a dynamic secondary market. To enable this auctioning based coordinated
TVWS sharing framework, we propose an enhanced {\em market-driven TVWS spectrum
access model}. For the short-term auctions, we propose an online multi-unit,
iterative truthful mechanism called VERUM that takes into consideration
spatially heterogeneous spectrum availability, an inherent characteristic in
the TVWS context. We prove that VERUM is truthful (i.e., the best strategy for
every bidder is to bid based on its true valuation) and is also efficient in
that it allocates spectrum to users who value it the most. Evaluation results
from scenarios with real home distributions in urban and dense-urban
environments and using realistic TVWS spectrum availability maps show that
VERUM performs close to optimal allocation in terms of revenue for the
coordinating spectrum manager. Comparison with two existing efficient and
truthful multi-unit spectrum auction schemes, VERITAS and SATYA, shows that
VERUM fares better in terms of revenue, spectrum utilisation and percentage of
winning bidders in diverse conditions. Taking all of the above together, VERUM
can be seen to offer incentives to subscribed users encouraging them to use
TVWS spectrum through greater spectrum availability (as measured by percentage
of winning bidders) as well as to the coordinating spectrum manager through
revenue generation.
",saravana manickam,,2013.0,,arXiv,Manickam2013,True,,arXiv,Not available,"Auctioning based Coordinated TV White Space Spectrum Sharing for Home
Networks",28607bb6b5d6f6da144b069e7657ca2b,http://arxiv.org/abs/1307.0962v2
16168," The idea of having the geolocation database monitor the secondary use of TV
white space (TVWS) spectrum and assist in coordinating the secondary usage is
gaining ground. Considering the home networking use case, we leverage the
geolocation database for interference-aware coordinated TVWS sharing among
secondary users (home networks) using {\em short-term auctions}, thereby
realize a dynamic secondary market. To enable this auctioning based coordinated
TVWS sharing framework, we propose an enhanced {\em market-driven TVWS spectrum
access model}. For the short-term auctions, we propose an online multi-unit,
iterative truthful mechanism called VERUM that takes into consideration
spatially heterogeneous spectrum availability, an inherent characteristic in
the TVWS context. We prove that VERUM is truthful (i.e., the best strategy for
every bidder is to bid based on its true valuation) and is also efficient in
that it allocates spectrum to users who value it the most. Evaluation results
from scenarios with real home distributions in urban and dense-urban
environments and using realistic TVWS spectrum availability maps show that
VERUM performs close to optimal allocation in terms of revenue for the
coordinating spectrum manager. Comparison with two existing efficient and
truthful multi-unit spectrum auction schemes, VERITAS and SATYA, shows that
VERUM fares better in terms of revenue, spectrum utilisation and percentage of
winning bidders in diverse conditions. Taking all of the above together, VERUM
can be seen to offer incentives to subscribed users encouraging them to use
TVWS spectrum through greater spectrum availability (as measured by percentage
of winning bidders) as well as to the coordinating spectrum manager through
revenue generation.
",mahesh marina,,2013.0,,arXiv,Manickam2013,True,,arXiv,Not available,"Auctioning based Coordinated TV White Space Spectrum Sharing for Home
Networks",28607bb6b5d6f6da144b069e7657ca2b,http://arxiv.org/abs/1307.0962v2
16169," The idea of having the geolocation database monitor the secondary use of TV
white space (TVWS) spectrum and assist in coordinating the secondary usage is
gaining ground. Considering the home networking use case, we leverage the
geolocation database for interference-aware coordinated TVWS sharing among
secondary users (home networks) using {\em short-term auctions}, thereby
realize a dynamic secondary market. To enable this auctioning based coordinated
TVWS sharing framework, we propose an enhanced {\em market-driven TVWS spectrum
access model}. For the short-term auctions, we propose an online multi-unit,
iterative truthful mechanism called VERUM that takes into consideration
spatially heterogeneous spectrum availability, an inherent characteristic in
the TVWS context. We prove that VERUM is truthful (i.e., the best strategy for
every bidder is to bid based on its true valuation) and is also efficient in
that it allocates spectrum to users who value it the most. Evaluation results
from scenarios with real home distributions in urban and dense-urban
environments and using realistic TVWS spectrum availability maps show that
VERUM performs close to optimal allocation in terms of revenue for the
coordinating spectrum manager. Comparison with two existing efficient and
truthful multi-unit spectrum auction schemes, VERITAS and SATYA, shows that
VERUM fares better in terms of revenue, spectrum utilisation and percentage of
winning bidders in diverse conditions. Taking all of the above together, VERUM
can be seen to offer incentives to subscribed users encouraging them to use
TVWS spectrum through greater spectrum availability (as measured by percentage
of winning bidders) as well as to the coordinating spectrum manager through
revenue generation.
",sofia pediaditaki,,2013.0,,arXiv,Manickam2013,True,,arXiv,Not available,"Auctioning based Coordinated TV White Space Spectrum Sharing for Home
Networks",28607bb6b5d6f6da144b069e7657ca2b,http://arxiv.org/abs/1307.0962v2
16170," The idea of having the geolocation database monitor the secondary use of TV
white space (TVWS) spectrum and assist in coordinating the secondary usage is
gaining ground. Considering the home networking use case, we leverage the
geolocation database for interference-aware coordinated TVWS sharing among
secondary users (home networks) using {\em short-term auctions}, thereby
realize a dynamic secondary market. To enable this auctioning based coordinated
TVWS sharing framework, we propose an enhanced {\em market-driven TVWS spectrum
access model}. For the short-term auctions, we propose an online multi-unit,
iterative truthful mechanism called VERUM that takes into consideration
spatially heterogeneous spectrum availability, an inherent characteristic in
the TVWS context. We prove that VERUM is truthful (i.e., the best strategy for
every bidder is to bid based on its true valuation) and is also efficient in
that it allocates spectrum to users who value it the most. Evaluation results
from scenarios with real home distributions in urban and dense-urban
environments and using realistic TVWS spectrum availability maps show that
VERUM performs close to optimal allocation in terms of revenue for the
coordinating spectrum manager. Comparison with two existing efficient and
truthful multi-unit spectrum auction schemes, VERITAS and SATYA, shows that
VERUM fares better in terms of revenue, spectrum utilisation and percentage of
winning bidders in diverse conditions. Taking all of the above together, VERUM
can be seen to offer incentives to subscribed users encouraging them to use
TVWS spectrum through greater spectrum availability (as measured by percentage
of winning bidders) as well as to the coordinating spectrum manager through
revenue generation.
",maziar nekovee,,2013.0,,arXiv,Manickam2013,True,,arXiv,Not available,"Auctioning based Coordinated TV White Space Spectrum Sharing for Home
Networks",28607bb6b5d6f6da144b069e7657ca2b,http://arxiv.org/abs/1307.0962v2
16171," It is well-known that a market equilibrium with uniform prices often does not
exist in non-convex day-ahead electricity auctions. We consider the case of the
non-convex, uniform-price Pan-European day-ahead electricity market ""PCR""
(Price Coupling of Regions), with non-convexities arising from so-called
complex and block orders. Extending previous results, we propose a new
primal-dual framework for these auctions, which has applications in both
economic analysis and algorithm design. The contribution here is threefold.
First, from the algorithmic point of view, we give a non-trivial exact (i.e.
not approximate) linearization of a non-convex 'minimum income condition' that
must hold for complex orders arising from the Spanish market, avoiding the
introduction of any auxiliary variables, and allowing us to solve market
clearing instances involving most of the bidding products proposed in PCR using
off-the-shelf MIP solvers. Second, from the economic analysis point of view, we
give the first MILP formulations of optimization problems such as the
maximization of the traded volume, or the minimization of opportunity costs of
paradoxically rejected block bids. We first show on a toy example that these
two objectives are distinct from maximizing welfare. We also recover directly a
previously noted property of an alternative market model. Third, we provide
numerical experiments on realistic large-scale instances. They illustrate the
efficiency of the approach, as well as the economics trade-offs that may occur
in practice.
",mehdi madani,,2014.0,,arXiv,Madani2014,True,,arXiv,Not available,"A MIP framework for non-convex uniform price day-ahead electricity
auctions",e6308ce545a96b4eb3266863a5c46764,http://arxiv.org/abs/1410.4468v2
16172," It is well-known that a market equilibrium with uniform prices often does not
exist in non-convex day-ahead electricity auctions. We consider the case of the
non-convex, uniform-price Pan-European day-ahead electricity market ""PCR""
(Price Coupling of Regions), with non-convexities arising from so-called
complex and block orders. Extending previous results, we propose a new
primal-dual framework for these auctions, which has applications in both
economic analysis and algorithm design. The contribution here is threefold.
First, from the algorithmic point of view, we give a non-trivial exact (i.e.
not approximate) linearization of a non-convex 'minimum income condition' that
must hold for complex orders arising from the Spanish market, avoiding the
introduction of any auxiliary variables, and allowing us to solve market
clearing instances involving most of the bidding products proposed in PCR using
off-the-shelf MIP solvers. Second, from the economic analysis point of view, we
give the first MILP formulations of optimization problems such as the
maximization of the traded volume, or the minimization of opportunity costs of
paradoxically rejected block bids. We first show on a toy example that these
two objectives are distinct from maximizing welfare. We also recover directly a
previously noted property of an alternative market model. Third, we provide
numerical experiments on realistic large-scale instances. They illustrate the
efficiency of the approach, as well as the economics trade-offs that may occur
in practice.
",mathieu vyve,,2014.0,,arXiv,Madani2014,True,,arXiv,Not available,"A MIP framework for non-convex uniform price day-ahead electricity
auctions",e6308ce545a96b4eb3266863a5c46764,http://arxiv.org/abs/1410.4468v2
16173," We study a nonzero-sum game of two players which is a generalization of the
antagonistic noisy duel of discrete type. The game is considered from the point
of view of various criterions of optimality. We prove existence of
epsilon-equilibrium situations and show that the epsilon-equilibrium strategies
that we have found are epsilon-maxmin. Conditions under which the equilibrium
plays are Pareto-optimal are given.
Keywords: noisy duel, payoff function, strategy, equilibrium situation,
Pareto optimality, the value of a game.
",lyubov positselskaya,,2007.0,,arXiv,Positselskaya2007,True,,arXiv,Not available,"Nonantagonistic noisy duels of discrete type with an arbitrary number of
actions",74fbc3c7751b1d167c574fec75c7ecef,http://arxiv.org/abs/0708.2023v2
16174," In an $\epsilon$-Nash equilibrium, a player can gain at most $\epsilon$ by
unilaterally changing his behaviour. For two-player (bimatrix) games with
payoffs in $[0,1]$, the best-known$\epsilon$ achievable in polynomial time is
0.3393. In general, for $n$-player games an $\epsilon$-Nash equilibrium can be
computed in polynomial time for an $\epsilon$ that is an increasing function of
$n$ but does not depend on the number of strategies of the players. For
three-player and four-player games the corresponding values of $\epsilon$ are
0.6022 and 0.7153, respectively. Polymatrix games are a restriction of general
$n$-player games where a player's payoff is the sum of payoffs from a number of
bimatrix games. There exists a very small but constant $\epsilon$ such that
computing an $\epsilon$-Nash equilibrium of a polymatrix game is \PPAD-hard.
Our main result is that a $(0.5+\delta)$-Nash equilibrium of an $n$-player
polymatrix game can be computed in time polynomial in the input size and
$\frac{1}{\delta}$. Inspired by the algorithm of Tsaknakis and Spirakis, our
algorithm uses gradient descent on the maximum regret of the players. We also
show that this algorithm can be applied to efficiently find a
$(0.5+\delta)$-Nash equilibrium in a two-player Bayesian game.
",john fearnley,,2014.0,,arXiv,Deligkas2014,True,,arXiv,Not available,Computing Approximate Nash Equilibria in Polymatrix Games,deee9e5596b2ba0e9ce19db9b54313f7,http://arxiv.org/abs/1409.3741v2
16175," Modern commercial Internet search engines display advertisements along side
the search results in response to user queries. Such sponsored search relies on
market mechanisms to elicit prices for these advertisements, making use of an
auction among advertisers who bid in order to have their ads shown for specific
keywords. We present an overview of the current systems for such auctions and
also describe the underlying game-theoretic aspects. The game involves three
parties--advertisers, the search engine, and search users--and we present
example research directions that emphasize the role of each. The algorithms for
bidding and pricing in these games use techniques from three mathematical
areas: mechanism design, optimization, and statistical estimation. Finally, we
present some challenges in sponsored search advertising.
",jon feldman,,2008.0,,arXiv,Feldman2008,True,,arXiv,Not available,Algorithmic Methods for Sponsored Search Advertising,21df29ce558c708e11fa9546f4dae141,http://arxiv.org/abs/0805.1759v1
16176," Modern commercial Internet search engines display advertisements along side
the search results in response to user queries. Such sponsored search relies on
market mechanisms to elicit prices for these advertisements, making use of an
auction among advertisers who bid in order to have their ads shown for specific
keywords. We present an overview of the current systems for such auctions and
also describe the underlying game-theoretic aspects. The game involves three
parties--advertisers, the search engine, and search users--and we present
example research directions that emphasize the role of each. The algorithms for
bidding and pricing in these games use techniques from three mathematical
areas: mechanism design, optimization, and statistical estimation. Finally, we
present some challenges in sponsored search advertising.
",s. muthukrishnan,,2008.0,,arXiv,Feldman2008,True,,arXiv,Not available,Algorithmic Methods for Sponsored Search Advertising,21df29ce558c708e11fa9546f4dae141,http://arxiv.org/abs/0805.1759v1
16177," We give a new proof that any candy-passing game on a graph G with at least
4|E(G)|-|V(G)| candies stabilizes. (This result was first proven in
arXiv:0807.4450.) Unlike the prior literature on candy-passing games, we use
methods from the general theory of chip-firing games which allow us to obtain a
polynomial bound on the number of rounds before stabilization.
",paul kominers,,2008.0,,arXiv,Kominers2008,True,,arXiv,Not available,"Candy-passing Games on General Graphs, II",c1e22f94678a37a78f937c8ed461d31e,http://arxiv.org/abs/0807.4655v1
16178," We give a new proof that any candy-passing game on a graph G with at least
4|E(G)|-|V(G)| candies stabilizes. (This result was first proven in
arXiv:0807.4450.) Unlike the prior literature on candy-passing games, we use
methods from the general theory of chip-firing games which allow us to obtain a
polynomial bound on the number of rounds before stabilization.
",scott kominers,,2008.0,,arXiv,Kominers2008,True,,arXiv,Not available,"Candy-passing Games on General Graphs, II",c1e22f94678a37a78f937c8ed461d31e,http://arxiv.org/abs/0807.4655v1
16179," We consider games played on finite graphs, whose goal is to obtain a trace
belonging to a given set of winning traces. We focus on those states from which
Player 1 cannot force a win. We explore and compare several criteria for
establishing what is the preferable behavior of Player 1 from those states.
Along the way, we prove several results of theoretical and practical
interest, such as a characterization of admissible strategies, which also
provides a simple algorithm for computing such strategies for various common
goals, and the equivalence between the existence of positional winning
strategies and the existence of positional subgame perfect strategies.
",marco faella,,2008.0,,arXiv,Faella2008,True,,arXiv,Not available,Best-Effort Strategies for Losing States,404ff0b0098a2c92d719d5d645b2fa1d,http://arxiv.org/abs/0811.1664v1
16180," Alpaga is a solver for two-player parity games with imperfect information.
Given the description of a game, it determines whether the first player can
ensure to win and, if so, it constructs a winning strategy. The tool provides a
symbolic implementation of a recent algorithm based on antichains.
",dietmar berwanger,,2009.0,,arXiv,Berwanger2009,True,,arXiv,Not available,Alpaga: A Tool for Solving Parity Games with Imperfect Information,a0a65ef623681761ac63b5481c7f0bcf,http://arxiv.org/abs/0901.4728v1
16181," Alpaga is a solver for two-player parity games with imperfect information.
Given the description of a game, it determines whether the first player can
ensure to win and, if so, it constructs a winning strategy. The tool provides a
symbolic implementation of a recent algorithm based on antichains.
",krishnendu chatterjee,,2009.0,,arXiv,Berwanger2009,True,,arXiv,Not available,Alpaga: A Tool for Solving Parity Games with Imperfect Information,a0a65ef623681761ac63b5481c7f0bcf,http://arxiv.org/abs/0901.4728v1
16182," Alpaga is a solver for two-player parity games with imperfect information.
Given the description of a game, it determines whether the first player can
ensure to win and, if so, it constructs a winning strategy. The tool provides a
symbolic implementation of a recent algorithm based on antichains.
",martin wulf,,2009.0,,arXiv,Berwanger2009,True,,arXiv,Not available,Alpaga: A Tool for Solving Parity Games with Imperfect Information,a0a65ef623681761ac63b5481c7f0bcf,http://arxiv.org/abs/0901.4728v1
16183," Alpaga is a solver for two-player parity games with imperfect information.
Given the description of a game, it determines whether the first player can
ensure to win and, if so, it constructs a winning strategy. The tool provides a
symbolic implementation of a recent algorithm based on antichains.
",laurent doyen,,2009.0,,arXiv,Berwanger2009,True,,arXiv,Not available,Alpaga: A Tool for Solving Parity Games with Imperfect Information,a0a65ef623681761ac63b5481c7f0bcf,http://arxiv.org/abs/0901.4728v1
16184," Alpaga is a solver for two-player parity games with imperfect information.
Given the description of a game, it determines whether the first player can
ensure to win and, if so, it constructs a winning strategy. The tool provides a
symbolic implementation of a recent algorithm based on antichains.
",thomas henzinger,,2009.0,,arXiv,Berwanger2009,True,,arXiv,Not available,Alpaga: A Tool for Solving Parity Games with Imperfect Information,a0a65ef623681761ac63b5481c7f0bcf,http://arxiv.org/abs/0901.4728v1
16185," In an $\epsilon$-Nash equilibrium, a player can gain at most $\epsilon$ by
unilaterally changing his behaviour. For two-player (bimatrix) games with
payoffs in $[0,1]$, the best-known$\epsilon$ achievable in polynomial time is
0.3393. In general, for $n$-player games an $\epsilon$-Nash equilibrium can be
computed in polynomial time for an $\epsilon$ that is an increasing function of
$n$ but does not depend on the number of strategies of the players. For
three-player and four-player games the corresponding values of $\epsilon$ are
0.6022 and 0.7153, respectively. Polymatrix games are a restriction of general
$n$-player games where a player's payoff is the sum of payoffs from a number of
bimatrix games. There exists a very small but constant $\epsilon$ such that
computing an $\epsilon$-Nash equilibrium of a polymatrix game is \PPAD-hard.
Our main result is that a $(0.5+\delta)$-Nash equilibrium of an $n$-player
polymatrix game can be computed in time polynomial in the input size and
$\frac{1}{\delta}$. Inspired by the algorithm of Tsaknakis and Spirakis, our
algorithm uses gradient descent on the maximum regret of the players. We also
show that this algorithm can be applied to efficiently find a
$(0.5+\delta)$-Nash equilibrium in a two-player Bayesian game.
",rahul savani,,2014.0,,arXiv,Deligkas2014,True,,arXiv,Not available,Computing Approximate Nash Equilibria in Polymatrix Games,deee9e5596b2ba0e9ce19db9b54313f7,http://arxiv.org/abs/1409.3741v2
16186," Stochastic games are a natural model for the synthesis of controllers
confronted to adversarial and/or random actions. In particular,
$\omega$-regular games of infinite length can represent reactive systems which
are not expected to reach a correct state, but rather to handle a continuous
stream of events. One critical resource in such applications is the memory used
by the controller. In this paper, we study the amount of memory that can be
saved through the use of randomisation in strategies, and present matching
upper and lower bounds for stochastic Muller games.
",florian horn,,2009.0,,"26th International Symposium on Theoretical Aspects of Computer
Science - STACS 2009 (2009) 541-552",Horn2009,True,,arXiv,Not available,Random Fruits on the Zielonka Tree,ca54a1162e7f4390471e09aded16b77b,http://arxiv.org/abs/0902.2736v1
16187," We discuss a duel-type game in which Player I uses his resource continuously
and Player II distributes it by discrete portions. Each player knows how much
resources he and his opponent have at every moment of time. The solution of the
game is given in an explicit form.
Keywords: noisy duel, payoff, strategy, the value of a game, consumption of
resource.
",lyubov positselskaya,,2009.0,,arXiv,Positselskaya2009,True,,arXiv,Not available,Noisy fighter-bomber duel,483dc4cee377eb18bcb0fcec248313d3,http://arxiv.org/abs/0910.0548v1
16188," We address the equilibrium concept of a reverse auction game so that no one
can enhance the individual payoff by a unilateral change when all the others
follow a certain strategy. In this approach the combinatorial possibilities to
consider become very much involved even for a small number of players, which
has hindered a precise analysis in previous works. We here present a systematic
way to reach the solution for a general number of players, and show that this
game is an example of conflict between the group and the individual interests.
",seung baek,,2010.0,10.1142/S0219477510000071,"Fluctuation and Noise Letters, 9:1, pp. 61-68 (2010)",Baek2010,True,,arXiv,Not available,Equilibrium solution to the lowest unique positive integer game,d4c58181d36866f7e28e092a07265217,http://arxiv.org/abs/1001.1065v1
16189," We address the equilibrium concept of a reverse auction game so that no one
can enhance the individual payoff by a unilateral change when all the others
follow a certain strategy. In this approach the combinatorial possibilities to
consider become very much involved even for a small number of players, which
has hindered a precise analysis in previous works. We here present a systematic
way to reach the solution for a general number of players, and show that this
game is an example of conflict between the group and the individual interests.
",sebastian bernhardsson,,2010.0,10.1142/S0219477510000071,"Fluctuation and Noise Letters, 9:1, pp. 61-68 (2010)",Baek2010,True,,arXiv,Not available,Equilibrium solution to the lowest unique positive integer game,d4c58181d36866f7e28e092a07265217,http://arxiv.org/abs/1001.1065v1
16190," Within the context of games on networks S. Goyal (Goya (2007), pg. 39) posed
the following problem. Under any arbitrary but fixed topology, does there exist
at least one pure Nash equilibrium that exhibits a positive relation between
the cardinality of a player's set of neighbors and its utility payoff? In this
paper we present a class of topologies/games in which pure Nash equilibria with
the above property do not exist.
",ali kakhbod,,2010.0,,"Economic Bulletin (EB). vol 31, no. 3, pp. 2177-2184, 2011",Kakhbod2010,True,,arXiv,Not available,Games on Social Networks: On a Problem Posed by Goyal,c7ba3747a9f50c624ea6a7f8547d2163,http://arxiv.org/abs/1001.3896v4
16191," Within the context of games on networks S. Goyal (Goya (2007), pg. 39) posed
the following problem. Under any arbitrary but fixed topology, does there exist
at least one pure Nash equilibrium that exhibits a positive relation between
the cardinality of a player's set of neighbors and its utility payoff? In this
paper we present a class of topologies/games in which pure Nash equilibria with
the above property do not exist.
",demosthenis teneketzis,,2010.0,,"Economic Bulletin (EB). vol 31, no. 3, pp. 2177-2184, 2011",Kakhbod2010,True,,arXiv,Not available,Games on Social Networks: On a Problem Posed by Goyal,c7ba3747a9f50c624ea6a7f8547d2163,http://arxiv.org/abs/1001.3896v4
16192," Calibrated strategies can be obtained by performing strategies that have no
internal regret in some auxiliary game. Such strategies can be constructed
explicitly with the use of Blackwell's approachability theorem, in an other
auxiliary game. We establish the converse: a strategy that approaches a convex
$B$-set can be derived from the construction of a calibrated strategy. We
develop these tools in the framework of a game with partial monitoring, where
players do not observe the actions of their opponents but receive random
signals, to define a notion of internal regret and construct strategies that
have no such regret.
",vianney perchet,,2010.0,,arXiv,Perchet2010,True,,arXiv,Not available,Calibration and Internal no-Regret with Partial Monitoring,25d9669e384ea20e2045565fb5869748,http://arxiv.org/abs/1006.1746v1
16193," The cornerstone underpinning deep learning is the guarantee that gradient
descent on an objective converges to local minima. Unfortunately, this
guarantee fails in settings, such as generative adversarial nets, where there
are multiple interacting losses. The behavior of gradient-based methods in
games is not well understood -- and is becoming increasingly important as
adversarial and multi-objective architectures proliferate. In this paper, we
develop new techniques to understand and control the dynamics in general games.
The key result is to decompose the second-order dynamics into two components.
The first is related to potential games, which reduce to gradient descent on an
implicit function; the second relates to Hamiltonian games, a new class of
games that obey a conservation law, akin to conservation laws in classical
mechanical systems. The decomposition motivates Symplectic Gradient Adjustment
(SGA), a new algorithm for finding stable fixed points in general games. Basic
experiments show SGA is competitive with recently proposed algorithms for
finding stable fixed points in GANs -- whilst at the same time being applicable
to -- and having guarantees in -- much more general games.
",david balduzzi,,2018.0,,"PMLR volume 80, 2018",Balduzzi2018,True,,arXiv,Not available,The Mechanics of n-Player Differentiable Games,48bcea52c4fc493a73c3f55d079ea0a2,http://arxiv.org/abs/1802.05642v2
16194," The cornerstone underpinning deep learning is the guarantee that gradient
descent on an objective converges to local minima. Unfortunately, this
guarantee fails in settings, such as generative adversarial nets, where there
are multiple interacting losses. The behavior of gradient-based methods in
games is not well understood -- and is becoming increasingly important as
adversarial and multi-objective architectures proliferate. In this paper, we
develop new techniques to understand and control the dynamics in general games.
The key result is to decompose the second-order dynamics into two components.
The first is related to potential games, which reduce to gradient descent on an
implicit function; the second relates to Hamiltonian games, a new class of
games that obey a conservation law, akin to conservation laws in classical
mechanical systems. The decomposition motivates Symplectic Gradient Adjustment
(SGA), a new algorithm for finding stable fixed points in general games. Basic
experiments show SGA is competitive with recently proposed algorithms for
finding stable fixed points in GANs -- whilst at the same time being applicable
to -- and having guarantees in -- much more general games.
",sebastien racaniere,,2018.0,,"PMLR volume 80, 2018",Balduzzi2018,True,,arXiv,Not available,The Mechanics of n-Player Differentiable Games,48bcea52c4fc493a73c3f55d079ea0a2,http://arxiv.org/abs/1802.05642v2
16195," The cornerstone underpinning deep learning is the guarantee that gradient
descent on an objective converges to local minima. Unfortunately, this
guarantee fails in settings, such as generative adversarial nets, where there
are multiple interacting losses. The behavior of gradient-based methods in
games is not well understood -- and is becoming increasingly important as
adversarial and multi-objective architectures proliferate. In this paper, we
develop new techniques to understand and control the dynamics in general games.
The key result is to decompose the second-order dynamics into two components.
The first is related to potential games, which reduce to gradient descent on an
implicit function; the second relates to Hamiltonian games, a new class of
games that obey a conservation law, akin to conservation laws in classical
mechanical systems. The decomposition motivates Symplectic Gradient Adjustment
(SGA), a new algorithm for finding stable fixed points in general games. Basic
experiments show SGA is competitive with recently proposed algorithms for
finding stable fixed points in GANs -- whilst at the same time being applicable
to -- and having guarantees in -- much more general games.
",james martens,,2018.0,,"PMLR volume 80, 2018",Balduzzi2018,True,,arXiv,Not available,The Mechanics of n-Player Differentiable Games,48bcea52c4fc493a73c3f55d079ea0a2,http://arxiv.org/abs/1802.05642v2
16196," In an $\epsilon$-Nash equilibrium, a player can gain at most $\epsilon$ by
unilaterally changing his behaviour. For two-player (bimatrix) games with
payoffs in $[0,1]$, the best-known$\epsilon$ achievable in polynomial time is
0.3393. In general, for $n$-player games an $\epsilon$-Nash equilibrium can be
computed in polynomial time for an $\epsilon$ that is an increasing function of
$n$ but does not depend on the number of strategies of the players. For
three-player and four-player games the corresponding values of $\epsilon$ are
0.6022 and 0.7153, respectively. Polymatrix games are a restriction of general
$n$-player games where a player's payoff is the sum of payoffs from a number of
bimatrix games. There exists a very small but constant $\epsilon$ such that
computing an $\epsilon$-Nash equilibrium of a polymatrix game is \PPAD-hard.
Our main result is that a $(0.5+\delta)$-Nash equilibrium of an $n$-player
polymatrix game can be computed in time polynomial in the input size and
$\frac{1}{\delta}$. Inspired by the algorithm of Tsaknakis and Spirakis, our
algorithm uses gradient descent on the maximum regret of the players. We also
show that this algorithm can be applied to efficiently find a
$(0.5+\delta)$-Nash equilibrium in a two-player Bayesian game.
",paul spirakis,,2014.0,,arXiv,Deligkas2014,True,,arXiv,Not available,Computing Approximate Nash Equilibria in Polymatrix Games,deee9e5596b2ba0e9ce19db9b54313f7,http://arxiv.org/abs/1409.3741v2
16197," The cornerstone underpinning deep learning is the guarantee that gradient
descent on an objective converges to local minima. Unfortunately, this
guarantee fails in settings, such as generative adversarial nets, where there
are multiple interacting losses. The behavior of gradient-based methods in
games is not well understood -- and is becoming increasingly important as
adversarial and multi-objective architectures proliferate. In this paper, we
develop new techniques to understand and control the dynamics in general games.
The key result is to decompose the second-order dynamics into two components.
The first is related to potential games, which reduce to gradient descent on an
implicit function; the second relates to Hamiltonian games, a new class of
games that obey a conservation law, akin to conservation laws in classical
mechanical systems. The decomposition motivates Symplectic Gradient Adjustment
(SGA), a new algorithm for finding stable fixed points in general games. Basic
experiments show SGA is competitive with recently proposed algorithms for
finding stable fixed points in GANs -- whilst at the same time being applicable
to -- and having guarantees in -- much more general games.
",jakob foerster,,2018.0,,"PMLR volume 80, 2018",Balduzzi2018,True,,arXiv,Not available,The Mechanics of n-Player Differentiable Games,48bcea52c4fc493a73c3f55d079ea0a2,http://arxiv.org/abs/1802.05642v2
16198," The cornerstone underpinning deep learning is the guarantee that gradient
descent on an objective converges to local minima. Unfortunately, this
guarantee fails in settings, such as generative adversarial nets, where there
are multiple interacting losses. The behavior of gradient-based methods in
games is not well understood -- and is becoming increasingly important as
adversarial and multi-objective architectures proliferate. In this paper, we
develop new techniques to understand and control the dynamics in general games.
The key result is to decompose the second-order dynamics into two components.
The first is related to potential games, which reduce to gradient descent on an
implicit function; the second relates to Hamiltonian games, a new class of
games that obey a conservation law, akin to conservation laws in classical
mechanical systems. The decomposition motivates Symplectic Gradient Adjustment
(SGA), a new algorithm for finding stable fixed points in general games. Basic
experiments show SGA is competitive with recently proposed algorithms for
finding stable fixed points in GANs -- whilst at the same time being applicable
to -- and having guarantees in -- much more general games.
",karl tuyls,,2018.0,,"PMLR volume 80, 2018",Balduzzi2018,True,,arXiv,Not available,The Mechanics of n-Player Differentiable Games,48bcea52c4fc493a73c3f55d079ea0a2,http://arxiv.org/abs/1802.05642v2
16199," The cornerstone underpinning deep learning is the guarantee that gradient
descent on an objective converges to local minima. Unfortunately, this
guarantee fails in settings, such as generative adversarial nets, where there
are multiple interacting losses. The behavior of gradient-based methods in
games is not well understood -- and is becoming increasingly important as
adversarial and multi-objective architectures proliferate. In this paper, we
develop new techniques to understand and control the dynamics in general games.
The key result is to decompose the second-order dynamics into two components.
The first is related to potential games, which reduce to gradient descent on an
implicit function; the second relates to Hamiltonian games, a new class of
games that obey a conservation law, akin to conservation laws in classical
mechanical systems. The decomposition motivates Symplectic Gradient Adjustment
(SGA), a new algorithm for finding stable fixed points in general games. Basic
experiments show SGA is competitive with recently proposed algorithms for
finding stable fixed points in GANs -- whilst at the same time being applicable
to -- and having guarantees in -- much more general games.
",thore graepel,,2018.0,,"PMLR volume 80, 2018",Balduzzi2018,True,,arXiv,Not available,The Mechanics of n-Player Differentiable Games,48bcea52c4fc493a73c3f55d079ea0a2,http://arxiv.org/abs/1802.05642v2
16200," This paper studies the optimization of strategies in the context of
possibly randomized two-player zero-sum games with incomplete information. We compare 5
algorithms for tuning the parameters of strategies over a benchmark of 12
games. A first evolutionary approach consists in designing a highly randomized
opponent (called naive opponent) and optimizing the parametric strategy against
it; a second one is optimizing iteratively the strategy, i.e. constructing a
sequence of strategies starting from the naive one. Two versions of
coevolution, real and approximate, are also tested, as well as a seed method.
The coevolution methods performed well, but results were not stable from one game to
another. In spite of its simplicity, the seed method, which can be seen as an
extremal version of coevolution, works even when nothing else works.
Incidentally, these methods brought out some unexpected strategies for some
games, such as Batawaf or the game of War, which seem at first view to be
purely random games without any structured actions available to the players,
or Guess Who, where a dichotomy between the characters seems to be the most
reasonable strategy. All source code for the games is written in Matlab/Octave
and is freely available for download.
",marie-liesse cauwet,,2018.0,,"IEEE Congress on Evolutionary Computation, Jul 2018, Rio de
Janeiro, Brazil",Cauwet2018,True,,arXiv,Not available,"Surprising strategies obtained by stochastic optimization in partially
observable games",63d2d359d2d15e8a31100d0e197af54d,http://arxiv.org/abs/1807.01877v1
16201," This paper studies the optimization of strategies in the context of
possibly randomized two-player zero-sum games with incomplete information. We compare 5
algorithms for tuning the parameters of strategies over a benchmark of 12
games. A first evolutionary approach consists in designing a highly randomized
opponent (called naive opponent) and optimizing the parametric strategy against
it; a second one is optimizing iteratively the strategy, i.e. constructing a
sequence of strategies starting from the naive one. Two versions of
coevolution, real and approximate, are also tested, as well as a seed method.
The coevolution methods performed well, but results were not stable from one game to
another. In spite of its simplicity, the seed method, which can be seen as an
extremal version of coevolution, works even when nothing else works.
Incidentally, these methods brought out some unexpected strategies for some
games, such as Batawaf or the game of War, which seem at first view to be
purely random games without any structured actions available to the players,
or Guess Who, where a dichotomy between the characters seems to be the most
reasonable strategy. All source code for the games is written in Matlab/Octave
and is freely available for download.
",olivier teytaud,,2018.0,,"IEEE Congress on Evolutionary Computation, Jul 2018, Rio de
Janeiro, Brazil",Cauwet2018,True,,arXiv,Not available,"Surprising strategies obtained by stochastic optimization in partially
observable games",63d2d359d2d15e8a31100d0e197af54d,http://arxiv.org/abs/1807.01877v1
16202," We study the effects of individual perceptions of payoffs in two-player
games. In particular we consider the setting in which individuals' perceptions
of the game are influenced by their previous experiences and outcomes.
Accordingly, we introduce a framework based on evolutionary games where
individuals have the capacity to perceive their interactions in different ways.
Starting from the narrative of social behaviors in a pub as an illustration, we
first study the combination of the prisoner's dilemma and harmony game as two
alternative perceptions of the same situation. Considering a selection of game
pairs, our results show that the interplay between perception dynamics and game
payoffs gives rise to non-linear phenomena unexpected in each of the games
separately, such as catastrophic phase transitions in the cooperation basin of
attraction, Hopf bifurcations and cycles of cooperation and defection.
Combining analytical techniques with multi-agent simulations we also show how
introducing individual perceptions can cause non-trivial dynamical behaviors to
emerge, which cannot be obtained by analyzing the system as a whole.
Specifically, initial heterogeneities at the microscopic level can yield a
polarization effect that is unpredictable at the macroscopic level. This
framework opens the door to the exploration of new ways of understanding the
link between the emergence of cooperation and individual preferences and
perceptions, with potential applications beyond social interactions.
",alberto antonioni,,2018.0,,arXiv,Antonioni2018,True,,arXiv,Not available,Individual perception dynamics in drunk games,2eb2165b041e3b27657da0e4775eb6f9,http://arxiv.org/abs/1807.08635v1
16203," We study the effects of individual perceptions of payoffs in two-player
games. In particular we consider the setting in which individuals' perceptions
of the game are influenced by their previous experiences and outcomes.
Accordingly, we introduce a framework based on evolutionary games where
individuals have the capacity to perceive their interactions in different ways.
Starting from the narrative of social behaviors in a pub as an illustration, we
first study the combination of the prisoner's dilemma and harmony game as two
alternative perceptions of the same situation. Considering a selection of game
pairs, our results show that the interplay between perception dynamics and game
payoffs gives rise to non-linear phenomena unexpected in each of the games
separately, such as catastrophic phase transitions in the cooperation basin of
attraction, Hopf bifurcations and cycles of cooperation and defection.
Combining analytical techniques with multi-agent simulations we also show how
introducing individual perceptions can cause non-trivial dynamical behaviors to
emerge, which cannot be obtained by analyzing the system as a whole.
Specifically, initial heterogeneities at the microscopic level can yield a
polarization effect that is unpredictable at the macroscopic level. This
framework opens the door to the exploration of new ways of understanding the
link between the emergence of cooperation and individual preferences and
perceptions, with potential applications beyond social interactions.
",luis martinez-vaquero,,2018.0,,arXiv,Antonioni2018,True,,arXiv,Not available,Individual perception dynamics in drunk games,2eb2165b041e3b27657da0e4775eb6f9,http://arxiv.org/abs/1807.08635v1
16204," We study the effects of individual perceptions of payoffs in two-player
games. In particular we consider the setting in which individuals' perceptions
of the game are influenced by their previous experiences and outcomes.
Accordingly, we introduce a framework based on evolutionary games where
individuals have the capacity to perceive their interactions in different ways.
Starting from the narrative of social behaviors in a pub as an illustration, we
first study the combination of the prisoner's dilemma and harmony game as two
alternative perceptions of the same situation. Considering a selection of game
pairs, our results show that the interplay between perception dynamics and game
payoffs gives rise to non-linear phenomena unexpected in each of the games
separately, such as catastrophic phase transitions in the cooperation basin of
attraction, Hopf bifurcations and cycles of cooperation and defection.
Combining analytical techniques with multi-agent simulations we also show how
introducing individual perceptions can cause non-trivial dynamical behaviors to
emerge, which cannot be obtained by analyzing the system as a whole.
Specifically, initial heterogeneities at the microscopic level can yield a
polarization effect that is unpredictable at the macroscopic level. This
framework opens the door to the exploration of new ways of understanding the
link between the emergence of cooperation and individual preferences and
perceptions, with potential applications beyond social interactions.
",cole mathis,,2018.0,,arXiv,Antonioni2018,True,,arXiv,Not available,Individual perception dynamics in drunk games,2eb2165b041e3b27657da0e4775eb6f9,http://arxiv.org/abs/1807.08635v1
16205," We study the effects of individual perceptions of payoffs in two-player
games. In particular we consider the setting in which individuals' perceptions
of the game are influenced by their previous experiences and outcomes.
Accordingly, we introduce a framework based on evolutionary games where
individuals have the capacity to perceive their interactions in different ways.
Starting from the narrative of social behaviors in a pub as an illustration, we
first study the combination of the prisoner's dilemma and harmony game as two
alternative perceptions of the same situation. Considering a selection of game
pairs, our results show that the interplay between perception dynamics and game
payoffs gives rise to non-linear phenomena unexpected in each of the games
separately, such as catastrophic phase transitions in the cooperation basin of
attraction, Hopf bifurcations and cycles of cooperation and defection.
Combining analytical techniques with multi-agent simulations we also show how
introducing individual perceptions can cause non-trivial dynamical behaviors to
emerge, which cannot be obtained by analyzing the system as a whole.
Specifically, initial heterogeneities at the microscopic level can yield a
polarization effect that is unpredictable at the macroscopic level. This
framework opens the door to the exploration of new ways of understanding the
link between the emergence of cooperation and individual preferences and
perceptions, with potential applications beyond social interactions.
",leto peel,,2018.0,,arXiv,Antonioni2018,True,,arXiv,Not available,Individual perception dynamics in drunk games,2eb2165b041e3b27657da0e4775eb6f9,http://arxiv.org/abs/1807.08635v1
16206," We study the effects of individual perceptions of payoffs in two-player
games. In particular we consider the setting in which individuals' perceptions
of the game are influenced by their previous experiences and outcomes.
Accordingly, we introduce a framework based on evolutionary games where
individuals have the capacity to perceive their interactions in different ways.
Starting from the narrative of social behaviors in a pub as an illustration, we
first study the combination of the prisoner's dilemma and harmony game as two
alternative perceptions of the same situation. Considering a selection of game
pairs, our results show that the interplay between perception dynamics and game
payoffs gives rise to non-linear phenomena unexpected in each of the games
separately, such as catastrophic phase transitions in the cooperation basin of
attraction, Hopf bifurcations and cycles of cooperation and defection.
Combining analytical techniques with multi-agent simulations we also show how
introducing individual perceptions can cause non-trivial dynamical behaviors to
emerge, which cannot be obtained by analyzing the system as a whole.
Specifically, initial heterogeneities at the microscopic level can yield a
polarization effect that is unpredictable at the macroscopic level. This
framework opens the door to the exploration of new ways of understanding the
link between the emergence of cooperation and individual preferences and
perceptions, with potential applications beyond social interactions.
",massimo stella,,2018.0,,arXiv,Antonioni2018,True,,arXiv,Not available,Individual perception dynamics in drunk games,2eb2165b041e3b27657da0e4775eb6f9,http://arxiv.org/abs/1807.08635v1
16207," We consider a multilevel network game, where nodes can improve their
communication costs by connecting to a high-speed network. The $n$ nodes are
connected by a static network and each node can decide individually to become a
gateway to the high-speed network. The goal of a node $v$ is to minimize its
private costs, i.e., the sum (SUM-game) or maximum (MAX-game) of communication
distances from $v$ to all other nodes plus a fixed price $\alpha > 0$ if it
decides to be a gateway. Between gateways the communication distance is $0$,
and gateways also improve other nodes' distances by behaving as shortcuts. For
the SUM-game, we show that for $\alpha \leq n-1$, the price of anarchy is
$\Theta(n/\sqrt{\alpha})$ and in this range equilibria always exist. In range
$\alpha \in (n-1,n(n-1))$ the price of anarchy is $\Theta(\sqrt{\alpha})$, and
for $\alpha \geq n(n-1)$ it is constant. For the MAX-game, we show that the
price of anarchy is either $\Theta(1 + n/\sqrt{\alpha})$, for $\alpha\geq 1$,
or else $1$. Given a graph with girth of at least $4\alpha$, equilibria always
exist. Concerning the dynamics, neither the SUM-game nor the MAX-game is a
potential game. For the SUM-game, we even show that it is not weakly acyclic.
",sebastian abshoff,,2014.0,,arXiv,Abshoff2014,True,,arXiv,Not available,Multilevel Network Games,f191d00dcd151d7e14cb47baece4cc6f,http://arxiv.org/abs/1409.5383v1
16208," The decisions that human beings make to allocate time have significant bearing
on economic output and on the sustenance of social networks. The time
allocation problem motivates our formal analysis of the resource allocation
game, where agents on a social network, who have asymmetric, private
interaction preferences, make decisions on how to allocate time, a bounded
endowment, over their neighbors. Unlike the well-known opinion formation game
on a social network, our game appears not to be a potential game, and the
Best-Response dynamics is non-differentiable, making the analysis of
Best-Response dynamics non-trivial.
In our game, we consider two types of player behavior, namely optimistic or
pessimistic, based on how they use their time endowment over their neighbors.
To analyze Best-Response dynamics, we circumvent the problem of the game not
being a potential game, through the lens of a novel two-level potential
function approach. We show that the Best-Response dynamics converges point-wise
to a Nash Equilibrium when players are all optimistic, all pessimistic, or a
mix of both types. Finally, we show that the Nash Equilibrium set is non-convex but
connected, and Price of Anarchy is unbounded while Price of Stability is one.
Extensive simulations over a stylized grid reveal that the distribution of the
quality of the convergence points is unimodal; we conjecture that the presence
of unimodality is tied to the connectedness of the Nash Equilibrium set.
",wei-chun lee,,2018.0,,arXiv,Lee2018,True,,arXiv,Not available,"Resource Allocation Game on Social Networks: Best Response Dynamics and
Convergence",c590901f0ccf64a552f87197644a0df9,http://arxiv.org/abs/1808.08260v1
16209," The decisions that human beings make to allocate time have significant bearing
on economic output and on the sustenance of social networks. The time
allocation problem motivates our formal analysis of the resource allocation
game, where agents on a social network, who have asymmetric, private
interaction preferences, make decisions on how to allocate time, a bounded
endowment, over their neighbors. Unlike the well-known opinion formation game
on a social network, our game appears not to be a potential game, and the
Best-Response dynamics is non-differentiable, making the analysis of
Best-Response dynamics non-trivial.
In our game, we consider two types of player behavior, namely optimistic or
pessimistic, based on how they use their time endowment over their neighbors.
To analyze Best-Response dynamics, we circumvent the problem of the game not
being a potential game, through the lens of a novel two-level potential
function approach. We show that the Best-Response dynamics converges point-wise
to a Nash Equilibrium when players are all optimistic, all pessimistic, or a
mix of both types. Finally, we show that the Nash Equilibrium set is non-convex but
connected, and Price of Anarchy is unbounded while Price of Stability is one.
Extensive simulations over a stylized grid reveal that the distribution of the
quality of the convergence points is unimodal; we conjecture that the presence
of unimodality is tied to the connectedness of the Nash Equilibrium set.
",vasilis livanos,,2018.0,,arXiv,Lee2018,True,,arXiv,Not available,"Resource Allocation Game on Social Networks: Best Response Dynamics and
Convergence",c590901f0ccf64a552f87197644a0df9,http://arxiv.org/abs/1808.08260v1
16210," The decisions that human beings make to allocate time have significant bearing
on economic output and on the sustenance of social networks. The time
allocation problem motivates our formal analysis of the resource allocation
game, where agents on a social network, who have asymmetric, private
interaction preferences, make decisions on how to allocate time, a bounded
endowment, over their neighbors. Unlike the well-known opinion formation game
on a social network, our game appears not to be a potential game, and the
Best-Response dynamics is non-differentiable, making the analysis of
Best-Response dynamics non-trivial.
In our game, we consider two types of player behavior, namely optimistic or
pessimistic, based on how they use their time endowment over their neighbors.
To analyze Best-Response dynamics, we circumvent the problem of the game not
being a potential game, through the lens of a novel two-level potential
function approach. We show that the Best-Response dynamics converges point-wise
to a Nash Equilibrium when players are all optimistic, all pessimistic, or a
mix of both types. Finally, we show that the Nash Equilibrium set is non-convex but
connected, and Price of Anarchy is unbounded while Price of Stability is one.
Extensive simulations over a stylized grid reveal that the distribution of the
quality of the convergence points is unimodal; we conjecture that the presence
of unimodality is tied to the connectedness of the Nash Equilibrium set.
",ruta mehta,,2018.0,,arXiv,Lee2018,True,,arXiv,Not available,"Resource Allocation Game on Social Networks: Best Response Dynamics and
Convergence",c590901f0ccf64a552f87197644a0df9,http://arxiv.org/abs/1808.08260v1
16211," The decisions that human beings make to allocate time have significant bearing
on economic output and on the sustenance of social networks. The time
allocation problem motivates our formal analysis of the resource allocation
game, where agents on a social network, who have asymmetric, private
interaction preferences, make decisions on how to allocate time, a bounded
endowment, over their neighbors. Unlike the well-known opinion formation game
on a social network, our game appears not to be a potential game, and the
Best-Response dynamics is non-differentiable, making the analysis of
Best-Response dynamics non-trivial.
In our game, we consider two types of player behavior, namely optimistic or
pessimistic, based on how they use their time endowment over their neighbors.
To analyze Best-Response dynamics, we circumvent the problem of the game not
being a potential game, through the lens of a novel two-level potential
function approach. We show that the Best-Response dynamics converges point-wise
to a Nash Equilibrium when players are all optimistic, all pessimistic, or a
mix of both types. Finally, we show that the Nash Equilibrium set is non-convex but
connected, and Price of Anarchy is unbounded while Price of Stability is one.
Extensive simulations over a stylized grid reveal that the distribution of the
quality of the convergence points is unimodal; we conjecture that the presence
of unimodality is tied to the connectedness of the Nash Equilibrium set.
",hari sundaram,,2018.0,,arXiv,Lee2018,True,,arXiv,Not available,"Resource Allocation Game on Social Networks: Best Response Dynamics and
Convergence",c590901f0ccf64a552f87197644a0df9,http://arxiv.org/abs/1808.08260v1
16212," We present a polynomial-time algorithm for computing $d^{d+o(d)}$-approximate
(pure) Nash equilibria in weighted congestion games with polynomial cost
functions of degree at most $d$. This is an exponential improvement in the
approximation factor over the previously best algorithm. An
appealing additional feature of our algorithm is that it uses only
best-improvement steps in the actual game, as opposed to earlier approaches
that first had to transform the game itself. Our algorithm is an adaptation of
the seminal algorithm by Caragiannis et al. [FOCS'11, TEAC 2015], but we
utilize an approximate potential function directly on the original game instead
of an exact one on a modified game.
A critical component of our analysis, which is of independent interest, is
the derivation of a novel bound of $[d/\mathcal{W}(d/\rho)]^{d+1}$ for the
Price of Anarchy (PoA) of $\rho$-approximate equilibria in weighted congestion
games, where $\mathcal{W}$ is the Lambert-W function. More specifically, we
show that this PoA is exactly equal to $\Phi_{d,\rho}^{d+1}$, where
$\Phi_{d,\rho}$ is the unique positive solution of the equation $\rho
(x+1)^d=x^{d+1}$. Our upper bound is derived via a smoothness-like argument,
and thus holds even for mixed Nash and correlated equilibria, while our lower
bound is simple enough to apply even to singleton congestion games.
",yiannis giannakopoulos,,2018.0,,arXiv,Giannakopoulos2018,True,,arXiv,Not available,"An Improved Algorithm for Computing Approximate Equilibria in Weighted
Congestion Games",f12ec24a6248798c6c2a9888b6be8dab,http://arxiv.org/abs/1810.12806v2
16213," We present a polynomial-time algorithm for computing $d^{d+o(d)}$-approximate
(pure) Nash equilibria in weighted congestion games with polynomial cost
functions of degree at most $d$. This is an exponential improvement in the
approximation factor over the previously best algorithm. An
appealing additional feature of our algorithm is that it uses only
best-improvement steps in the actual game, as opposed to earlier approaches
that first had to transform the game itself. Our algorithm is an adaptation of
the seminal algorithm by Caragiannis et al. [FOCS'11, TEAC 2015], but we
utilize an approximate potential function directly on the original game instead
of an exact one on a modified game.
A critical component of our analysis, which is of independent interest, is
the derivation of a novel bound of $[d/\mathcal{W}(d/\rho)]^{d+1}$ for the
Price of Anarchy (PoA) of $\rho$-approximate equilibria in weighted congestion
games, where $\mathcal{W}$ is the Lambert-W function. More specifically, we
show that this PoA is exactly equal to $\Phi_{d,\rho}^{d+1}$, where
$\Phi_{d,\rho}$ is the unique positive solution of the equation $\rho
(x+1)^d=x^{d+1}$. Our upper bound is derived via a smoothness-like argument,
and thus holds even for mixed Nash and correlated equilibria, while our lower
bound is simple enough to apply even to singleton congestion games.
",georgy noarov,,2018.0,,arXiv,Giannakopoulos2018,True,,arXiv,Not available,"An Improved Algorithm for Computing Approximate Equilibria in Weighted
Congestion Games",f12ec24a6248798c6c2a9888b6be8dab,http://arxiv.org/abs/1810.12806v2
16214," We present a polynomial-time algorithm for computing $d^{d+o(d)}$-approximate
(pure) Nash equilibria in weighted congestion games with polynomial cost
functions of degree at most $d$. This is an exponential improvement in the
approximation factor over the previously best algorithm. An
appealing additional feature of our algorithm is that it uses only
best-improvement steps in the actual game, as opposed to earlier approaches
that first had to transform the game itself. Our algorithm is an adaptation of
the seminal algorithm by Caragiannis et al. [FOCS'11, TEAC 2015], but we
utilize an approximate potential function directly on the original game instead
of an exact one on a modified game.
A critical component of our analysis, which is of independent interest, is
the derivation of a novel bound of $[d/\mathcal{W}(d/\rho)]^{d+1}$ for the
Price of Anarchy (PoA) of $\rho$-approximate equilibria in weighted congestion
games, where $\mathcal{W}$ is the Lambert-W function. More specifically, we
show that this PoA is exactly equal to $\Phi_{d,\rho}^{d+1}$, where
$\Phi_{d,\rho}$ is the unique positive solution of the equation $\rho
(x+1)^d=x^{d+1}$. Our upper bound is derived via a smoothness-like argument,
and thus holds even for mixed Nash and correlated equilibria, while our lower
bound is simple enough to apply even to singleton congestion games.
",andreas schulz,,2018.0,,arXiv,Giannakopoulos2018,True,,arXiv,Not available,"An Improved Algorithm for Computing Approximate Equilibria in Weighted
Congestion Games",f12ec24a6248798c6c2a9888b6be8dab,http://arxiv.org/abs/1810.12806v2
16215," In applied game theory the motivation of players is a key element. It is
encoded in the payoffs of the game form and often based on utility functions.
But there are cases where formal descriptions in the form of a utility function
do not exist. In this paper we introduce a representation of games where
players' goals are modeled based on so-called higher-order functions. Our
representation provides a general and powerful way to mathematically summarize
players' intentions. In our framework utility functions as well as preference
relations are special cases used to describe players' goals. Using a classical
example, a variant of Keynes' beauty contest, we show that formal descriptions
of players as higher-order functions may still exist where utility functions
do not. We also show that equilibrium conditions based on Nash can be easily
adapted to our framework. Lastly, this framework serves as a stepping stone to
powerful tools from computer science that can be usefully applied to economic
game theory in the future, such as computational and computability aspects.
",jules hedges,,2015.0,,arXiv,Hedges2015,True,,arXiv,Not available,Higher-Order Game Theory,cc6adb53a5daa9bdb2bf0b04661f0d85,http://arxiv.org/abs/1506.01002v2
16216," In applied game theory the motivation of players is a key element. It is
encoded in the payoffs of the game form and often based on utility functions.
But there are cases where formal descriptions in the form of a utility function
do not exist. In this paper we introduce a representation of games where
players' goals are modeled based on so-called higher-order functions. Our
representation provides a general and powerful way to mathematically summarize
players' intentions. In our framework utility functions as well as preference
relations are special cases used to describe players' goals. Using a classical
example, a variant of Keynes' beauty contest, we show that formal descriptions
of players as higher-order functions may still exist where utility functions
do not. We also show that equilibrium conditions based on Nash can be easily
adapted to our framework. Lastly, this framework serves as a stepping stone to
powerful tools from computer science that can be usefully applied to economic
game theory in the future, such as computational and computability aspects.
",paulo oliva,,2015.0,,arXiv,Hedges2015,True,,arXiv,Not available,Higher-Order Game Theory,cc6adb53a5daa9bdb2bf0b04661f0d85,http://arxiv.org/abs/1506.01002v2
16217," In applied game theory the motivation of players is a key element. It is
encoded in the payoffs of the game form and often based on utility functions.
But there are cases where formal descriptions in the form of a utility function
do not exist. In this paper we introduce a representation of games where
players' goals are modeled based on so-called higher-order functions. Our
representation provides a general and powerful way to mathematically summarize
players' intentions. In our framework utility functions as well as preference
relations are special cases to describe players' goals. We show that, with
higher-order functions, formal descriptions of players may still exist where
utility functions do not, using a classical example, a variant of Keynes' beauty
contest. We also show that equilibrium conditions based on Nash can be easily
adapted to our framework. Lastly, this framework serves as a stepping stone to
powerful tools from computer science that can be usefully applied to economic
game theory in the future, such as computational and computability aspects.
",evguenia sprits,,2015.0,,arXiv,Hedges2015,True,,arXiv,Not available,Higher-Order Game Theory,cc6adb53a5daa9bdb2bf0b04661f0d85,http://arxiv.org/abs/1506.01002v2
16218," We consider a multilevel network game, where nodes can improve their
communication costs by connecting to a high-speed network. The $n$ nodes are
connected by a static network and each node can decide individually to become a
gateway to the high-speed network. The goal of a node $v$ is to minimize its
private costs, i.e., the sum (SUM-game) or maximum (MAX-game) of communication
distances from $v$ to all other nodes plus a fixed price $\alpha > 0$ if it
decides to be a gateway. Between gateways the communication distance is $0$,
and gateways also improve other nodes' distances by behaving as shortcuts. For
the SUM-game, we show that for $\alpha \leq n-1$, the price of anarchy is
$\Theta(n/\sqrt{\alpha})$ and in this range equilibria always exist. In the range
$\alpha \in (n-1,n(n-1))$, the price of anarchy is $\Theta(\sqrt{\alpha})$, and
for $\alpha \geq n(n-1)$ it is constant. For the MAX-game, we show that the
price of anarchy is either $\Theta(1 + n/\sqrt{\alpha})$, for $\alpha\geq 1$,
or else $1$. Given a graph with girth of at least $4\alpha$, equilibria always
exist. Concerning the dynamics, neither the SUM-game nor the MAX-game is a
potential game. For the SUM-game, we even show that it is not weakly acyclic.
",andreas cord-landwehr,,2014.0,,arXiv,Abshoff2014,True,,arXiv,Not available,Multilevel Network Games,f191d00dcd151d7e14cb47baece4cc6f,http://arxiv.org/abs/1409.5383v1
16219," In applied game theory the motivation of players is a key element. It is
encoded in the payoffs of the game form and often based on utility functions.
But there are cases where formal descriptions in the form of a utility function
do not exist. In this paper we introduce a representation of games where
players' goals are modeled based on so-called higher-order functions. Our
representation provides a general and powerful way to mathematically summarize
players' intentions. In our framework utility functions as well as preference
relations are special cases to describe players' goals. We show that, with
higher-order functions, formal descriptions of players may still exist where
utility functions do not, using a classical example, a variant of Keynes' beauty
contest. We also show that equilibrium conditions based on Nash can be easily
adapted to our framework. Lastly, this framework serves as a stepping stone to
powerful tools from computer science that can be usefully applied to economic
game theory in the future, such as computational and computability aspects.
",viktor winschel,,2015.0,,arXiv,Hedges2015,True,,arXiv,Not available,Higher-Order Game Theory,cc6adb53a5daa9bdb2bf0b04661f0d85,http://arxiv.org/abs/1506.01002v2
16220," In applied game theory the motivation of players is a key element. It is
encoded in the payoffs of the game form and often based on utility functions.
But there are cases where formal descriptions in the form of a utility function
do not exist. In this paper we introduce a representation of games where
players' goals are modeled based on so-called higher-order functions. Our
representation provides a general and powerful way to mathematically summarize
players' intentions. In our framework utility functions as well as preference
relations are special cases to describe players' goals. We show that, with
higher-order functions, formal descriptions of players may still exist where
utility functions do not, using a classical example, a variant of Keynes' beauty
contest. We also show that equilibrium conditions based on Nash can be easily
adapted to our framework. Lastly, this framework serves as a stepping stone to
powerful tools from computer science that can be usefully applied to economic
game theory in the future, such as computational and computability aspects.
",philipp zahn,,2015.0,,arXiv,Hedges2015,True,,arXiv,Not available,Higher-Order Game Theory,cc6adb53a5daa9bdb2bf0b04661f0d85,http://arxiv.org/abs/1506.01002v2
16221," This paper has a twofold scope. The first is to clarify and
highlight the isomorphic character of two theories developed in quite different
fields: threshold logic on one side, and simple games on the other. One of
the main purposes in both theories is to determine when a simple game is
representable as a weighted game, which allows a very compact and easily
comprehensible representation. Deep results were found in threshold logic in
the sixties and seventies for this problem. However, game theory has taken the
lead and some new results have been obtained for the problem in the last two
decades. The second and main goal of this paper is to provide some new results
on this problem and propose several open questions and conjectures for future
research. The results we obtain depend on two significant parameters of the
game: the number of types of equivalent players and the number of types of
shift-minimal winning coalitions.
",josep freixas,,2016.0,10.1007/s11238-017-9606-z,arXiv,Freixas2016,True,,arXiv,Not available,"Characterization of threshold functions: state of the art, some new
contributions and open problems",2e007d9f57ac1ad33fd7c74f69732ce3,http://arxiv.org/abs/1603.00329v2
16222," This paper has a twofold scope. The first is to clarify and
highlight the isomorphic character of two theories developed in quite different
fields: threshold logic on one side, and simple games on the other. One of
the main purposes in both theories is to determine when a simple game is
representable as a weighted game, which allows a very compact and easily
comprehensible representation. Deep results were found in threshold logic in
the sixties and seventies for this problem. However, game theory has taken the
lead and some new results have been obtained for the problem in the last two
decades. The second and main goal of this paper is to provide some new results
on this problem and propose several open questions and conjectures for future
research. The results we obtain depend on two significant parameters of the
game: the number of types of equivalent players and the number of types of
shift-minimal winning coalitions.
",marc freixas,,2016.0,10.1007/s11238-017-9606-z,arXiv,Freixas2016,True,,arXiv,Not available,"Characterization of threshold functions: state of the art, some new
contributions and open problems",2e007d9f57ac1ad33fd7c74f69732ce3,http://arxiv.org/abs/1603.00329v2
16223," This paper has a twofold scope. The first is to clarify and
highlight the isomorphic character of two theories developed in quite different
fields: threshold logic on one side, and simple games on the other. One of
the main purposes in both theories is to determine when a simple game is
representable as a weighted game, which allows a very compact and easily
comprehensible representation. Deep results were found in threshold logic in
the sixties and seventies for this problem. However, game theory has taken the
lead and some new results have been obtained for the problem in the last two
decades. The second and main goal of this paper is to provide some new results
on this problem and propose several open questions and conjectures for future
research. The results we obtain depend on two significant parameters of the
game: the number of types of equivalent players and the number of types of
shift-minimal winning coalitions.
",sascha kurz,,2016.0,10.1007/s11238-017-9606-z,arXiv,Freixas2016,True,,arXiv,Not available,"Characterization of threshold functions: state of the art, some new
contributions and open problems",2e007d9f57ac1ad33fd7c74f69732ce3,http://arxiv.org/abs/1603.00329v2
16224," The minimum-effort coordination game, having potentially important
implications in both evolutionary biology and sociology, has recently drawn
more attention because human behavior in this social dilemma is often
inconsistent with the predictions of classic game theory. In the framework of
classic game theory, any common effort level is a strict and trembling hand
perfect Nash equilibrium, so that no desideratum is provided for selecting
among them. Behavioral experiments, however, show that the effort levels employed
by subjects are inversely related to the effort costs. Here, we combine
coalescence theory and evolutionary game theory to investigate this game in
finite populations. Both analytic results and individual-based simulations show
that effort costs play a key role in the evolution of contribution levels,
which is in good agreement with those observed experimentally. Besides
well-mixed populations, set structured populations, where the population
structure itself is a consequence of the evolutionary process, have also been
taken into consideration. Therein we find that a large number of sets and
moderate migration rate greatly promote effort levels, especially for high
effort costs. Our results may provide theoretical explanations for coordination
behaviors observed in real life from an evolutionary perspective.
",kun li,,2016.0,,arXiv,Li2016,True,,arXiv,Not available,Stochastic evolutionary dynamics of minimum-effort coordination games,4d7ab34b1d569ca1b4cf4c114d7faca7,http://arxiv.org/abs/1603.06114v1
16225," The minimum-effort coordination game, having potentially important
implications in both evolutionary biology and sociology, has recently drawn
more attention because human behavior in this social dilemma is often
inconsistent with the predictions of classic game theory. In the framework of
classic game theory, any common effort level is a strict and trembling hand
perfect Nash equilibrium, so that no desideratum is provided for selecting
among them. Behavioral experiments, however, show that the effort levels employed
by subjects are inversely related to the effort costs. Here, we combine
coalescence theory and evolutionary game theory to investigate this game in
finite populations. Both analytic results and individual-based simulations show
that effort costs play a key role in the evolution of contribution levels,
which is in good agreement with those observed experimentally. Besides
well-mixed populations, set structured populations, where the population
structure itself is a consequence of the evolutionary process, have also been
taken into consideration. Therein we find that a large number of sets and
moderate migration rate greatly promote effort levels, especially for high
effort costs. Our results may provide theoretical explanations for coordination
behaviors observed in real life from an evolutionary perspective.
",rui cong,,2016.0,,arXiv,Li2016,True,,arXiv,Not available,Stochastic evolutionary dynamics of minimum-effort coordination games,4d7ab34b1d569ca1b4cf4c114d7faca7,http://arxiv.org/abs/1603.06114v1
16226," The minimum-effort coordination game, having potentially important
implications in both evolutionary biology and sociology, has recently drawn
more attention because human behavior in this social dilemma is often
inconsistent with the predictions of classic game theory. In the framework of
classic game theory, any common effort level is a strict and trembling hand
perfect Nash equilibrium, so that no desideratum is provided for selecting
among them. Behavioral experiments, however, show that the effort levels employed
by subjects are inversely related to the effort costs. Here, we combine
coalescence theory and evolutionary game theory to investigate this game in
finite populations. Both analytic results and individual-based simulations show
that effort costs play a key role in the evolution of contribution levels,
which is in good agreement with those observed experimentally. Besides
well-mixed populations, set structured populations, where the population
structure itself is a consequence of the evolutionary process, have also been
taken into consideration. Therein we find that a large number of sets and
moderate migration rate greatly promote effort levels, especially for high
effort costs. Our results may provide theoretical explanations for coordination
behaviors observed in real life from an evolutionary perspective.
",long wang,,2016.0,,arXiv,Li2016,True,,arXiv,Not available,Stochastic evolutionary dynamics of minimum-effort coordination games,4d7ab34b1d569ca1b4cf4c114d7faca7,http://arxiv.org/abs/1603.06114v1
16227," A communication game consists of distributed parties attempting to jointly
complete a task with restricted communication. Such games are useful tools for
studying limitations of physical theories. A theory exhibits preparation
contextuality whenever its predictions cannot be explained by a preparation
noncontextual model. Here, we show that communication games performed in
operational theories reveal the preparation contextuality of that theory. For
statistics obtained in a particular family of communication games, we show a
direct correspondence with correlations in space-like separated events obeying
the no-signaling principle. Using this, we prove that all mixed quantum states
of any finite dimension are preparation contextual. We report on an
experimental realization of a communication game involving three-level quantum
systems from which we observe a strong violation of the constraints of
preparation noncontextuality.
",alley hameedi,,2017.0,10.1103/PhysRevLett.119.220402,"Phys. Rev. Lett. 119, 220402 (2017)",Hameedi2017,True,,arXiv,Not available,Communication games reveal preparation contextuality,0766c9ab64d969eb2d3376a1b96674c8,http://arxiv.org/abs/1704.08223v3
16228," A communication game consists of distributed parties attempting to jointly
complete a task with restricted communication. Such games are useful tools for
studying limitations of physical theories. A theory exhibits preparation
contextuality whenever its predictions cannot be explained by a preparation
noncontextual model. Here, we show that communication games performed in
operational theories reveal the preparation contextuality of that theory. For
statistics obtained in a particular family of communication games, we show a
direct correspondence with correlations in space-like separated events obeying
the no-signaling principle. Using this, we prove that all mixed quantum states
of any finite dimension are preparation contextual. We report on an
experimental realization of a communication game involving three-level quantum
systems from which we observe a strong violation of the constraints of
preparation noncontextuality.
",armin tavakoli,,2017.0,10.1103/PhysRevLett.119.220402,"Phys. Rev. Lett. 119, 220402 (2017)",Hameedi2017,True,,arXiv,Not available,Communication games reveal preparation contextuality,0766c9ab64d969eb2d3376a1b96674c8,http://arxiv.org/abs/1704.08223v3
16229," Evolutionarily stable strategy (ESS) is a key concept in evolutionary game
theory. ESS provides an evolutionary stability criterion for biological, social
and economical behaviors. In this paper, we develop a new approach to evaluate
ESS in symmetric two-player games with fuzzy payoffs. In particular, every
strategy is assigned a fuzzy membership that describes to what degree it is an
ESS in the presence of uncertainty. The fuzzy set of ESS characterizes the nature of
ESS. The proposed approach avoids the loss of information caused by the
defuzzification method in games and handles uncertainty of payoffs through all
steps of finding an ESS. We use the satisfaction function to compare fuzzy
payoffs, and adopt the fuzzy decision rule to obtain the membership function
of the fuzzy set of ESS. The theorem shows the relation between fuzzy ESS and
fuzzy Nash equilibrium. The numerical results illustrate that the proposed method is
an appropriate generalization of ESS to fuzzy payoff games.
",haozhen situ,,2015.0,,arXiv,Situ2015,True,,arXiv,Not available,Evolutionary Stable Strategies in Games with Fuzzy Payoffs,0a88009f89c5e7562561e6cb2bd31e45,http://arxiv.org/abs/1501.04265v2
16230," We consider a multilevel network game, where nodes can improve their
communication costs by connecting to a high-speed network. The $n$ nodes are
connected by a static network and each node can decide individually to become a
gateway to the high-speed network. The goal of a node $v$ is to minimize its
private costs, i.e., the sum (SUM-game) or maximum (MAX-game) of communication
distances from $v$ to all other nodes plus a fixed price $\alpha > 0$ if it
decides to be a gateway. Between gateways the communication distance is $0$,
and gateways also improve other nodes' distances by behaving as shortcuts. For
the SUM-game, we show that for $\alpha \leq n-1$, the price of anarchy is
$\Theta(n/\sqrt{\alpha})$ and in this range equilibria always exist. In the range
$\alpha \in (n-1,n(n-1))$, the price of anarchy is $\Theta(\sqrt{\alpha})$, and
for $\alpha \geq n(n-1)$ it is constant. For the MAX-game, we show that the
price of anarchy is either $\Theta(1 + n/\sqrt{\alpha})$, for $\alpha\geq 1$,
or else $1$. Given a graph with girth of at least $4\alpha$, equilibria always
exist. Concerning the dynamics, neither the SUM-game nor the MAX-game is a
potential game. For the SUM-game, we even show that it is not weakly acyclic.
",daniel jung,,2014.0,,arXiv,Abshoff2014,True,,arXiv,Not available,Multilevel Network Games,f191d00dcd151d7e14cb47baece4cc6f,http://arxiv.org/abs/1409.5383v1
16231," A communication game consists of distributed parties attempting to jointly
complete a task with restricted communication. Such games are useful tools for
studying limitations of physical theories. A theory exhibits preparation
contextuality whenever its predictions cannot be explained by a preparation
noncontextual model. Here, we show that communication games performed in
operational theories reveal the preparation contextuality of that theory. For
statistics obtained in a particular family of communication games, we show a
direct correspondence with correlations in space-like separated events obeying
the no-signaling principle. Using this, we prove that all mixed quantum states
of any finite dimension are preparation contextual. We report on an
experimental realization of a communication game involving three-level quantum
systems from which we observe a strong violation of the constraints of
preparation noncontextuality.
",breno marques,,2017.0,10.1103/PhysRevLett.119.220402,"Phys. Rev. Lett. 119, 220402 (2017)",Hameedi2017,True,,arXiv,Not available,Communication games reveal preparation contextuality,0766c9ab64d969eb2d3376a1b96674c8,http://arxiv.org/abs/1704.08223v3
16232," A communication game consists of distributed parties attempting to jointly
complete a task with restricted communication. Such games are useful tools for
studying limitations of physical theories. A theory exhibits preparation
contextuality whenever its predictions cannot be explained by a preparation
noncontextual model. Here, we show that communication games performed in
operational theories reveal the preparation contextuality of that theory. For
statistics obtained in a particular family of communication games, we show a
direct correspondence with correlations in space-like separated events obeying
the no-signaling principle. Using this, we prove that all mixed quantum states
of any finite dimension are preparation contextual. We report on an
experimental realization of a communication game involving three-level quantum
systems from which we observe a strong violation of the constraints of
preparation noncontextuality.
",mohamed bourennane,,2017.0,10.1103/PhysRevLett.119.220402,"Phys. Rev. Lett. 119, 220402 (2017)",Hameedi2017,True,,arXiv,Not available,Communication games reveal preparation contextuality,0766c9ab64d969eb2d3376a1b96674c8,http://arxiv.org/abs/1704.08223v3
16233," Due to spectrum reuse in small cell networks, the inter-cell interference
has a great effect on MEC's performance. In this paper, to reduce the energy
consumption and latency of MEC, we propose a game-theory-based joint
offloading-decision and resource-allocation algorithm for multi-user MEC. In
this algorithm, the transmission power, offloading decision, and mobile user's
CPU capability are determined jointly. We prove that this game is an exact
potential game and the NE of this game exists and is unique. For reaching the
NE, the best response dynamic is applied. We calculate the best responses of
these three variables. Moreover, we investigate the properties of this
algorithm, including the convergence, the computation complexity, and the price
of anarchy. The theoretical analysis shows that the inter-cell interference has a
great effect on the performance of MEC. The NE of this game is Pareto
efficient and is also the global optimal solution of the proposed optimization problem.
Finally, we evaluate the performance of this algorithm by simulation. The
simulation results illustrate that this algorithm is effective in improving
the performance of the multi-user MEC system.
",ning li,,2018.0,,arXiv,Li2018,True,,arXiv,Not available,"Distributed Joint Offloading Decision and Resource Allocation for
Multi-User Mobile Edge Computing: A Game Theory Approach",2d76f526648cb3d80630f29474b9d2fb,http://arxiv.org/abs/1805.02182v1
16234," Due to spectrum reuse in small cell networks, the inter-cell interference
has a great effect on MEC's performance. In this paper, to reduce the energy
consumption and latency of MEC, we propose a game-theory-based joint
offloading-decision and resource-allocation algorithm for multi-user MEC. In
this algorithm, the transmission power, offloading decision, and mobile user's
CPU capability are determined jointly. We prove that this game is an exact
potential game and the NE of this game exists and is unique. For reaching the
NE, the best response dynamic is applied. We calculate the best responses of
these three variables. Moreover, we investigate the properties of this
algorithm, including the convergence, the computation complexity, and the price
of anarchy. The theoretical analysis shows that the inter-cell interference has a
great effect on the performance of MEC. The NE of this game is Pareto
efficient and is also the global optimal solution of the proposed optimization problem.
Finally, we evaluate the performance of this algorithm by simulation. The
simulation results illustrate that this algorithm is effective in improving
the performance of the multi-user MEC system.
",jose-fernan martinez-ortega,,2018.0,,arXiv,Li2018,True,,arXiv,Not available,"Distributed Joint Offloading Decision and Resource Allocation for
Multi-User Mobile Edge Computing: A Game Theory Approach",2d76f526648cb3d80630f29474b9d2fb,http://arxiv.org/abs/1805.02182v1
16235," Due to spectrum reuse in small cell networks, the inter-cell interference
has a great effect on MEC's performance. In this paper, to reduce the energy
consumption and latency of MEC, we propose a game-theory-based joint
offloading-decision and resource-allocation algorithm for multi-user MEC. In
this algorithm, the transmission power, offloading decision, and mobile user's
CPU capability are determined jointly. We prove that this game is an exact
potential game and the NE of this game exists and is unique. For reaching the
NE, the best response dynamic is applied. We calculate the best responses of
these three variables. Moreover, we investigate the properties of this
algorithm, including the convergence, the computation complexity, and the price
of anarchy. The theoretical analysis shows that the inter-cell interference has a
great effect on the performance of MEC. The NE of this game is Pareto
efficient and is also the global optimal solution of the proposed optimization problem.
Finally, we evaluate the performance of this algorithm by simulation. The
simulation results illustrate that this algorithm is effective in improving
the performance of the multi-user MEC system.
",gregorio rubio,,2018.0,,arXiv,Li2018,True,,arXiv,Not available,"Distributed Joint Offloading Decision and Resource Allocation for
Multi-User Mobile Edge Computing: A Game Theory Approach",2d76f526648cb3d80630f29474b9d2fb,http://arxiv.org/abs/1805.02182v1
16236," Generating good revenue is one of the most important problems in Bayesian
auction design, and many (approximately) optimal dominant-strategy incentive
compatible (DSIC) Bayesian mechanisms have been constructed for various auction
settings. However, most existing studies do not consider the complexity for the
seller to carry out the mechanism. It is assumed that the seller knows ""each
single bit"" of the distributions and is able to optimize perfectly based on the
entire distributions. Unfortunately, this is a strong assumption and may not
hold in reality: for example, when the value distributions have exponentially
large supports or do not have succinct representations.
In this work we consider, for the first time, the query complexity of
Bayesian mechanisms. We only allow the seller to have limited oracle accesses
to the players' value distributions, via quantile queries and value queries.
For a large class of auction settings, we prove logarithmic lower-bounds for
the query complexity for any DSIC Bayesian mechanism to be of any constant
approximation to the optimal revenue. For single-item auctions and multi-item
auctions with unit-demand or additive valuation functions, we prove tight
upper-bounds via efficient query schemes, without requiring the distributions
to be regular or have monotone hazard rate. Thus, in those auction settings the
seller needs to access much less than the full distributions in order to
achieve approximately optimal revenue.
",jing chen,,2018.0,,arXiv,Chen2018,True,,arXiv,Not available,Bayesian Auctions with Efficient Queries,ae432a2715c85dd7d7f06e9a0d6bb12d,http://arxiv.org/abs/1804.07451v1
16237," Generating good revenue is one of the most important problems in Bayesian
auction design, and many (approximately) optimal dominant-strategy incentive
compatible (DSIC) Bayesian mechanisms have been constructed for various auction
settings. However, most existing studies do not consider the complexity for the
seller to carry out the mechanism. It is assumed that the seller knows ""each
single bit"" of the distributions and is able to optimize perfectly based on the
entire distributions. Unfortunately, this is a strong assumption and may not
hold in reality: for example, when the value distributions have exponentially
large supports or do not have succinct representations.
In this work we consider, for the first time, the query complexity of
Bayesian mechanisms. We only allow the seller to have limited oracle accesses
to the players' value distributions, via quantile queries and value queries.
For a large class of auction settings, we prove logarithmic lower-bounds for
the query complexity for any DSIC Bayesian mechanism to be of any constant
approximation to the optimal revenue. For single-item auctions and multi-item
auctions with unit-demand or additive valuation functions, we prove tight
upper-bounds via efficient query schemes, without requiring the distributions
to be regular or have monotone hazard rate. Thus, in those auction settings the
seller needs to access much less than the full distributions in order to
achieve approximately optimal revenue.
",bo li,,2018.0,,arXiv,Chen2018,True,,arXiv,Not available,Bayesian Auctions with Efficient Queries,ae432a2715c85dd7d7f06e9a0d6bb12d,http://arxiv.org/abs/1804.07451v1
16238," Generating good revenue is one of the most important problems in Bayesian
auction design, and many (approximately) optimal dominant-strategy incentive
compatible (DSIC) Bayesian mechanisms have been constructed for various auction
settings. However, most existing studies do not consider the complexity for the
seller to carry out the mechanism. It is assumed that the seller knows ""each
single bit"" of the distributions and is able to optimize perfectly based on the
entire distributions. Unfortunately, this is a strong assumption and may not
hold in reality: for example, when the value distributions have exponentially
large supports or do not have succinct representations.
In this work we consider, for the first time, the query complexity of
Bayesian mechanisms. We only allow the seller to have limited oracle accesses
to the players' value distributions, via quantile queries and value queries.
For a large class of auction settings, we prove logarithmic lower-bounds for
the query complexity for any DSIC Bayesian mechanism to be of any constant
approximation to the optimal revenue. For single-item auctions and multi-item
auctions with unit-demand or additive valuation functions, we prove tight
upper-bounds via efficient query schemes, without requiring the distributions
to be regular or have monotone hazard rate. Thus, in those auction settings the
seller needs to access much less than the full distributions in order to
achieve approximately optimal revenue.
",yingkai li,,2018.0,,arXiv,Chen2018,True,,arXiv,Not available,Bayesian Auctions with Efficient Queries,ae432a2715c85dd7d7f06e9a0d6bb12d,http://arxiv.org/abs/1804.07451v1
16239," Generating good revenue is one of the most important problems in Bayesian
auction design, and many (approximately) optimal dominant-strategy incentive
compatible (DSIC) Bayesian mechanisms have been constructed for various auction
settings. However, most existing studies do not consider the complexity for the
seller to carry out the mechanism. It is assumed that the seller knows ""each
single bit"" of the distributions and is able to optimize perfectly based on the
entire distributions. Unfortunately, this is a strong assumption and may not
hold in reality: for example, when the value distributions have exponentially
large supports or do not have succinct representations.
In this work we consider, for the first time, the query complexity of
Bayesian mechanisms. We only allow the seller to have limited oracle accesses
to the players' value distributions, via quantile queries and value queries.
For a large class of auction settings, we prove logarithmic lower-bounds for
the query complexity for any DSIC Bayesian mechanism to be of any constant
approximation to the optimal revenue. For single-item auctions and multi-item
auctions with unit-demand or additive valuation functions, we prove tight
upper-bounds via efficient query schemes, without requiring the distributions
to be regular or have monotone hazard rate. Thus, in those auction settings the
seller needs to access much less than the full distributions in order to
achieve approximately optimal revenue.
",pinyan lu,,2018.0,,arXiv,Chen2018,True,,arXiv,Not available,Bayesian Auctions with Efficient Queries,ae432a2715c85dd7d7f06e9a0d6bb12d,http://arxiv.org/abs/1804.07451v1
16240," Sponsored search in E-commerce platforms such as Amazon, Taobao and Tmall
provides sellers an effective way to reach potential buyers with the most
relevant purchase intent. In this paper, we study the auction mechanism optimization problem in
sponsored search on Alibaba's mobile E-commerce platform. Besides generating
revenue, we are supposed to maintain an efficient marketplace with plenty of
quality users, guarantee a reasonable return on investment (ROI) for
advertisers, and meanwhile, facilitate a pleasant shopping experience for the
users. These requirements essentially pose a constrained optimization problem.
Directly optimizing over auction parameters yields a discontinuous, non-convex
problem that denies effective solutions. One of our major contributions is a
practical convex optimization formulation of the original problem. We devise a
novel re-parametrization of auction mechanism with discrete sets of
representative instances. To construct the optimization problem, we build an
auction simulation system which estimates the resulting business indicators of
the selected parameters by replaying the auctions recorded from real online
requests. We summarize the experiments on real search traffic to analyze the
effects of fidelity of auction simulation, the efficacy under various
constraint targets and the influence of regularization. The experiment results
show that with proper entropy regularization, we are able to maximize revenue
while constraining other business indicators within given ranges.
",gang bai,,2018.0,,arXiv,Bai2018,True,,arXiv,Not available,"Practical Constrained Optimization of Auction Mechanisms in E-Commerce
Sponsored Search Advertising",84850ad79c8e9f9921fe4c6a27dd60b1,http://arxiv.org/abs/1807.11790v1
16241," We consider a multilevel network game, where nodes can improve their
communication costs by connecting to a high-speed network. The $n$ nodes are
connected by a static network and each node can decide individually to become a
gateway to the high-speed network. The goal of a node $v$ is to minimize its
private costs, i.e., the sum (SUM-game) or maximum (MAX-game) of communication
distances from $v$ to all other nodes plus a fixed price $\alpha > 0$ if it
decides to be a gateway. Between gateways the communication distance is $0$,
and gateways also improve other nodes' distances by behaving as shortcuts. For
the SUM-game, we show that for $\alpha \leq n-1$, the price of anarchy is
$\Theta(n/\sqrt{\alpha})$ and in this range equilibria always exist. In range
$\alpha \in (n-1,n(n-1))$ the price of anarchy is $\Theta(\sqrt{\alpha})$, and
for $\alpha \geq n(n-1)$ it is constant. For the MAX-game, we show that the
price of anarchy is either $\Theta(1 + n/\sqrt{\alpha})$, for $\alpha\geq 1$,
or else $1$. Given a graph with girth of at least $4\alpha$, equilibria always
exist. Concerning the dynamics, both the SUM-game and the MAX-game are not
potential games. For the SUM-game, we even show that it is not weakly acyclic.
",alexander skopalik,,2014.0,,arXiv,Abshoff2014,True,,arXiv,Not available,Multilevel Network Games,f191d00dcd151d7e14cb47baece4cc6f,http://arxiv.org/abs/1409.5383v1
16242," Sponsored search in E-commerce platforms such as Amazon, Taobao and Tmall
provides sellers an effective way to reach potential buyers with most relevant
purpose. In this paper, we study the auction mechanism optimization problem in
sponsored search on Alibaba's mobile E-commerce platform. Besides generating
revenue, we are supposed to maintain an efficient marketplace with plenty of
quality users, guarantee a reasonable return on investment (ROI) for
advertisers, and meanwhile, facilitate a pleasant shopping experience for the
users. These requirements essentially pose a constrained optimization problem.
Directly optimizing over auction parameters yields a discontinuous, non-convex
problem that denies effective solutions. One of our major contributions is a
practical convex optimization formulation of the original problem. We devise a
novel re-parametrization of auction mechanism with discrete sets of
representative instances. To construct the optimization problem, we build an
auction simulation system which estimates the resulting business indicators of
the selected parameters by replaying the auctions recorded from real online
requests. We summarize the experiments on real search traffic to analyze the
effects of fidelity of auction simulation, the efficacy under various
constraint targets and the influence of regularization. The experiment results
show that with proper entropy regularization, we are able to maximize revenue
while constraining other business indicators within given ranges.
",zhihui xie,,2018.0,,arXiv,Bai2018,True,,arXiv,Not available,"Practical Constrained Optimization of Auction Mechanisms in E-Commerce
Sponsored Search Advertising",84850ad79c8e9f9921fe4c6a27dd60b1,http://arxiv.org/abs/1807.11790v1
16243," Sponsored search in E-commerce platforms such as Amazon, Taobao and Tmall
provides sellers an effective way to reach potential buyers with most relevant
purpose. In this paper, we study the auction mechanism optimization problem in
sponsored search on Alibaba's mobile E-commerce platform. Besides generating
revenue, we are supposed to maintain an efficient marketplace with plenty of
quality users, guarantee a reasonable return on investment (ROI) for
advertisers, and meanwhile, facilitate a pleasant shopping experience for the
users. These requirements essentially pose a constrained optimization problem.
Directly optimizing over auction parameters yields a discontinuous, non-convex
problem that denies effective solutions. One of our major contributions is a
practical convex optimization formulation of the original problem. We devise a
novel re-parametrization of auction mechanism with discrete sets of
representative instances. To construct the optimization problem, we build an
auction simulation system which estimates the resulting business indicators of
the selected parameters by replaying the auctions recorded from real online
requests. We summarize the experiments on real search traffic to analyze the
effects of fidelity of auction simulation, the efficacy under various
constraint targets and the influence of regularization. The experiment results
show that with proper entropy regularization, we are able to maximize revenue
while constraining other business indicators within given ranges.
",liang wang,,2018.0,,arXiv,Bai2018,True,,arXiv,Not available,"Practical Constrained Optimization of Auction Mechanisms in E-Commerce
Sponsored Search Advertising",84850ad79c8e9f9921fe4c6a27dd60b1,http://arxiv.org/abs/1807.11790v1
16244," Budgets play a significant role in real-world sequential auction markets such
as those implemented by Internet companies. To maximize the value provided to
auction participants, spending is smoothed across auctions so budgets are used
for the best opportunities. This paper considers a smoothing procedure that
relies on {\em pacing multipliers}: for each bidder, the platform applies a
factor between 0 and 1 that uniformly scales the bids across all auctions.
Looking at this process as a game between all bidders, we introduce the notion
of {\em pacing equilibrium}, and prove that such equilibria are always guaranteed to
exist. We demonstrate through examples that a market can have multiple pacing
equilibria with large variations in several natural objectives. We go on to
show that computing either a social-welfare-maximizing or a revenue-maximizing
pacing equilibrium is NP-hard. Finally, we develop a mixed-integer program
whose feasible solutions coincide with pacing equilibria, and show that it can
be used to find equilibria that optimize several interesting objectives. Using
our mixed-integer program, we perform numerical simulations on synthetic
auction markets that provide evidence that, in spite of the possibility of
equilibrium multiplicity, it occurs very rarely across several families of
random instances. We also show that solutions from the mixed-integer program
can be used to improve the outcomes achieved in a more realistic adaptive
pacing setting.
",vincent conitzer,,2017.0,,arXiv,Conitzer2017,True,,arXiv,Not available,Multiplicative Pacing Equilibria in Auction Markets,858b1cf71913f47e457ddd70c86baba1,http://arxiv.org/abs/1706.07151v1
16245," Budgets play a significant role in real-world sequential auction markets such
as those implemented by Internet companies. To maximize the value provided to
auction participants, spending is smoothed across auctions so budgets are used
for the best opportunities. This paper considers a smoothing procedure that
relies on {\em pacing multipliers}: for each bidder, the platform applies a
factor between 0 and 1 that uniformly scales the bids across all auctions.
Looking at this process as a game between all bidders, we introduce the notion
of {\em pacing equilibrium}, and prove that such equilibria are always guaranteed to
exist. We demonstrate through examples that a market can have multiple pacing
equilibria with large variations in several natural objectives. We go on to
show that computing either a social-welfare-maximizing or a revenue-maximizing
pacing equilibrium is NP-hard. Finally, we develop a mixed-integer program
whose feasible solutions coincide with pacing equilibria, and show that it can
be used to find equilibria that optimize several interesting objectives. Using
our mixed-integer program, we perform numerical simulations on synthetic
auction markets that provide evidence that, in spite of the possibility of
equilibrium multiplicity, it occurs very rarely across several families of
random instances. We also show that solutions from the mixed-integer program
can be used to improve the outcomes achieved in a more realistic adaptive
pacing setting.
",christian kroer,,2017.0,,arXiv,Conitzer2017,True,,arXiv,Not available,Multiplicative Pacing Equilibria in Auction Markets,858b1cf71913f47e457ddd70c86baba1,http://arxiv.org/abs/1706.07151v1
16246," Budgets play a significant role in real-world sequential auction markets such
as those implemented by Internet companies. To maximize the value provided to
auction participants, spending is smoothed across auctions so budgets are used
for the best opportunities. This paper considers a smoothing procedure that
relies on {\em pacing multipliers}: for each bidder, the platform applies a
factor between 0 and 1 that uniformly scales the bids across all auctions.
Looking at this process as a game between all bidders, we introduce the notion
of {\em pacing equilibrium}, and prove that such equilibria are always guaranteed to
exist. We demonstrate through examples that a market can have multiple pacing
equilibria with large variations in several natural objectives. We go on to
show that computing either a social-welfare-maximizing or a revenue-maximizing
pacing equilibrium is NP-hard. Finally, we develop a mixed-integer program
whose feasible solutions coincide with pacing equilibria, and show that it can
be used to find equilibria that optimize several interesting objectives. Using
our mixed-integer program, we perform numerical simulations on synthetic
auction markets that provide evidence that, in spite of the possibility of
equilibrium multiplicity, it occurs very rarely across several families of
random instances. We also show that solutions from the mixed-integer program
can be used to improve the outcomes achieved in a more realistic adaptive
pacing setting.
",eric sodomka,,2017.0,,arXiv,Conitzer2017,True,,arXiv,Not available,Multiplicative Pacing Equilibria in Auction Markets,858b1cf71913f47e457ddd70c86baba1,http://arxiv.org/abs/1706.07151v1
16247," Budgets play a significant role in real-world sequential auction markets such
as those implemented by Internet companies. To maximize the value provided to
auction participants, spending is smoothed across auctions so budgets are used
for the best opportunities. This paper considers a smoothing procedure that
relies on {\em pacing multipliers}: for each bidder, the platform applies a
factor between 0 and 1 that uniformly scales the bids across all auctions.
Looking at this process as a game between all bidders, we introduce the notion
of {\em pacing equilibrium}, and prove that such equilibria are always guaranteed to
exist. We demonstrate through examples that a market can have multiple pacing
equilibria with large variations in several natural objectives. We go on to
show that computing either a social-welfare-maximizing or a revenue-maximizing
pacing equilibrium is NP-hard. Finally, we develop a mixed-integer program
whose feasible solutions coincide with pacing equilibria, and show that it can
be used to find equilibria that optimize several interesting objectives. Using
our mixed-integer program, we perform numerical simulations on synthetic
auction markets that provide evidence that, in spite of the possibility of
equilibrium multiplicity, it occurs very rarely across several families of
random instances. We also show that solutions from the mixed-integer program
can be used to improve the outcomes achieved in a more realistic adaptive
pacing setting.
",nicolas stier-moses,,2017.0,,arXiv,Conitzer2017,True,,arXiv,Not available,Multiplicative Pacing Equilibria in Auction Markets,858b1cf71913f47e457ddd70c86baba1,http://arxiv.org/abs/1706.07151v1
16248," As online ad offerings become increasingly complex, with multiple size
configurations and layouts available to advertisers, the sale of web
advertising space increasingly resembles a combinatorial auction with
complementarities. Standard ad auction formats do not immediately extend to
these settings, and truthful combinatorial auctions, such as the
Vickrey-Clarke-Groves auction, can yield unacceptably low revenue. Core
selecting auctions, which apply to combinatorial markets, boost revenue by
setting prices so that no group of agents, including the auctioneer, can
jointly improve their utilities by switching to a different allocation and
payments. Among outcomes in the core, bidder-optimal core points have been the
most widely studied due to their incentive properties, such as being
implementable at equilibrium. Prior work in economics has studied heuristics
for computing approximate bidder-optimal core points given oracle access to the
welfare optimization problem, but these solutions either lack performance
guarantees or are based on prohibitively slow convex programs. Our main result
is a combinatorial algorithm that finds an approximate bidder-optimal core
point with an almost-linear number of calls to the welfare maximization oracle.
Our algorithm is faster than previously-proposed heuristics, has theoretical
guarantees, and reveals some useful structural properties of the core polytope.
We conclude that core pricing is implementable even for very time sensitive
practical use cases such as realtime auctions for online advertising and can
yield more revenue. We justify this claim experimentally using the Microsoft
Bing Ad Auction platform, which allows advertisers to have decorations with a
non-uniform number of lines of text. We find that core pricing generates almost
100% more revenue than VCG, and almost 20% more revenue than the standard
Generalized Second Price (GSP) auction.
",jason hartline,,2016.0,,arXiv,Hartline2016,True,,arXiv,Not available,Fast Core Pricing for Rich Advertising Auctions,6959e22d65779e94daaf4dad29263fc9,http://arxiv.org/abs/1610.03564v3
16249," As online ad offerings become increasingly complex, with multiple size
configurations and layouts available to advertisers, the sale of web
advertising space increasingly resembles a combinatorial auction with
complementarities. Standard ad auction formats do not immediately extend to
these settings, and truthful combinatorial auctions, such as the
Vickrey-Clarke-Groves auction, can yield unacceptably low revenue. Core
selecting auctions, which apply to combinatorial markets, boost revenue by
setting prices so that no group of agents, including the auctioneer, can
jointly improve their utilities by switching to a different allocation and
payments. Among outcomes in the core, bidder-optimal core points have been the
most widely studied due to their incentive properties, such as being
implementable at equilibrium. Prior work in economics has studied heuristics
for computing approximate bidder-optimal core points given oracle access to the
welfare optimization problem, but these solutions either lack performance
guarantees or are based on prohibitively slow convex programs. Our main result
is a combinatorial algorithm that finds an approximate bidder-optimal core
point with an almost-linear number of calls to the welfare maximization oracle.
Our algorithm is faster than previously-proposed heuristics, has theoretical
guarantees, and reveals some useful structural properties of the core polytope.
We conclude that core pricing is implementable even for very time sensitive
practical use cases such as realtime auctions for online advertising and can
yield more revenue. We justify this claim experimentally using the Microsoft
Bing Ad Auction platform, which allows advertisers to have decorations with a
non-uniform number of lines of text. We find that core pricing generates almost
100% more revenue than VCG, and almost 20% more revenue than the standard
Generalized Second Price (GSP) auction.
",nicole immorlica,,2016.0,,arXiv,Hartline2016,True,,arXiv,Not available,Fast Core Pricing for Rich Advertising Auctions,6959e22d65779e94daaf4dad29263fc9,http://arxiv.org/abs/1610.03564v3
16250," As online ad offerings become increasingly complex, with multiple size
configurations and layouts available to advertisers, the sale of web
advertising space increasingly resembles a combinatorial auction with
complementarities. Standard ad auction formats do not immediately extend to
these settings, and truthful combinatorial auctions, such as the
Vickrey-Clarke-Groves auction, can yield unacceptably low revenue. Core
selecting auctions, which apply to combinatorial markets, boost revenue by
setting prices so that no group of agents, including the auctioneer, can
jointly improve their utilities by switching to a different allocation and
payments. Among outcomes in the core, bidder-optimal core points have been the
most widely studied due to their incentive properties, such as being
implementable at equilibrium. Prior work in economics has studied heuristics
for computing approximate bidder-optimal core points given oracle access to the
welfare optimization problem, but these solutions either lack performance
guarantees or are based on prohibitively slow convex programs. Our main result
is a combinatorial algorithm that finds an approximate bidder-optimal core
point with almost linear number of calls to the welfare maximization oracle.
Our algorithm is faster than previously-proposed heuristics, has theoretical
guarantees, and reveals some useful structural properties of the core polytope.
We conclude that core pricing is implementable even for very time sensitive
practical use cases such as realtime auctions for online advertising and can
yield more revenue. We justify this claim experimentally using the Microsoft
Bing Ad Auction platform, which allows advertisers to have decorations with a
non-uniform number of lines of text. We find that core pricing generates almost
100% more revenue than VCG, and almost 20% more revenue than the standard
Generalized Second Price (GSP) auction.
",mohammad khani,,2016.0,,arXiv,Hartline2016,True,,arXiv,Not available,Fast Core Pricing for Rich Advertising Auctions,6959e22d65779e94daaf4dad29263fc9,http://arxiv.org/abs/1610.03564v3
16251," As online ad offerings become increasingly complex, with multiple size
configurations and layouts available to advertisers, the sale of web
advertising space increasingly resembles a combinatorial auction with
complementarities. Standard ad auction formats do not immediately extend to
these settings, and truthful combinatorial auctions, such as the
Vickrey-Clarke-Groves auction, can yield unacceptably low revenue. Core
selecting auctions, which apply to combinatorial markets, boost revenue by
setting prices so that no group of agents, including the auctioneer, can
jointly improve their utilities by switching to a different allocation and
payments. Among outcomes in the core, bidder-optimal core points have been the
most widely studied due to their incentive properties, such as being
implementable at equilibrium. Prior work in economics has studied heuristics
for computing approximate bidder-optimal core points given oracle access to the
welfare optimization problem, but these solutions either lack performance
guarantees or are based on prohibitively slow convex programs. Our main result
is a combinatorial algorithm that finds an approximate bidder-optimal core
point with an almost-linear number of calls to the welfare maximization oracle.
Our algorithm is faster than previously-proposed heuristics, has theoretical
guarantees, and reveals some useful structural properties of the core polytope.
We conclude that core pricing is implementable even for very time sensitive
practical use cases such as realtime auctions for online advertising and can
yield more revenue. We justify this claim experimentally using the Microsoft
Bing Ad Auction platform, which allows advertisers to have decorations with a
non-uniform number of lines of text. We find that core pricing generates almost
100% more revenue than VCG, and almost 20% more revenue than the standard
Generalized Second Price (GSP) auction.
",brendan lucier,,2016.0,,arXiv,Hartline2016,True,,arXiv,Not available,Fast Core Pricing for Rich Advertising Auctions,6959e22d65779e94daaf4dad29263fc9,http://arxiv.org/abs/1610.03564v3
16252," In this paper we develop a novel approach to the convergence of Best-Response
Dynamics for the family of interference games. Interference games represent the
fundamental resource allocation conflict between users of the radio spectrum.
In contrast to congestion games, interference games are generally not potential
games. Therefore, proving the convergence of the best-response dynamics to a
Nash equilibrium in these games requires new techniques. We suggest a model for
random interference games, based on the long-term fading governed by the
players' geometry. Our goal is to prove convergence of the approximate
best-response dynamics with high probability with respect to the randomized
game. We embrace the asynchronous model in which the acting player is chosen at
each stage at random. In our approximate best-response dynamics, the action of
a deviating player is chosen at random among all the approximately best ones.
We show that with high probability, with respect to the players' geometry and
asymptotically with the number of players, each action increases the expected
social-welfare (sum of achievable rates). Hence, the induced sum-rate process
is a submartingale. Based on the Martingale Convergence Theorem, we prove
convergence of the strategy profile to an approximate Nash equilibrium with
good performance for asymptotically almost all interference games. We use the
Markovity of the induced sum-rate process to provide probabilistic bounds on
the convergence time. Finally, we demonstrate our results in simulated
examples.
",ilai bistritz,,2017.0,,arXiv,Bistritz2017,True,,arXiv,Not available,Approximate Best-Response Dynamics in Random Interference Games,09e57462cadbe0c37d15fde0fe9e69d7,http://arxiv.org/abs/1706.05081v1
16253," As online ad offerings become increasingly complex, with multiple size
configurations and layouts available to advertisers, the sale of web
advertising space increasingly resembles a combinatorial auction with
complementarities. Standard ad auction formats do not immediately extend to
these settings, and truthful combinatorial auctions, such as the
Vickrey-Clarke-Groves auction, can yield unacceptably low revenue. Core
selecting auctions, which apply to combinatorial markets, boost revenue by
setting prices so that no group of agents, including the auctioneer, can
jointly improve their utilities by switching to a different allocation and
payments. Among outcomes in the core, bidder-optimal core points have been the
most widely studied due to their incentive properties, such as being
implementable at equilibrium. Prior work in economics has studied heuristics
for computing approximate bidder-optimal core points given oracle access to the
welfare optimization problem, but these solutions either lack performance
guarantees or are based on prohibitively slow convex programs. Our main result
is a combinatorial algorithm that finds an approximate bidder-optimal core
point with an almost-linear number of calls to the welfare maximization oracle.
Our algorithm is faster than previously-proposed heuristics, has theoretical
guarantees, and reveals some useful structural properties of the core polytope.
We conclude that core pricing is implementable even for very time sensitive
practical use cases such as realtime auctions for online advertising and can
yield more revenue. We justify this claim experimentally using the Microsoft
Bing Ad Auction platform, which allows advertisers to have decorations with a
non-uniform number of lines of text. We find that core pricing generates almost
100% more revenue than VCG, and almost 20% more revenue than the standard
Generalized Second Price (GSP) auction.
",rad niazadeh,,2016.0,,arXiv,Hartline2016,True,,arXiv,Not available,Fast Core Pricing for Rich Advertising Auctions,6959e22d65779e94daaf4dad29263fc9,http://arxiv.org/abs/1610.03564v3
16254," Auctions for perishable goods such as internet ad inventory need to make
real-time allocation and pricing decisions as the supply of the good arrives in
an online manner, without knowing the entire supply in advance. These
allocation and pricing decisions get complicated when buyers have some global
constraints. In this work, we consider a multi-unit model where buyers have
global {\em budget} constraints, and the supply arrives in an online manner.
Our main contribution is to show that for this setting there is an
individually-rational, incentive-compatible and Pareto-optimal auction that
allocates these units and calculates prices on the fly, without knowledge of
the total supply. We do so by showing that the Adaptive Clinching Auction
satisfies a {\em supply-monotonicity} property.
We also analyze and discuss, using examples, how the insights gained by the
allocation and payment rule can be applied to design better ad allocation
heuristics in practice. Finally, while our main technical result concerns
multi-unit supply, we propose a formal model of online supply that captures
scenarios beyond multi-unit supply and has applications to sponsored search. We
conjecture that our results for multi-unit auctions can be extended to these
more general models.
",gagan goel,,2012.0,,arXiv,Goel2012,True,,arXiv,Not available,Clinching Auctions with Online Supply,b7bd946ff58044ae9f7358b2dfac3866,http://arxiv.org/abs/1210.1456v1
16255," Auctions for perishable goods such as internet ad inventory need to make
real-time allocation and pricing decisions as the supply of the good arrives in
an online manner, without knowing the entire supply in advance. These
allocation and pricing decisions get complicated when buyers have some global
constraints. In this work, we consider a multi-unit model where buyers have
global {\em budget} constraints, and the supply arrives in an online manner.
Our main contribution is to show that for this setting there is an
individually-rational, incentive-compatible and Pareto-optimal auction that
allocates these units and calculates prices on the fly, without knowledge of
the total supply. We do so by showing that the Adaptive Clinching Auction
satisfies a {\em supply-monotonicity} property.
We also analyze and discuss, using examples, how the insights gained by the
allocation and payment rule can be applied to design better ad allocation
heuristics in practice. Finally, while our main technical result concerns
multi-unit supply, we propose a formal model of online supply that captures
scenarios beyond multi-unit supply and has applications to sponsored search. We
conjecture that our results for multi-unit auctions can be extended to these
more general models.
",vahab mirrokni,,2012.0,,arXiv,Goel2012,True,,arXiv,Not available,Clinching Auctions with Online Supply,b7bd946ff58044ae9f7358b2dfac3866,http://arxiv.org/abs/1210.1456v1
16256," Auctions for perishable goods such as internet ad inventory need to make
real-time allocation and pricing decisions as the supply of the good arrives in
an online manner, without knowing the entire supply in advance. These
allocation and pricing decisions get complicated when buyers have some global
constraints. In this work, we consider a multi-unit model where buyers have
global {\em budget} constraints, and the supply arrives in an online manner.
Our main contribution is to show that for this setting there is an
individually-rational, incentive-compatible and Pareto-optimal auction that
allocates these units and calculates prices on the fly, without knowledge of
the total supply. We do so by showing that the Adaptive Clinching Auction
satisfies a {\em supply-monotonicity} property.
We also analyze and discuss, using examples, how the insights gained by the
allocation and payment rule can be applied to design better ad allocation
heuristics in practice. Finally, while our main technical result concerns
multi-unit supply, we propose a formal model of online supply that captures
scenarios beyond multi-unit supply and has applications to sponsored search. We
conjecture that our results for multi-unit auctions can be extended to these
more general models.
",renato leme,,2012.0,,arXiv,Goel2012,True,,arXiv,Not available,Clinching Auctions with Online Supply,b7bd946ff58044ae9f7358b2dfac3866,http://arxiv.org/abs/1210.1456v1
16257," In the context of auctions for digital goods, an interesting random sampling
auction has been proposed by Goldberg, Hartline, and Wright [2001]. This
auction has been analyzed by Feige, Flaxman, Hartline, and Kleinberg [2005],
who have shown that it is 15-competitive in the worst case (which is
substantially better than the previously proven constant bounds but still far
from the conjectured competitive ratio of 4). In this paper, we prove that the
aforementioned random sampling auction is indeed 4-competitive for a large
class of instances where the number of bids above (or equal to) the optimal
sale price is at least 6. We also show that it is 4.68-competitive for the
small class of remaining instances, thus leaving a negligible gap between the
lower and upper bound. We employ a mix of probabilistic techniques and dynamic
programming to compute these bounds.
",saeed alaei,,2013.0,,arXiv,Alaei2013,True,,arXiv,Not available,On Random Sampling Auctions for Digital Goods,53a187d0046ee86735ccf8b3c0ea050b,http://arxiv.org/abs/1303.4438v1
16258," In the context of auctions for digital goods, an interesting random sampling
auction has been proposed by Goldberg, Hartline, and Wright [2001]. This
auction has been analyzed by Feige, Flaxman, Hartline, and Kleinberg [2005],
who have shown that it is 15-competitive in the worst case (which is
substantially better than the previously proven constant bounds but still far
from the conjectured competitive ratio of 4). In this paper, we prove that the
aforementioned random sampling auction is indeed 4-competitive for a large
class of instances where the number of bids above (or equal to) the optimal
sale price is at least 6. We also show that it is 4.68-competitive for the
small class of remaining instances, thus leaving a negligible gap between the
lower and upper bound. We employ a mix of probabilistic techniques and dynamic
programming to compute these bounds.
",azarakhsh malekian,,2013.0,,arXiv,Alaei2013,True,,arXiv,Not available,On Random Sampling Auctions for Digital Goods,53a187d0046ee86735ccf8b3c0ea050b,http://arxiv.org/abs/1303.4438v1
16259," In the context of auctions for digital goods, an interesting random sampling
auction has been proposed by Goldberg, Hartline, and Wright [2001]. This
auction has been analyzed by Feige, Flaxman, Hartline, and Kleinberg [2005],
who have shown that it is 15-competitive in the worst case (which is
substantially better than the previously proven constant bounds but still far
from the conjectured competitive ratio of 4). In this paper, we prove that the
aforementioned random sampling auction is indeed 4-competitive for a large
class of instances where the number of bids above (or equal to) the optimal
sale price is at least 6. We also show that it is 4.68-competitive for the
small class of remaining instances, thus leaving a negligible gap between the
lower and upper bound. We employ a mix of probabilistic techniques and dynamic
programming to compute these bounds.
",aravind srinivasan,,2013.0,,arXiv,Alaei2013,True,,arXiv,Not available,On Random Sampling Auctions for Digital Goods,53a187d0046ee86735ccf8b3c0ea050b,http://arxiv.org/abs/1303.4438v1
16260," Cr\'emer and McLean [1985] showed that, when buyers' valuations are drawn
from a correlated distribution, an auction with full knowledge of the
distribution can extract the full social surplus. We study whether this
phenomenon persists when the auctioneer has only incomplete knowledge of the
distribution, represented by a finite family of candidate distributions, and
has sample access to the real distribution. We show that the naive approach
which uses samples to distinguish candidate distributions may fail, whereas an
extended version of the Cr\'emer-McLean auction simultaneously extracts full
social surplus under each candidate distribution. With an algebraic argument,
we give a tight bound on the number of samples needed by this auction, which is
the difference between the number of candidate distributions and the dimension
of the linear space they span.
",hu fu,,2014.0,,arXiv,Fu2014,True,,arXiv,Not available,Optimal Auctions for Correlated Buyers with Sampling,e523922d379e5166479fdbfdb5e1354f,http://arxiv.org/abs/1406.1571v1
16261," Cr\'emer and McLean [1985] showed that, when buyers' valuations are drawn
from a correlated distribution, an auction with full knowledge of the
distribution can extract the full social surplus. We study whether this
phenomenon persists when the auctioneer has only incomplete knowledge of the
distribution, represented by a finite family of candidate distributions, and
has sample access to the real distribution. We show that the naive approach
which uses samples to distinguish candidate distributions may fail, whereas an
extended version of the Cr\'emer-McLean auction simultaneously extracts full
social surplus under each candidate distribution. With an algebraic argument,
we give a tight bound on the number of samples needed by this auction, which is
the difference between the number of candidate distributions and the dimension
of the linear space they span.
",nima haghpanah,,2014.0,,arXiv,Fu2014,True,,arXiv,Not available,Optimal Auctions for Correlated Buyers with Sampling,e523922d379e5166479fdbfdb5e1354f,http://arxiv.org/abs/1406.1571v1
16262," Cr\'emer and McLean [1985] showed that, when buyers' valuations are drawn
from a correlated distribution, an auction with full knowledge of the
distribution can extract the full social surplus. We study whether this
phenomenon persists when the auctioneer has only incomplete knowledge of the
distribution, represented by a finite family of candidate distributions, and
has sample access to the real distribution. We show that the naive approach
which uses samples to distinguish candidate distributions may fail, whereas an
extended version of the Cr\'emer-McLean auction simultaneously extracts full
social surplus under each candidate distribution. With an algebraic argument,
we give a tight bound on the number of samples needed by this auction, which is
the difference between the number of candidate distributions and the dimension
of the linear space they span.
",jason hartline,,2014.0,,arXiv,Fu2014,True,,arXiv,Not available,Optimal Auctions for Correlated Buyers with Sampling,e523922d379e5166479fdbfdb5e1354f,http://arxiv.org/abs/1406.1571v1
16263," In this paper we develop a novel approach to the convergence of Best-Response
Dynamics for the family of interference games. Interference games represent the
fundamental resource allocation conflict between users of the radio spectrum.
In contrast to congestion games, interference games are generally not potential
games. Therefore, proving the convergence of the best-response dynamics to a
Nash equilibrium in these games requires new techniques. We suggest a model for
random interference games, based on the long term fading governed by the
players' geometry. Our goal is to prove convergence of the approximate
best-response dynamics with high probability with respect to the randomized
game. We embrace the asynchronous model in which the acting player is chosen at
each stage at random. In our approximate best-response dynamics, the action of
a deviating player is chosen at random among all the approximately best ones.
We show that with high probability, with respect to the players' geometry and
asymptotically with the number of players, each action increases the expected
social-welfare (sum of achievable rates). Hence, the induced sum-rate process
is a submartingale. Based on the Martingale Convergence Theorem, we prove
convergence of the strategy profile to an approximate Nash equilibrium with
good performance for asymptotically almost all interference games. We use the
Markovity of the induced sum-rate process to provide probabilistic bounds on
the convergence time. Finally, we demonstrate our results in simulated
examples.
",amir leshem,,2017.0,,arXiv,Bistritz2017,True,,arXiv,Not available,Approximate Best-Response Dynamics in Random Interference Games,09e57462cadbe0c37d15fde0fe9e69d7,http://arxiv.org/abs/1706.05081v1
16264," Cr\'emer and McLean [1985] showed that, when buyers' valuations are drawn
from a correlated distribution, an auction with full knowledge of the
distribution can extract the full social surplus. We study whether this
phenomenon persists when the auctioneer has only incomplete knowledge of the
distribution, represented by a finite family of candidate distributions, and
has sample access to the real distribution. We show that the naive approach
which uses samples to distinguish candidate distributions may fail, whereas an
extended version of the Cr\'emer-McLean auction simultaneously extracts full
social surplus under each candidate distribution. With an algebraic argument,
we give a tight bound on the number of samples needed by this auction, which is
the difference between the number of candidate distributions and the dimension
of the linear space they span.
",robert kleinberg,,2014.0,,arXiv,Fu2014,True,,arXiv,Not available,Optimal Auctions for Correlated Buyers with Sampling,e523922d379e5166479fdbfdb5e1354f,http://arxiv.org/abs/1406.1571v1
16265," We study efficiency loss in Bayesian revenue optimal auctions. We quantify
this as the worst case ratio of loss in the realized social welfare to the
social welfare that can be realized by an efficient auction. Our focus is on
auctions with single-parameter buyers and where buyers' valuation sets are
finite. For binary valued single-parameter buyers with independent (not
necessarily identically distributed) private valuations, we show that the worst
case efficiency loss ratio (ELR) is no worse than it is with only one buyer;
moreover, it is at most 1/2. Moving beyond the case of binary valuations but
restricting to single item auctions, where buyers' private valuations are
independent and identically distributed, we obtain bounds on the worst case ELR
as a function of number of buyers, cardinality of buyers' valuation set, and
ratio of maximum to minimum possible values that buyers can have for the item.
",vineet abhishek,,2010.0,,arXiv,Abhishek2010,True,,arXiv,Not available,Efficiency Loss in Revenue Optimal Auctions,6bc0926da61cb7246c9964d32aeeb892,http://arxiv.org/abs/1005.1121v2
16266," We study efficiency loss in Bayesian revenue optimal auctions. We quantify
this as the worst case ratio of loss in the realized social welfare to the
social welfare that can be realized by an efficient auction. Our focus is on
auctions with single-parameter buyers and where buyers' valuation sets are
finite. For binary valued single-parameter buyers with independent (not
necessarily identically distributed) private valuations, we show that the worst
case efficiency loss ratio (ELR) is no worse than it is with only one buyer;
moreover, it is at most 1/2. Moving beyond the case of binary valuations but
restricting to single item auctions, where buyers' private valuations are
independent and identically distributed, we obtain bounds on the worst case ELR
as a function of number of buyers, cardinality of buyers' valuation set, and
ratio of maximum to minimum possible values that buyers can have for the item.
",bruce hajek,,2010.0,,arXiv,Abhishek2010,True,,arXiv,Not available,Efficiency Loss in Revenue Optimal Auctions,6bc0926da61cb7246c9964d32aeeb892,http://arxiv.org/abs/1005.1121v2
16267," Designing revenue optimal auctions for selling an item to $n$ symmetric
bidders is a fundamental problem in mechanism design. Myerson (1981) shows that
the second price auction with an appropriate reserve price is optimal when
bidders' values are drawn i.i.d. from a known regular distribution. A
cornerstone in the prior-independent revenue maximization literature is a
result by Bulow and Klemperer (1996) showing that the second price auction
without a reserve achieves $(n-1)/n$ of the optimal revenue in the worst case.
We construct a randomized mechanism that strictly outperforms the second
price auction in this setting. Our mechanism inflates the second highest bid
with a probability that varies with $n$. For two bidders we improve the
performance guarantee from $0.5$ to $0.512$ of the optimal revenue. We also
resolve a question in the design of revenue optimal mechanisms that have access
to a single sample from an unknown distribution. We show that a randomized
mechanism strictly outperforms all deterministic mechanisms in terms of worst
case guarantee.
",hu fu,,2015.0,,arXiv,Fu2015,True,,arXiv,Not available,Randomization beats Second Price as a Prior-Independent Auction,77b9e8505291c4d49bb3b485acff8d69,http://arxiv.org/abs/1507.08042v1
16268," Designing revenue optimal auctions for selling an item to $n$ symmetric
bidders is a fundamental problem in mechanism design. Myerson (1981) shows that
the second price auction with an appropriate reserve price is optimal when
bidders' values are drawn i.i.d. from a known regular distribution. A
cornerstone in the prior-independent revenue maximization literature is a
result by Bulow and Klemperer (1996) showing that the second price auction
without a reserve achieves $(n-1)/n$ of the optimal revenue in the worst case.
We construct a randomized mechanism that strictly outperforms the second
price auction in this setting. Our mechanism inflates the second highest bid
with a probability that varies with $n$. For two bidders we improve the
performance guarantee from $0.5$ to $0.512$ of the optimal revenue. We also
resolve a question in the design of revenue optimal mechanisms that have access
to a single sample from an unknown distribution. We show that a randomized
mechanism strictly outperforms all deterministic mechanisms in terms of worst
case guarantee.
",nicole immolica,,2015.0,,arXiv,Fu2015,True,,arXiv,Not available,Randomization beats Second Price as a Prior-Independent Auction,77b9e8505291c4d49bb3b485acff8d69,http://arxiv.org/abs/1507.08042v1
16269," Designing revenue optimal auctions for selling an item to $n$ symmetric
bidders is a fundamental problem in mechanism design. Myerson (1981) shows that
the second price auction with an appropriate reserve price is optimal when
bidders' values are drawn i.i.d. from a known regular distribution. A
cornerstone in the prior-independent revenue maximization literature is a
result by Bulow and Klemperer (1996) showing that the second price auction
without a reserve achieves $(n-1)/n$ of the optimal revenue in the worst case.
We construct a randomized mechanism that strictly outperforms the second
price auction in this setting. Our mechanism inflates the second highest bid
with a probability that varies with $n$. For two bidders we improve the
performance guarantee from $0.5$ to $0.512$ of the optimal revenue. We also
resolve a question in the design of revenue optimal mechanisms that have access
to a single sample from an unknown distribution. We show that a randomized
mechanism strictly outperforms all deterministic mechanisms in terms of worst
case guarantee.
",brendan lucier,,2015.0,,arXiv,Fu2015,True,,arXiv,Not available,Randomization beats Second Price as a Prior-Independent Auction,77b9e8505291c4d49bb3b485acff8d69,http://arxiv.org/abs/1507.08042v1
16270," Designing revenue optimal auctions for selling an item to $n$ symmetric
bidders is a fundamental problem in mechanism design. Myerson (1981) shows that
the second price auction with an appropriate reserve price is optimal when
bidders' values are drawn i.i.d. from a known regular distribution. A
cornerstone in the prior-independent revenue maximization literature is a
result by Bulow and Klemperer (1996) showing that the second price auction
without a reserve achieves $(n-1)/n$ of the optimal revenue in the worst case.
We construct a randomized mechanism that strictly outperforms the second
price auction in this setting. Our mechanism inflates the second highest bid
with a probability that varies with $n$. For two bidders we improve the
performance guarantee from $0.5$ to $0.512$ of the optimal revenue. We also
resolve a question in the design of revenue optimal mechanisms that have access
to a single sample from an unknown distribution. We show that a randomized
mechanism strictly outperforms all deterministic mechanisms in terms of worst
case guarantee.
",philipp strack,,2015.0,,arXiv,Fu2015,True,,arXiv,Not available,Randomization beats Second Price as a Prior-Independent Auction,77b9e8505291c4d49bb3b485acff8d69,http://arxiv.org/abs/1507.08042v1
16271," Motivated by applications such as stock exchanges and spectrum auctions,
there is a growing interest in mechanisms for arranging trade in two-sided
markets. Existing mechanisms are either not truthful, or do not guarantee an
asymptotically-optimal gain-from-trade, or rely on a prior on the traders'
valuations, or operate in limited settings such as a single kind of good. We
extend the random market-halving technique used in earlier works to markets
with multiple kinds of goods, where traders have gross-substitute valuations.
We present MIDA: a Multi Item-kind Double-Auction mechanism. It is prior-free,
truthful, strongly-budget-balanced, and guarantees near-optimal gain from trade
when market sizes of all goods grow to $\infty$ at a similar rate.
",erel segal-halevi,,2016.0,,arXiv,Segal-Halevi2016,True,,arXiv,Not available,Double Auctions in Markets for Multiple Kinds of Goods,cbde9b521bff3531b8986833148f3483,http://arxiv.org/abs/1604.06210v5
16272," Motivated by applications such as stock exchanges and spectrum auctions,
there is a growing interest in mechanisms for arranging trade in two-sided
markets. Existing mechanisms are either not truthful, or do not guarantee an
asymptotically-optimal gain-from-trade, or rely on a prior on the traders'
valuations, or operate in limited settings such as a single kind of good. We
extend the random market-halving technique used in earlier works to markets
with multiple kinds of goods, where traders have gross-substitute valuations.
We present MIDA: a Multi Item-kind Double-Auction mechanism. It is prior-free,
truthful, strongly-budget-balanced, and guarantees near-optimal gain from trade
when market sizes of all goods grow to $\infty$ at a similar rate.
",avinatan hassidim,,2016.0,,arXiv,Segal-Halevi2016,True,,arXiv,Not available,Double Auctions in Markets for Multiple Kinds of Goods,cbde9b521bff3531b8986833148f3483,http://arxiv.org/abs/1604.06210v5
16273," Motivated by applications such as stock exchanges and spectrum auctions,
there is a growing interest in mechanisms for arranging trade in two-sided
markets. Existing mechanisms are either not truthful, or do not guarantee an
asymptotically-optimal gain-from-trade, or rely on a prior on the traders'
valuations, or operate in limited settings such as a single kind of good. We
extend the random market-halving technique used in earlier works to markets
with multiple kinds of goods, where traders have gross-substitute valuations.
We present MIDA: a Multi Item-kind Double-Auction mechanism. It is prior-free,
truthful, strongly-budget-balanced, and guarantees near-optimal gain from trade
when market sizes of all goods grow to $\infty$ at a similar rate.
",yonatan aumann,,2016.0,,arXiv,Segal-Halevi2016,True,,arXiv,Not available,Double Auctions in Markets for Multiple Kinds of Goods,cbde9b521bff3531b8986833148f3483,http://arxiv.org/abs/1604.06210v5
16274," In an earlier experiment, participants played a perfect information game
against a computer, which was programmed to deviate often from its backward
induction strategy right at the beginning of the game. Participants knew that
in each game, the computer was nevertheless optimizing against some belief
about the participant's future strategy. In the aggregate, it appeared that
participants applied forward induction. However, cardinal effects seemed to
play a role as well: a number of participants might have been trying to
maximize expected utility.
In order to find out how people really reason in such a game, we designed
centipede-like turn-taking games with new payoff structures in order to make
such cardinal effects less likely. We ran a new experiment with 50
participants, based on marble drop visualizations of these revised payoff
structures. After participants played 48 test games, we asked a number of
questions to gauge the participants' reasoning about their own and the
opponent's strategy at all decision nodes of a sample game. We also checked how
the verbalized strategies fit to the actual choices they made at all their
decision points in the 48 test games.
Even though in the aggregate, participants in the new experiment still tend
to slightly favor the forward induction choice at their first decision node,
their verbalized strategies most often depend on their own attitudes towards
risk and those they assign to the computer opponent, sometimes in addition to
considerations about cooperativeness and competitiveness.
",sujata ghosh,,2017.0,10.4204/EPTCS.251.19,"EPTCS 251, 2017, pp. 265-284",Ghosh2017,True,,arXiv,Not available,"What Drives People's Choices in Turn-Taking Games, if not Game-Theoretic
Rationality?",3832039bd19e3f2a603cb1dbea2d0585,http://arxiv.org/abs/1707.08749v1
16275," We study simple and approximately optimal auctions for agents with a
particular form of risk-averse preferences. We show that, for symmetric agents,
the optimal revenue (given a prior distribution over the agent preferences) can
be approximated by the first-price auction (which is prior independent), and,
for asymmetric agents, the optimal revenue can be approximated by an auction
with simple form. These results are based on two technical methods. The first
is for upper-bounding the revenue from a risk-averse agent. The second gives a
payment identity for mechanisms with pay-your-bid semantics.
",hu fu,,2013.0,,arXiv,Fu2013,True,,arXiv,Not available,Prior-independent Auctions for Risk-averse Agents,a852bf114db37337ccd732ffc9968d8b,http://arxiv.org/abs/1301.0401v1
16276," We study simple and approximately optimal auctions for agents with a
particular form of risk-averse preferences. We show that, for symmetric agents,
the optimal revenue (given a prior distribution over the agent preferences) can
be approximated by the first-price auction (which is prior independent), and,
for asymmetric agents, the optimal revenue can be approximated by an auction
with simple form. These results are based on two technical methods. The first
is for upper-bounding the revenue from a risk-averse agent. The second gives a
payment identity for mechanisms with pay-your-bid semantics.
",jason hartline,,2013.0,,arXiv,Fu2013,True,,arXiv,Not available,Prior-independent Auctions for Risk-averse Agents,a852bf114db37337ccd732ffc9968d8b,http://arxiv.org/abs/1301.0401v1
16277," We study simple and approximately optimal auctions for agents with a
particular form of risk-averse preferences. We show that, for symmetric agents,
the optimal revenue (given a prior distribution over the agent preferences) can
be approximated by the first-price auction (which is prior independent), and,
for asymmetric agents, the optimal revenue can be approximated by an auction
with simple form. These results are based on two technical methods. The first
is for upper-bounding the revenue from a risk-averse agent. The second gives a
payment identity for mechanisms with pay-your-bid semantics.
",darrell hoy,,2013.0,,arXiv,Fu2013,True,,arXiv,Not available,Prior-independent Auctions for Risk-averse Agents,a852bf114db37337ccd732ffc9968d8b,http://arxiv.org/abs/1301.0401v1
16283," We consider auctions in which the players have very limited knowledge about
their own valuations. Specifically, the only information that a Knightian
player $i$ has about the profile of true valuations, $\theta^*$, consists of a
set of distributions, from one of which $\theta_i^*$ has been drawn.
The VCG mechanism guarantees very high social welfare both in single- and
multi-good auctions, so long as Knightian players do not select strategies that
are dominated. With such Knightian players, however, we prove that the VCG
mechanism guarantees very poor social welfare in unrestricted combinatorial
auctions.
",alessandro chiesa,,2014.0,,arXiv,Chiesa2014,True,,arXiv,Not available,"Knightian Analysis of the VCG Mechanism in Unrestricted Combinatorial
Auctions",8547ec952285828b1f02882a9447a257,http://arxiv.org/abs/1403.6410v1
16284," We consider auctions in which the players have very limited knowledge about
their own valuations. Specifically, the only information that a Knightian
player $i$ has about the profile of true valuations, $\theta^*$, consists of a
set of distributions, from one of which $\theta_i^*$ has been drawn.
The VCG mechanism guarantees very high social welfare both in single- and
multi-good auctions, so long as Knightian players do not select strategies that
are dominated. With such Knightian players, however, we prove that the VCG
mechanism guarantees very poor social welfare in unrestricted combinatorial
auctions.
",silvio micali,,2014.0,,arXiv,Chiesa2014,True,,arXiv,Not available,"Knightian Analysis of the VCG Mechanism in Unrestricted Combinatorial
Auctions",8547ec952285828b1f02882a9447a257,http://arxiv.org/abs/1403.6410v1
16285," In an earlier experiment, participants played a perfect information game
against a computer, which was programmed to deviate often from its backward
induction strategy right at the beginning of the game. Participants knew that
in each game, the computer was nevertheless optimizing against some belief
about the participant's future strategy. In the aggregate, it appeared that
participants applied forward induction. However, cardinal effects seemed to
play a role as well: a number of participants might have been trying to
maximize expected utility.
In order to find out how people really reason in such a game, we designed
centipede-like turn-taking games with new payoff structures in order to make
such cardinal effects less likely. We ran a new experiment with 50
participants, based on marble drop visualizations of these revised payoff
structures. After participants played 48 test games, we asked a number of
questions to gauge the participants' reasoning about their own and the
opponent's strategy at all decision nodes of a sample game. We also checked how
the verbalized strategies fit to the actual choices they made at all their
decision points in the 48 test games.
Even though in the aggregate, participants in the new experiment still tend
to slightly favor the forward induction choice at their first decision node,
their verbalized strategies most often depend on their own attitudes towards
risk and those they assign to the computer opponent, sometimes in addition to
considerations about cooperativeness and competitiveness.
",aviad heifetz,,2017.0,10.4204/EPTCS.251.19,"EPTCS 251, 2017, pp. 265-284",Ghosh2017,True,,arXiv,Not available,"What Drives People's Choices in Turn-Taking Games, if not Game-Theoretic
Rationality?",3832039bd19e3f2a603cb1dbea2d0585,http://arxiv.org/abs/1707.08749v1
16286," We consider auctions in which the players have very limited knowledge about
their own valuations. Specifically, the only information that a Knightian
player $i$ has about the profile of true valuations, $\theta^*$, consists of a
set of distributions, from one of which $\theta_i^*$ has been drawn.
The VCG mechanism guarantees very high social welfare both in single- and
multi-good auctions, so long as Knightian players do not select strategies that
are dominated. With such Knightian players, however, we prove that the VCG
mechanism guarantees very poor social welfare in unrestricted combinatorial
auctions.
",zeyuan zhu,,2014.0,,arXiv,Chiesa2014,True,,arXiv,Not available,"Knightian Analysis of the VCG Mechanism in Unrestricted Combinatorial
Auctions",8547ec952285828b1f02882a9447a257,http://arxiv.org/abs/1403.6410v1
16287," A single advertisement often benefits many parties, for example, an ad for a
Samsung laptop benefits Microsoft. We study this phenomenon in search
advertising auctions and show that standard solutions, including the status quo
ignorance of mutual benefit and a benefit-aware Vickrey-Clarke-Groves
mechanism, perform poorly. In contrast, we show that an appropriate first-price
auction has nice equilibria in a single-slot ad auction --- all equilibria that
satisfy a natural cooperative envy-freeness condition select the
welfare-maximizing ad and satisfy an intuitive lower-bound on revenue.
",darrell hoy,,2012.0,,arXiv,Hoy2012,True,,arXiv,Not available,Coopetitive Ad Auctions,1a23cf53c3536a796744c7a2e33f2a56,http://arxiv.org/abs/1209.0832v1
16288," A single advertisement often benefits many parties, for example, an ad for a
Samsung laptop benefits Microsoft. We study this phenomenon in search
advertising auctions and show that standard solutions, including the status quo
ignorance of mutual benefit and a benefit-aware Vickrey-Clarke-Groves
mechanism, perform poorly. In contrast, we show that an appropriate first-price
auction has nice equilibria in a single-slot ad auction --- all equilibria that
satisfy a natural cooperative envy-freeness condition select the
welfare-maximizing ad and satisfy an intuitive lower-bound on revenue.
",kamal jain,,2012.0,,arXiv,Hoy2012,True,,arXiv,Not available,Coopetitive Ad Auctions,1a23cf53c3536a796744c7a2e33f2a56,http://arxiv.org/abs/1209.0832v1
16289," A single advertisement often benefits many parties, for example, an ad for a
Samsung laptop benefits Microsoft. We study this phenomenon in search
advertising auctions and show that standard solutions, including the status quo
ignorance of mutual benefit and a benefit-aware Vickrey-Clarke-Groves
mechanism, perform poorly. In contrast, we show that an appropriate first-price
auction has nice equilibria in a single-slot ad auction --- all equilibria that
satisfy a natural cooperative envy-freeness condition select the
welfare-maximizing ad and satisfy an intuitive lower-bound on revenue.
",christopher wilkens,,2012.0,,arXiv,Hoy2012,True,,arXiv,Not available,Coopetitive Ad Auctions,1a23cf53c3536a796744c7a2e33f2a56,http://arxiv.org/abs/1209.0832v1
16290," We construct prior-free auctions with constant-factor approximation
guarantees with ordered bidders, in both unlimited and limited supply settings.
We compare the expected revenue of our auctions on a bid vector to the monotone
price benchmark, the maximum revenue that can be obtained from a bid vector
using supply-respecting prices that are nonincreasing in the bidder ordering
and bounded above by the second-highest bid. As a consequence, our auctions are
simultaneously near-optimal in a wide range of Bayesian multi-unit
environments.
",elias koutsoupias,,2012.0,,arXiv,Koutsoupias2012,True,,arXiv,Not available,Near-Optimal Multi-Unit Auctions with Ordered Bidders,6e6fb8c370b57c5f4cd10ad848db2936,http://arxiv.org/abs/1212.2825v1
16291," We construct prior-free auctions with constant-factor approximation
guarantees with ordered bidders, in both unlimited and limited supply settings.
We compare the expected revenue of our auctions on a bid vector to the monotone
price benchmark, the maximum revenue that can be obtained from a bid vector
using supply-respecting prices that are nonincreasing in the bidder ordering
and bounded above by the second-highest bid. As a consequence, our auctions are
simultaneously near-optimal in a wide range of Bayesian multi-unit
environments.
",stefano leonardi,,2012.0,,arXiv,Koutsoupias2012,True,,arXiv,Not available,Near-Optimal Multi-Unit Auctions with Ordered Bidders,6e6fb8c370b57c5f4cd10ad848db2936,http://arxiv.org/abs/1212.2825v1
16292," We construct prior-free auctions with constant-factor approximation
guarantees with ordered bidders, in both unlimited and limited supply settings.
We compare the expected revenue of our auctions on a bid vector to the monotone
price benchmark, the maximum revenue that can be obtained from a bid vector
using supply-respecting prices that are nonincreasing in the bidder ordering
and bounded above by the second-highest bid. As a consequence, our auctions are
simultaneously near-optimal in a wide range of Bayesian multi-unit
environments.
",tim roughgarden,,2012.0,,arXiv,Koutsoupias2012,True,,arXiv,Not available,Near-Optimal Multi-Unit Auctions with Ordered Bidders,6e6fb8c370b57c5f4cd10ad848db2936,http://arxiv.org/abs/1212.2825v1
16293," Secure spectrum auctions can revolutionize the spectrum utilization of
cellular networks and satisfy the ever-increasing demand for resources. In this
paper, a multi-tier dynamic spectrum sharing system is studied for efficient
sharing of spectrum with commercial wireless system providers (WSPs), with an
emphasis on federal spectrum sharing. The proposed spectrum sharing system
optimizes usage of spectrum resources, manages intra-WSP and inter-WSP
interference and provides an essential level of security, privacy, and obfuscation
to enable the most efficient and reliable usage of the shared spectrum. It
features an intermediate spectrum auctioneer responsible for allocating
resources to commercial WSPs by running secure spectrum auctions. The proposed
secure spectrum auction, MTSSA, leverages Paillier cryptosystem to avoid
possible fraud and bid-rigging. Numerical simulations are provided to compare
the performance of MTSSA, in the considered spectrum sharing system, with other
spectrum auction mechanisms for realistic cellular systems.
",ahmed abdelhadi,,2015.0,10.1109/TCCN.2015.2488618,arXiv,Abdelhadi2015,True,,arXiv,Not available,"A Multi-Tier Wireless Spectrum Sharing System Leveraging Secure Spectrum
Auctions",6ea63f2f5d482f00687f5aee42f7ca68,http://arxiv.org/abs/1503.04899v2
16294," Secure spectrum auctions can revolutionize the spectrum utilization of
cellular networks and satisfy the ever-increasing demand for resources. In this
paper, a multi-tier dynamic spectrum sharing system is studied for efficient
sharing of spectrum with commercial wireless system providers (WSPs), with an
emphasis on federal spectrum sharing. The proposed spectrum sharing system
optimizes usage of spectrum resources, manages intra-WSP and inter-WSP
interference and provides an essential level of security, privacy, and obfuscation
to enable the most efficient and reliable usage of the shared spectrum. It
features an intermediate spectrum auctioneer responsible for allocating
resources to commercial WSPs by running secure spectrum auctions. The proposed
secure spectrum auction, MTSSA, leverages Paillier cryptosystem to avoid
possible fraud and bid-rigging. Numerical simulations are provided to compare
the performance of MTSSA, in the considered spectrum sharing system, with other
spectrum auction mechanisms for realistic cellular systems.
",haya shajaiah,,2015.0,10.1109/TCCN.2015.2488618,arXiv,Abdelhadi2015,True,,arXiv,Not available,"A Multi-Tier Wireless Spectrum Sharing System Leveraging Secure Spectrum
Auctions",6ea63f2f5d482f00687f5aee42f7ca68,http://arxiv.org/abs/1503.04899v2
16295," Secure spectrum auctions can revolutionize the spectrum utilization of
cellular networks and satisfy the ever-increasing demand for resources. In this
paper, a multi-tier dynamic spectrum sharing system is studied for efficient
sharing of spectrum with commercial wireless system providers (WSPs), with an
emphasis on federal spectrum sharing. The proposed spectrum sharing system
optimizes usage of spectrum resources, manages intra-WSP and inter-WSP
interference and provides an essential level of security, privacy, and obfuscation
to enable the most efficient and reliable usage of the shared spectrum. It
features an intermediate spectrum auctioneer responsible for allocating
resources to commercial WSPs by running secure spectrum auctions. The proposed
secure spectrum auction, MTSSA, leverages the Paillier cryptosystem to avoid
possible fraud and bid-rigging. Numerical simulations are provided to compare
the performance of MTSSA, in the considered spectrum sharing system, with other
spectrum auction mechanisms for realistic cellular systems.
",charles clancy,,2015.0,10.1109/TCCN.2015.2488618,arXiv,Abdelhadi2015,True,,arXiv,Not available,"A Multi-Tier Wireless Spectrum Sharing System Leveraging Secure Spectrum
Auctions",6ea63f2f5d482f00687f5aee42f7ca68,http://arxiv.org/abs/1503.04899v2
16296," In an earlier experiment, participants played a perfect information game
against a computer, which was programmed to deviate often from its backward
induction strategy right at the beginning of the game. Participants knew that
in each game, the computer was nevertheless optimizing against some belief
about the participant's future strategy. In the aggregate, it appeared that
participants applied forward induction. However, cardinal effects seemed to
play a role as well: a number of participants might have been trying to
maximize expected utility.
In order to find out how people really reason in such a game, we designed
centipede-like turn-taking games with new payoff structures in order to make
such cardinal effects less likely. We ran a new experiment with 50
participants, based on marble drop visualizations of these revised payoff
structures. After participants played 48 test games, we asked a number of
questions to gauge the participants' reasoning about their own and the
opponent's strategy at all decision nodes of a sample game. We also checked how
the verbalized strategies fit the actual choices they made at all their
decision points in the 48 test games.
Even though in the aggregate, participants in the new experiment still tend
to slightly favor the forward induction choice at their first decision node,
their verbalized strategies most often depend on their own attitudes towards
risk and those they assign to the computer opponent, sometimes in addition to
considerations about cooperativeness and competitiveness.
",rineke verbrugge,,2017.0,10.4204/EPTCS.251.19,"EPTCS 251, 2017, pp. 265-284",Ghosh2017,True,,arXiv,Not available,"What Drives People's Choices in Turn-Taking Games, if not Game-Theoretic
Rationality?",3832039bd19e3f2a603cb1dbea2d0585,http://arxiv.org/abs/1707.08749v1
16297," In many natural settings agents participate in multiple different auctions
that are not simultaneous. In such auctions, future opportunities affect
strategic considerations of the players. The goal of this paper is to develop a
quantitative understanding of outcomes of such sequential auctions. In earlier
work (Paes Leme et al. 2012) we initiated the study of the price of anarchy in
sequential auctions. We considered sequential first price auctions in the full
information model, where players are aware of all future opportunities, as well
as the valuation of all players. In this paper, we study efficiency in
sequential auctions in the Bayesian environment, relaxing the informational
assumption on the players. We focus on two environments, both studied in the
full information model in Paes Leme et al. 2012: matching markets and matroid
auctions. In the full information environment, a sequential first price cut
auction for matroid settings is efficient. In Bayesian environments this is no
longer the case, as we show using a simple example with three players. Our main
result is a bound of $1+\frac{e}{e-1}\approx 2.58$ on the price of anarchy in
both matroid auctions and single-value matching markets (even with correlated
types) and a bound of $2\frac{e}{e-1}\approx 3.16$ for general matching markets
with independent types. To bound the price of anarchy we need to consider
possible deviations at an equilibrium. In a sequential Bayesian environment the
effect of deviations is more complex than in one-shot games; early bids allow
others to infer information about the player's value. We create effective
deviations despite the presence of this difficulty by introducing a bluffing
technique of independent interest.
",vasilis syrgkanis,,2012.0,,arXiv,Syrgkanis2012,True,,arXiv,Not available,Bayesian Sequential Auctions,f4d6d2838cdc78bdbdd4f3b9f4370193,http://arxiv.org/abs/1206.4771v1
16298," In many natural settings agents participate in multiple different auctions
that are not simultaneous. In such auctions, future opportunities affect
strategic considerations of the players. The goal of this paper is to develop a
quantitative understanding of outcomes of such sequential auctions. In earlier
work (Paes Leme et al. 2012) we initiated the study of the price of anarchy in
sequential auctions. We considered sequential first price auctions in the full
information model, where players are aware of all future opportunities, as well
as the valuation of all players. In this paper, we study efficiency in
sequential auctions in the Bayesian environment, relaxing the informational
assumption on the players. We focus on two environments, both studied in the
full information model in Paes Leme et al. 2012: matching markets and matroid
auctions. In the full information environment, a sequential first price cut
auction for matroid settings is efficient. In Bayesian environments this is no
longer the case, as we show using a simple example with three players. Our main
result is a bound of $1+\frac{e}{e-1}\approx 2.58$ on the price of anarchy in
both matroid auctions and single-value matching markets (even with correlated
types) and a bound of $2\frac{e}{e-1}\approx 3.16$ for general matching markets
with independent types. To bound the price of anarchy we need to consider
possible deviations at an equilibrium. In a sequential Bayesian environment the
effect of deviations is more complex than in one-shot games; early bids allow
others to infer information about the player's value. We create effective
deviations despite the presence of this difficulty by introducing a bluffing
technique of independent interest.
",eva tardos,,2012.0,,arXiv,Syrgkanis2012,True,,arXiv,Not available,Bayesian Sequential Auctions,f4d6d2838cdc78bdbdd4f3b9f4370193,http://arxiv.org/abs/1206.4771v1
16299," All-pay auctions, a common mechanism for various human and agent
interactions, suffer, like many other mechanisms, from the possibility of
players' failure to participate in the auction. We model such failures and
fully characterize equilibrium for this class of games: we present a symmetric
equilibrium and show that under some conditions the equilibrium is unique. We
reveal various properties of the equilibrium, such as the lack of influence of
the most-likely-to-participate player on the behavior of the other players. We
perform this analysis with two scenarios: the sum-profit model, where the
auctioneer obtains the sum of all submitted bids, and the max-profit model of
crowdsourcing contests, where the auctioneer can only use the best submissions
and thus obtains only the winning bid.
Furthermore, we examine various methods of influencing the probability of
participation such as the effects of misreporting one's own probability of
participating, and how influencing another player's participation chances
changes the player's strategy.
",yoad lewenberg,,2017.0,,"IEEE Intelligent Systems, 2017",Lewenberg2017,True,,arXiv,Not available,Agent Failures in All-Pay Auctions,1f63cf331cdc8c78d8e119b59b1bb58f,http://arxiv.org/abs/1702.04138v1
16300," All-pay auctions, a common mechanism for various human and agent
interactions, suffer, like many other mechanisms, from the possibility of
players' failure to participate in the auction. We model such failures and
fully characterize equilibrium for this class of games: we present a symmetric
equilibrium and show that under some conditions the equilibrium is unique. We
reveal various properties of the equilibrium, such as the lack of influence of
the most-likely-to-participate player on the behavior of the other players. We
perform this analysis with two scenarios: the sum-profit model, where the
auctioneer obtains the sum of all submitted bids, and the max-profit model of
crowdsourcing contests, where the auctioneer can only use the best submissions
and thus obtains only the winning bid.
Furthermore, we examine various methods of influencing the probability of
participation such as the effects of misreporting one's own probability of
participating, and how influencing another player's participation chances
changes the player's strategy.
",omer lev,,2017.0,,"IEEE Intelligent Systems, 2017",Lewenberg2017,True,,arXiv,Not available,Agent Failures in All-Pay Auctions,1f63cf331cdc8c78d8e119b59b1bb58f,http://arxiv.org/abs/1702.04138v1
16301," All-pay auctions, a common mechanism for various human and agent
interactions, suffer, like many other mechanisms, from the possibility of
players' failure to participate in the auction. We model such failures and
fully characterize equilibrium for this class of games: we present a symmetric
equilibrium and show that under some conditions the equilibrium is unique. We
reveal various properties of the equilibrium, such as the lack of influence of
the most-likely-to-participate player on the behavior of the other players. We
perform this analysis with two scenarios: the sum-profit model, where the
auctioneer obtains the sum of all submitted bids, and the max-profit model of
crowdsourcing contests, where the auctioneer can only use the best submissions
and thus obtains only the winning bid.
Furthermore, we examine various methods of influencing the probability of
participation such as the effects of misreporting one's own probability of
participating, and how influencing another player's participation chances
changes the player's strategy.
",yoram bachrach,,2017.0,,"IEEE Intelligent Systems, 2017",Lewenberg2017,True,,arXiv,Not available,Agent Failures in All-Pay Auctions,1f63cf331cdc8c78d8e119b59b1bb58f,http://arxiv.org/abs/1702.04138v1
16302," All-pay auctions, a common mechanism for various human and agent
interactions, suffer, like many other mechanisms, from the possibility of
players' failure to participate in the auction. We model such failures and
fully characterize equilibrium for this class of games: we present a symmetric
equilibrium and show that under some conditions the equilibrium is unique. We
reveal various properties of the equilibrium, such as the lack of influence of
the most-likely-to-participate player on the behavior of the other players. We
perform this analysis with two scenarios: the sum-profit model, where the
auctioneer obtains the sum of all submitted bids, and the max-profit model of
crowdsourcing contests, where the auctioneer can only use the best submissions
and thus obtains only the winning bid.
Furthermore, we examine various methods of influencing the probability of
participation such as the effects of misreporting one's own probability of
participating, and how influencing another player's participation chances
changes the player's strategy.
",jeffrey rosenschein,,2017.0,,"IEEE Intelligent Systems, 2017",Lewenberg2017,True,,arXiv,Not available,Agent Failures in All-Pay Auctions,1f63cf331cdc8c78d8e119b59b1bb58f,http://arxiv.org/abs/1702.04138v1
16303," We propose a game theoretic framework for task allocation in mobile cloud
computing that corresponds to offloading of compute tasks to a group of nearby
mobile devices. Specifically, in our framework, a distributor node holds a
multidimensional auction for allocating the tasks of a job among nearby mobile
nodes based on their computational capabilities and also the cost of
computation at these nodes, with the goal of reducing the overall job
completion time. Our proposed auction also has the desired incentive
compatibility property that ensures that mobile devices truthfully reveal their
capabilities and costs and that those devices benefit from the task allocation.
To deal with node mobility, we perform multiple auctions over adaptive time
intervals. We develop a heuristic approach to dynamically find the best time
intervals between auctions to minimize unnecessary auctions and the
accompanying overheads. We evaluate our framework and methods using both real
world and synthetic mobility traces. Our evaluation results show that our game
theoretic framework improves the job completion time by a factor of 2-5 in
comparison to the time taken for executing the job locally, while minimizing
the number of auctions and the accompanying overheads. Our approach is also
profitable for the nearby nodes that execute the distributor's tasks, with these
nodes receiving a compensation higher than their actual costs.
",mojgan khaledi,,2016.0,,arXiv,Khaledi2016,True,,arXiv,Not available,Profitable Task Allocation in Mobile Cloud Computing,34411beade6c0e348ab24f31f40fdfec,http://arxiv.org/abs/1608.08521v1
16304," We propose a game theoretic framework for task allocation in mobile cloud
computing that corresponds to offloading of compute tasks to a group of nearby
mobile devices. Specifically, in our framework, a distributor node holds a
multidimensional auction for allocating the tasks of a job among nearby mobile
nodes based on their computational capabilities and also the cost of
computation at these nodes, with the goal of reducing the overall job
completion time. Our proposed auction also has the desired incentive
compatibility property that ensures that mobile devices truthfully reveal their
capabilities and costs and that those devices benefit from the task allocation.
To deal with node mobility, we perform multiple auctions over adaptive time
intervals. We develop a heuristic approach to dynamically find the best time
intervals between auctions to minimize unnecessary auctions and the
accompanying overheads. We evaluate our framework and methods using both real
world and synthetic mobility traces. Our evaluation results show that our game
theoretic framework improves the job completion time by a factor of 2-5 in
comparison to the time taken for executing the job locally, while minimizing
the number of auctions and the accompanying overheads. Our approach is also
profitable for the nearby nodes that execute the distributor's tasks, with these
nodes receiving a compensation higher than their actual costs.
",mehrdad khaledi,,2016.0,,arXiv,Khaledi2016,True,,arXiv,Not available,Profitable Task Allocation in Mobile Cloud Computing,34411beade6c0e348ab24f31f40fdfec,http://arxiv.org/abs/1608.08521v1
16305," We propose a game theoretic framework for task allocation in mobile cloud
computing that corresponds to offloading of compute tasks to a group of nearby
mobile devices. Specifically, in our framework, a distributor node holds a
multidimensional auction for allocating the tasks of a job among nearby mobile
nodes based on their computational capabilities and also the cost of
computation at these nodes, with the goal of reducing the overall job
completion time. Our proposed auction also has the desired incentive
compatibility property that ensures that mobile devices truthfully reveal their
capabilities and costs and that those devices benefit from the task allocation.
To deal with node mobility, we perform multiple auctions over adaptive time
intervals. We develop a heuristic approach to dynamically find the best time
intervals between auctions to minimize unnecessary auctions and the
accompanying overheads. We evaluate our framework and methods using both real
world and synthetic mobility traces. Our evaluation results show that our game
theoretic framework improves the job completion time by a factor of 2-5 in
comparison to the time taken for executing the job locally, while minimizing
the number of auctions and the accompanying overheads. Our approach is also
profitable for the nearby nodes that execute the distributor's tasks, with these
nodes receiving a compensation higher than their actual costs.
",sneha kasera,,2016.0,,arXiv,Khaledi2016,True,,arXiv,Not available,Profitable Task Allocation in Mobile Cloud Computing,34411beade6c0e348ab24f31f40fdfec,http://arxiv.org/abs/1608.08521v1
16307," In an earlier experiment, participants played a perfect information game
against a computer, which was programmed to deviate often from its backward
induction strategy right at the beginning of the game. Participants knew that
in each game, the computer was nevertheless optimizing against some belief
about the participant's future strategy. In the aggregate, it appeared that
participants applied forward induction. However, cardinal effects seemed to
play a role as well: a number of participants might have been trying to
maximize expected utility.
In order to find out how people really reason in such a game, we designed
centipede-like turn-taking games with new payoff structures in order to make
such cardinal effects less likely. We ran a new experiment with 50
participants, based on marble drop visualizations of these revised payoff
structures. After participants played 48 test games, we asked a number of
questions to gauge the participants' reasoning about their own and the
opponent's strategy at all decision nodes of a sample game. We also checked how
the verbalized strategies fit the actual choices they made at all their
decision points in the 48 test games.
Even though in the aggregate, participants in the new experiment still tend
to slightly favor the forward induction choice at their first decision node,
their verbalized strategies most often depend on their own attitudes towards
risk and those they assign to the computer opponent, sometimes in addition to
considerations about cooperativeness and competitiveness.
",harmen weerd,,2017.0,10.4204/EPTCS.251.19,"EPTCS 251, 2017, pp. 265-284",Ghosh2017,True,,arXiv,Not available,"What Drives People's Choices in Turn-Taking Games, if not Game-Theoretic
Rationality?",3832039bd19e3f2a603cb1dbea2d0585,http://arxiv.org/abs/1707.08749v1
16309," We present a scheme for playing quantum repeated 2x2 games based on
Marinatto and Weber's approach to quantum games. As a potential application, we
study the twice-repeated Prisoner's Dilemma game. We show that results not
available in the classical game can be obtained when the game is played in the
quantum way. Before we present our idea, we comment on the previous scheme for
playing quantum repeated games.
",piotr frackiewicz,,2011.0,10.1088/1751-8113/45/8/085307,arXiv,Frackiewicz2011,True,,arXiv,Not available,Quantum repeated games revisited,a091a5871a50766661f22ea80a7606fb,http://arxiv.org/abs/1109.3753v1
16310," We study the problem of setting a price for a potential buyer with a
valuation drawn from an unknown distribution $D$. The seller has ""data"" about
$D$ in the form of $m \ge 1$ i.i.d. samples, and the algorithmic challenge is
to use these samples to obtain expected revenue as close as possible to what
could be achieved with advance knowledge of $D$.
Our first set of results quantifies the number of samples $m$ that are
necessary and sufficient to obtain a $(1-\epsilon)$-approximation. For example,
for an unknown distribution that satisfies the monotone hazard rate (MHR)
condition, we prove that $\tilde{\Theta}(\epsilon^{-3/2})$ samples are
necessary and sufficient. Remarkably, this is fewer samples than is necessary
to accurately estimate the expected revenue obtained by even a single reserve
price. We also prove essentially tight sample complexity bounds for regular
distributions, bounded-support distributions, and a wide class of irregular
distributions. Our lower bound approach borrows tools from differential privacy
and information theory, and we believe it could find further applications in
auction theory.
Our second set of results considers the single-sample case. For regular
distributions, we prove that no pricing strategy is better than
$\tfrac{1}{2}$-approximate, and this is optimal by the Bulow-Klemperer theorem.
For MHR distributions, we show how to do better: we give a simple pricing
strategy that guarantees expected revenue at least $0.589$ times the maximum
possible. We also prove that no pricing strategy achieves an approximation
guarantee better than $\frac{e}{4} \approx .68$.
",zhiyi huang,,2014.0,,arXiv,Huang2014,True,,arXiv,Not available,Making the Most of Your Samples,0beaa9211ab7397c119a3299ad8c7696,http://arxiv.org/abs/1407.2479v2
16311," We study the problem of setting a price for a potential buyer with a
valuation drawn from an unknown distribution $D$. The seller has ""data"" about
$D$ in the form of $m \ge 1$ i.i.d. samples, and the algorithmic challenge is
to use these samples to obtain expected revenue as close as possible to what
could be achieved with advance knowledge of $D$.
Our first set of results quantifies the number of samples $m$ that are
necessary and sufficient to obtain a $(1-\epsilon)$-approximation. For example,
for an unknown distribution that satisfies the monotone hazard rate (MHR)
condition, we prove that $\tilde{\Theta}(\epsilon^{-3/2})$ samples are
necessary and sufficient. Remarkably, this is fewer samples than is necessary
to accurately estimate the expected revenue obtained by even a single reserve
price. We also prove essentially tight sample complexity bounds for regular
distributions, bounded-support distributions, and a wide class of irregular
distributions. Our lower bound approach borrows tools from differential privacy
and information theory, and we believe it could find further applications in
auction theory.
Our second set of results considers the single-sample case. For regular
distributions, we prove that no pricing strategy is better than
$\tfrac{1}{2}$-approximate, and this is optimal by the Bulow-Klemperer theorem.
For MHR distributions, we show how to do better: we give a simple pricing
strategy that guarantees expected revenue at least $0.589$ times the maximum
possible. We also prove that no pricing strategy achieves an approximation
guarantee better than $\frac{e}{4} \approx .68$.
",yishay mansour,,2014.0,,arXiv,Huang2014,True,,arXiv,Not available,Making the Most of Your Samples,0beaa9211ab7397c119a3299ad8c7696,http://arxiv.org/abs/1407.2479v2
16312," We study the problem of setting a price for a potential buyer with a
valuation drawn from an unknown distribution $D$. The seller has ""data"" about
$D$ in the form of $m \ge 1$ i.i.d. samples, and the algorithmic challenge is
to use these samples to obtain expected revenue as close as possible to what
could be achieved with advance knowledge of $D$.
Our first set of results quantifies the number of samples $m$ that are
necessary and sufficient to obtain a $(1-\epsilon)$-approximation. For example,
for an unknown distribution that satisfies the monotone hazard rate (MHR)
condition, we prove that $\tilde{\Theta}(\epsilon^{-3/2})$ samples are
necessary and sufficient. Remarkably, this is fewer samples than is necessary
to accurately estimate the expected revenue obtained by even a single reserve
price. We also prove essentially tight sample complexity bounds for regular
distributions, bounded-support distributions, and a wide class of irregular
distributions. Our lower bound approach borrows tools from differential privacy
and information theory, and we believe it could find further applications in
auction theory.
Our second set of results considers the single-sample case. For regular
distributions, we prove that no pricing strategy is better than
$\tfrac{1}{2}$-approximate, and this is optimal by the Bulow-Klemperer theorem.
For MHR distributions, we show how to do better: we give a simple pricing
strategy that guarantees expected revenue at least $0.589$ times the maximum
possible. We also prove that no pricing strategy achieves an approximation
guarantee better than $\frac{e}{4} \approx .68$.
",tim roughgarden,,2014.0,,arXiv,Huang2014,True,,arXiv,Not available,Making the Most of Your Samples,0beaa9211ab7397c119a3299ad8c7696,http://arxiv.org/abs/1407.2479v2
16313," In Zeng et al. [Fluct. Noise Lett. 7 (2007) L439--L447] the analysis of the
lowest unique positive integer game is simplified by some reasonable
assumptions that make the problem tractable for arbitrary numbers of players.
However, here we show that the solution obtained for rational players is not a
Nash equilibrium and that a rational utility maximizer with full computational
capability would arrive at a solution with a superior expected payoff. An exact
solution is presented for the three- and four-player cases and an approximate
solution for an arbitrary number of players.
",adrian flitney,,2008.0,,Fluct. Noise Lett. 8 (2008) C1-C4,Flitney2008,True,,arXiv,Not available,"Comments on ""Reverse auction: the lowest positive integer game""",25fd8ccd4c07b9b0d8fb8e37e9f30047,http://arxiv.org/abs/0801.1535v1
16314," Second-price auctions with reserve play a critical role for modern search
engines and popular online sites, since the revenue of these companies often
directly depends on the outcome of such auctions. The choice of the reserve
price is the main mechanism through which the auction revenue can be influenced
in these electronic markets. We cast the problem of selecting the reserve price
to optimize revenue as a learning problem and present a full theoretical
analysis dealing with the complex properties of the corresponding loss
function. We further give novel algorithms for solving this problem and report
the results of several experiments in both synthetic and real data
demonstrating their effectiveness.
",mehryar mohri,,2013.0,,arXiv,Mohri2013,True,,arXiv,Not available,"Learning Theory and Algorithms for Revenue Optimization in Second-Price
Auctions with Reserve",065e83e802149e3e0fbaa26ec9a44084,http://arxiv.org/abs/1310.5665v3
16315," Second-price auctions with reserve play a critical role for modern search
engines and popular online sites, since the revenue of these companies often
directly depends on the outcome of such auctions. The choice of the reserve
price is the main mechanism through which the auction revenue can be influenced
in these electronic markets. We cast the problem of selecting the reserve price
to optimize revenue as a learning problem and present a full theoretical
analysis dealing with the complex properties of the corresponding loss
function. We further give novel algorithms for solving this problem and report
the results of several experiments in both synthetic and real data
demonstrating their effectiveness.
",andres medina,,2013.0,,arXiv,Mohri2013,True,,arXiv,Not available,"Learning Theory and Algorithms for Revenue Optimization in Second-Price
Auctions with Reserve",065e83e802149e3e0fbaa26ec9a44084,http://arxiv.org/abs/1310.5665v3
16316," The call auction is a widely used trading mechanism, especially during the
opening and closing periods of financial markets. In this paper, we study a
standard call auction problem where orders are submitted according to Poisson
processes, with random prices distributed according to a general distribution,
and may be cancelled at any time. We compute the analytical expressions of the
distributions of the traded volume, of the lower and upper bounds of the
clearing prices, and of the price range of these possible clearing prices of
the call auction. Using results from the theory of order statistics and a
theorem on the limit of sequences of random variables with independent random
indices, we derive the weak limits of all these distributions. In this setting,
traded volume and bounds of the clearing prices are found to be asymptotically
normal, while the clearing price range is asymptotically exponential. All the
parameters of these distributions are explicitly derived as functions of the
parameters of the incoming orders' flows.
",ioane toke,,2014.0,,arXiv,Toke2014,True,,arXiv,Not available,Exact and asymptotic solutions of the call auction problem,a8c1fd06545ba8acb94fe635d3abcba7,http://arxiv.org/abs/1407.4512v2
16317," We develop a novel optimization model to maximize the profit of a Demand-Side
Platform (DSP) while ensuring that the budget utilization preferences of the
DSP's advertiser clients are adequately met. Our model is highly flexible and
can be applied in a Real-Time Bidding environment (RTB) with arbitrary auction
types, e.g., both first and second price auctions. Our proposed formulation
leads to a non-convex optimization problem due to the joint optimization over
both impression allocation and bid price decisions. Using Fenchel duality
theory, we construct a dual problem that is convex and can be solved
efficiently to obtain feasible bidding prices and allocation variables that can
be deployed in a RTB setting. With a few minimal additional assumptions on the
properties of the auctions, we demonstrate theoretically that our
computationally efficient procedure based on convex optimization principles is
guaranteed to deliver a globally optimal solution. We conduct experiments using
data from a real DSP to validate our theoretical findings and to demonstrate
that our method successfully trades off DSP profitability against budget
utilization in a simulated online environment.
",alfonso lobos,,2018.0,,arXiv,Lobos2018,True,,arXiv,Not available,"Optimal Bidding, Allocation and Budget Spending for a Demand Side
Platform Under Many Auction Types",337486461d7ca1a9c4f51c638cb6f80f,http://arxiv.org/abs/1805.11645v1
16318," We introduce a framework for studying the effect of cooperation on the
quality of outcomes in utility games. Our framework is a coalitional analog of
the smoothness framework of non-cooperative games. Coalitional smoothness
implies bounds on the strong price of anarchy, the loss of quality of
coalitionally stable outcomes, as well as bounds on coalitional versions of
coarse correlated equilibria and sink equilibria, which we define as
out-of-equilibrium myopic behavior as determined by a natural coalitional
version of best-response dynamics.
Our coalitional smoothness framework captures existing results bounding the
strong price of anarchy of network design games. We show that in any monotone
utility-maximization game, if each player's utility is at least his marginal
contribution to the welfare, then the strong price of anarchy is at most 2.
This captures a broad class of games, including games with a very high price of
anarchy. Additionally, we show that in potential games the strong price of
anarchy is close to the price of stability, the quality of the best Nash
equilibrium.
",yoram bachrach,,2013.0,,arXiv,Bachrach2013,True,,arXiv,Not available,Strong Price of Anarchy and Coalitional Dynamics,0af31fdf9e7e188adbdb7f97879422ab,http://arxiv.org/abs/1307.2537v1
16319," We develop a novel optimization model to maximize the profit of a Demand-Side
Platform (DSP) while ensuring that the budget utilization preferences of the
DSP's advertiser clients are adequately met. Our model is highly flexible and
can be applied in a Real-Time Bidding environment (RTB) with arbitrary auction
types, e.g., both first and second price auctions. Our proposed formulation
leads to a non-convex optimization problem due to the joint optimization over
both impression allocation and bid price decisions. Using Fenchel duality
theory, we construct a dual problem that is convex and can be solved
efficiently to obtain feasible bidding prices and allocation variables that can
be deployed in an RTB setting. With a few minimal additional assumptions on the
properties of the auctions, we demonstrate theoretically that our
computationally efficient procedure based on convex optimization principles is
guaranteed to deliver a globally optimal solution. We conduct experiments using
data from a real DSP to validate our theoretical findings and to demonstrate
that our method successfully trades off between DSP profitability and budget
utilization in a simulated online environment.
",paul grigas,,2018.0,,arXiv,Lobos2018,True,,arXiv,Not available,"Optimal Bidding, Allocation and Budget Spending for a Demand Side
Platform Under Many Auction Types",337486461d7ca1a9c4f51c638cb6f80f,http://arxiv.org/abs/1805.11645v1
16320," We develop a novel optimization model to maximize the profit of a Demand-Side
Platform (DSP) while ensuring that the budget utilization preferences of the
DSP's advertiser clients are adequately met. Our model is highly flexible and
can be applied in a Real-Time Bidding environment (RTB) with arbitrary auction
types, e.g., both first and second price auctions. Our proposed formulation
leads to a non-convex optimization problem due to the joint optimization over
both impression allocation and bid price decisions. Using Fenchel duality
theory, we construct a dual problem that is convex and can be solved
efficiently to obtain feasible bidding prices and allocation variables that can
be deployed in an RTB setting. With a few minimal additional assumptions on the
properties of the auctions, we demonstrate theoretically that our
computationally efficient procedure based on convex optimization principles is
guaranteed to deliver a globally optimal solution. We conduct experiments using
data from a real DSP to validate our theoretical findings and to demonstrate
that our method successfully trades off between DSP profitability and budget
utilization in a simulated online environment.
",zheng wen,,2018.0,,arXiv,Lobos2018,True,,arXiv,Not available,"Optimal Bidding, Allocation and Budget Spending for a Demand Side
Platform Under Many Auction Types",337486461d7ca1a9c4f51c638cb6f80f,http://arxiv.org/abs/1805.11645v1
16321," We develop a novel optimization model to maximize the profit of a Demand-Side
Platform (DSP) while ensuring that the budget utilization preferences of the
DSP's advertiser clients are adequately met. Our model is highly flexible and
can be applied in a Real-Time Bidding environment (RTB) with arbitrary auction
types, e.g., both first and second price auctions. Our proposed formulation
leads to a non-convex optimization problem due to the joint optimization over
both impression allocation and bid price decisions. Using Fenchel duality
theory, we construct a dual problem that is convex and can be solved
efficiently to obtain feasible bidding prices and allocation variables that can
be deployed in an RTB setting. With a few minimal additional assumptions on the
properties of the auctions, we demonstrate theoretically that our
computationally efficient procedure based on convex optimization principles is
guaranteed to deliver a globally optimal solution. We conduct experiments using
data from a real DSP to validate our theoretical findings and to demonstrate
that our method successfully trades off between DSP profitability and budget
utilization in a simulated online environment.
",kuang-chih lee,,2018.0,,arXiv,Lobos2018,True,,arXiv,Not available,"Optimal Bidding, Allocation and Budget Spending for a Demand Side
Platform Under Many Auction Types",337486461d7ca1a9c4f51c638cb6f80f,http://arxiv.org/abs/1805.11645v1
16322," Quantization has become a new way to study classical game theory since
quantum strategies and quantum games were proposed. In previous studies, many
typical game models, such as the prisoner's dilemma, the battle of the sexes,
and the Hawk-Dove game, have been investigated using quantization approaches.
In this paper, several game models of opinion formation are quantized based on
the Marinatto-Weber quantum game scheme, a frequently used scheme for
converting classical games to quantum versions. Our results show that
quantization can change the properties of some classical opinion formation
game models in fascinating ways, generating win-win outcomes.
",xinyang deng,,2015.0,10.1209/0295-5075/114/50012,arXiv,Deng2015,True,,arXiv,Not available,"Quantum games of opinion formation based on the Marinatto-Weber quantum
game scheme",637025fc7b8ab599113a4a838497b62f,http://arxiv.org/abs/1507.07966v1
16323," Quantization has become a new way to study classical game theory since
quantum strategies and quantum games were proposed. In previous studies, many
typical game models, such as the prisoner's dilemma, the battle of the sexes,
and the Hawk-Dove game, have been investigated using quantization approaches.
In this paper, several game models of opinion formation are quantized based on
the Marinatto-Weber quantum game scheme, a frequently used scheme for
converting classical games to quantum versions. Our results show that
quantization can change the properties of some classical opinion formation
game models in fascinating ways, generating win-win outcomes.
",yong deng,,2015.0,10.1209/0295-5075/114/50012,arXiv,Deng2015,True,,arXiv,Not available,"Quantum games of opinion formation based on the Marinatto-Weber quantum
game scheme",637025fc7b8ab599113a4a838497b62f,http://arxiv.org/abs/1507.07966v1
16324," Quantization has become a new way to study classical game theory since
quantum strategies and quantum games were proposed. In previous studies, many
typical game models, such as the prisoner's dilemma, the battle of the sexes,
and the Hawk-Dove game, have been investigated using quantization approaches.
In this paper, several game models of opinion formation are quantized based on
the Marinatto-Weber quantum game scheme, a frequently used scheme for
converting classical games to quantum versions. Our results show that
quantization can change the properties of some classical opinion formation
game models in fascinating ways, generating win-win outcomes.
",qi liu,,2015.0,10.1209/0295-5075/114/50012,arXiv,Deng2015,True,,arXiv,Not available,"Quantum games of opinion formation based on the Marinatto-Weber quantum
game scheme",637025fc7b8ab599113a4a838497b62f,http://arxiv.org/abs/1507.07966v1
16325," Quantization has become a new way to study classical game theory since
quantum strategies and quantum games were proposed. In previous studies, many
typical game models, such as the prisoner's dilemma, the battle of the sexes,
and the Hawk-Dove game, have been investigated using quantization approaches.
In this paper, several game models of opinion formation are quantized based on
the Marinatto-Weber quantum game scheme, a frequently used scheme for
converting classical games to quantum versions. Our results show that
quantization can change the properties of some classical opinion formation
game models in fascinating ways, generating win-win outcomes.
",zhen wang,,2015.0,10.1209/0295-5075/114/50012,arXiv,Deng2015,True,,arXiv,Not available,"Quantum games of opinion formation based on the Marinatto-Weber quantum
game scheme",637025fc7b8ab599113a4a838497b62f,http://arxiv.org/abs/1507.07966v1
16326," Using duality theory techniques we derive simple, closed-form formulas for
bounding the optimal revenue of a monopolist selling many heterogeneous goods,
in the case where the buyer's valuations for the items come i.i.d. from a
uniform distribution and in the case where they follow independent (but not
necessarily identical) exponential distributions. We apply this to obtain, in
both of these settings, specific performance guarantees as functions of the
number of items $m$, for the simple deterministic selling mechanisms studied by
Hart and Nisan [EC 2012], namely the one that sells the items separately and
the one that offers them all in a single bundle.
We also propose and study the performance of a natural randomized mechanism
for exponential valuations, called Proportional. As an interesting corollary,
for the special case where the exponential distributions are also identical, we
can derive that offering the goods in a single full bundle is the optimal
selling mechanism for any number of items. To our knowledge, this is the first
result of its kind: finding a revenue-maximizing auction in an additive setting
with arbitrarily many goods.
",yiannis giannakopoulos,,2014.0,10.1016/j.tcs.2015.03.010,Theoretical Computer Science 581 (2015) 83-96,Giannakopoulos2014,True,,arXiv,Not available,Bounding the Optimal Revenue of Selling Multiple Goods,3a264e4fc56b59693f4bf20f31cbdefe,http://arxiv.org/abs/1404.2832v6
16327," The design of profit-maximizing multi-item mechanisms is a notoriously
challenging problem with tremendous real-world impact. The mechanism designer's
goal is to field a mechanism with high expected profit on the distribution over
buyers' values. Unfortunately, if the set of mechanisms he optimizes over is
complex, a mechanism may have high empirical profit over a small set of samples
but low expected profit. This raises the question, how many samples are
sufficient to ensure that the empirically optimal mechanism is nearly optimal
in expectation? We uncover structure shared by a myriad of pricing, auction,
and lottery mechanisms that allows us to prove strong sample complexity bounds:
for any set of buyers' values, profit is a piecewise linear function of the
mechanism's parameters. We prove new bounds for mechanism classes not yet
studied in the sample-based mechanism design literature and match or improve
over the best known guarantees for many classes. The profit functions we study
are significantly different from well-understood functions in machine learning,
so our analysis requires a sharp understanding of the interplay between
mechanism parameters and buyer values. We strengthen our main results with
data-dependent bounds when the distribution over buyers' values is
""well-behaved."" Finally, we investigate a fundamental tradeoff in sample-based
mechanism design: complex mechanisms often have higher profit than simple
mechanisms, but more samples are required to ensure that empirical and expected
profit are close. We provide techniques for optimizing this tradeoff.
",maria-florina balcan,,2017.0,,arXiv,Balcan2017,True,,arXiv,Not available,A General Theory of Sample Complexity for Multi-Item Profit Maximization,15ca727cd6f087f396938b765a22c74d,http://arxiv.org/abs/1705.00243v4
16328," The design of profit-maximizing multi-item mechanisms is a notoriously
challenging problem with tremendous real-world impact. The mechanism designer's
goal is to field a mechanism with high expected profit on the distribution over
buyers' values. Unfortunately, if the set of mechanisms he optimizes over is
complex, a mechanism may have high empirical profit over a small set of samples
but low expected profit. This raises the question, how many samples are
sufficient to ensure that the empirically optimal mechanism is nearly optimal
in expectation? We uncover structure shared by a myriad of pricing, auction,
and lottery mechanisms that allows us to prove strong sample complexity bounds:
for any set of buyers' values, profit is a piecewise linear function of the
mechanism's parameters. We prove new bounds for mechanism classes not yet
studied in the sample-based mechanism design literature and match or improve
over the best known guarantees for many classes. The profit functions we study
are significantly different from well-understood functions in machine learning,
so our analysis requires a sharp understanding of the interplay between
mechanism parameters and buyer values. We strengthen our main results with
data-dependent bounds when the distribution over buyers' values is
""well-behaved."" Finally, we investigate a fundamental tradeoff in sample-based
mechanism design: complex mechanisms often have higher profit than simple
mechanisms, but more samples are required to ensure that empirical and expected
profit are close. We provide techniques for optimizing this tradeoff.
",tuomas sandholm,,2017.0,,arXiv,Balcan2017,True,,arXiv,Not available,A General Theory of Sample Complexity for Multi-Item Profit Maximization,15ca727cd6f087f396938b765a22c74d,http://arxiv.org/abs/1705.00243v4
16329," We introduce a framework for studying the effect of cooperation on the
quality of outcomes in utility games. Our framework is a coalitional analog of
the smoothness framework of non-cooperative games. Coalitional smoothness
implies bounds on the strong price of anarchy, the loss of quality of
coalitionally stable outcomes, as well as bounds on coalitional versions of
coarse correlated equilibria and sink equilibria, which we define as
out-of-equilibrium myopic behavior as determined by a natural coalitional
version of best-response dynamics.
Our coalitional smoothness framework captures existing results bounding the
strong price of anarchy of network design games. We show that in any monotone
utility-maximization game, if each player's utility is at least his marginal
contribution to the welfare, then the strong price of anarchy is at most 2.
This captures a broad class of games, including games with a very high price of
anarchy. Additionally, we show that in potential games the strong price of
anarchy is close to the price of stability, the quality of the best Nash
equilibrium.
",vasilis syrgkanis,,2013.0,,arXiv,Bachrach2013,True,,arXiv,Not available,Strong Price of Anarchy and Coalitional Dynamics,0af31fdf9e7e188adbdb7f97879422ab,http://arxiv.org/abs/1307.2537v1
16330," The design of profit-maximizing multi-item mechanisms is a notoriously
challenging problem with tremendous real-world impact. The mechanism designer's
goal is to field a mechanism with high expected profit on the distribution over
buyers' values. Unfortunately, if the set of mechanisms he optimizes over is
complex, a mechanism may have high empirical profit over a small set of samples
but low expected profit. This raises the question, how many samples are
sufficient to ensure that the empirically optimal mechanism is nearly optimal
in expectation? We uncover structure shared by a myriad of pricing, auction,
and lottery mechanisms that allows us to prove strong sample complexity bounds:
for any set of buyers' values, profit is a piecewise linear function of the
mechanism's parameters. We prove new bounds for mechanism classes not yet
studied in the sample-based mechanism design literature and match or improve
over the best known guarantees for many classes. The profit functions we study
are significantly different from well-understood functions in machine learning,
so our analysis requires a sharp understanding of the interplay between
mechanism parameters and buyer values. We strengthen our main results with
data-dependent bounds when the distribution over buyers' values is
""well-behaved."" Finally, we investigate a fundamental tradeoff in sample-based
mechanism design: complex mechanisms often have higher profit than simple
mechanisms, but more samples are required to ensure that empirical and expected
profit are close. We provide techniques for optimizing this tradeoff.
",ellen vitercik,,2017.0,,arXiv,Balcan2017,True,,arXiv,Not available,A General Theory of Sample Complexity for Multi-Item Profit Maximization,15ca727cd6f087f396938b765a22c74d,http://arxiv.org/abs/1705.00243v4
16331," As the number of resources on chip multiprocessors (CMPs) increases,
the complexity of how to best allocate these resources grows drastically,
because the higher number of applications makes the interactions and impacts
of the various memory levels more complex. Moreover, selecting the objective
function that defines what \enquote{best} means for all applications is
challenging. Memory-level parallelism (MLP) aware replacement algorithms in
CMPs try to maximize overall system performance or to equalize each
application's performance degradation due to sharing. However, depending on
the selected \enquote{performance} metric, these algorithms cannot be
implemented efficiently, because such centralized approaches usually need
further information about each application's needs. In this paper, we propose
a contention-aware game-theoretic resource management approach (CARMA) that
uses a market auction mechanism to find an optimal strategy for each
application in a resource competition game. The applications learn through
repeated interactions to choose their actions over the shared resources.
Specifically, we consider two cases: (i) a cache competition game, and (ii) a
main processor and co-processor congestion game. We enforce costs for each
resource and derive a bidding strategy. A thorough evaluation of the proposed
approach shows that our distributed allocation is scalable and outperforms
static and traditional approaches.
",farshid farhat,,2017.0,,arXiv,Farhat2017,True,,arXiv,Not available,"CARMA: Contention-aware Auction-based Resource Management in
Architecture",e96a94cf2967d68de4e7a03781cd9312,http://arxiv.org/abs/1710.00073v4
16332," As the number of resources on chip multiprocessors (CMPs) increases,
the complexity of how to best allocate these resources grows drastically,
because the higher number of applications makes the interactions and impacts
of the various memory levels more complex. Moreover, selecting the objective
function that defines what \enquote{best} means for all applications is
challenging. Memory-level parallelism (MLP) aware replacement algorithms in
CMPs try to maximize overall system performance or to equalize each
application's performance degradation due to sharing. However, depending on
the selected \enquote{performance} metric, these algorithms cannot be
implemented efficiently, because such centralized approaches usually need
further information about each application's needs. In this paper, we propose
a contention-aware game-theoretic resource management approach (CARMA) that
uses a market auction mechanism to find an optimal strategy for each
application in a resource competition game. The applications learn through
repeated interactions to choose their actions over the shared resources.
Specifically, we consider two cases: (i) a cache competition game, and (ii) a
main processor and co-processor congestion game. We enforce costs for each
resource and derive a bidding strategy. A thorough evaluation of the proposed
approach shows that our distributed allocation is scalable and outperforms
static and traditional approaches.
",diman tootaghaj,,2017.0,,arXiv,Farhat2017,True,,arXiv,Not available,"CARMA: Contention-aware Auction-based Resource Management in
Architecture",e96a94cf2967d68de4e7a03781cd9312,http://arxiv.org/abs/1710.00073v4
16333," We provide some examples showing how game-theoretic arguments can be used in
computability theory and algorithmic information theory: unique numbering
theorem (Friedberg), the gap between conditional complexity and total
conditional complexity, Epstein--Levin theorem and some (yet unpublished)
result of Muchnik and Vyugin.
",andrej muchnik,,2012.0,,arXiv,Muchnik2012,True,,arXiv,Not available,"Game arguments in computability theory and algorithmic information
theory",e2d25a25f68f039d6ae204ee42b37076,http://arxiv.org/abs/1204.0198v4
16334," We provide some examples showing how game-theoretic arguments can be used in
computability theory and algorithmic information theory: unique numbering
theorem (Friedberg), the gap between conditional complexity and total
conditional complexity, Epstein--Levin theorem and some (yet unpublished)
result of Muchnik and Vyugin.
",alexander shen,,2012.0,,arXiv,Muchnik2012,True,,arXiv,Not available,"Game arguments in computability theory and algorithmic information
theory",e2d25a25f68f039d6ae204ee42b37076,http://arxiv.org/abs/1204.0198v4
16335," We provide some examples showing how game-theoretic arguments can be used in
computability theory and algorithmic information theory: unique numbering
theorem (Friedberg), the gap between conditional complexity and total
conditional complexity, Epstein--Levin theorem and some (yet unpublished)
result of Muchnik and Vyugin.
",mikhail vyugin,,2012.0,,arXiv,Muchnik2012,True,,arXiv,Not available,"Game arguments in computability theory and algorithmic information
theory",e2d25a25f68f039d6ae204ee42b37076,http://arxiv.org/abs/1204.0198v4
16336," We consider monotonicity problems for graph searching games. Variants of
these games - defined by the type of moves allowed for the players - have been
found to be closely connected to graph decompositions and associated width
measures such as path- or tree-width. Of particular interest is the question
whether these games are monotone, i.e. whether the cops can catch a robber
without ever allowing the robber to reach positions that have been cleared
before. The monotonicity problem for graph searching games has been intensely
studied in the literature, but for two types of games the problem was left
unresolved. These are the games on digraphs where the robber is invisible and
lazy or visible and fast. In this paper, we solve the problems by giving
examples showing that both types of games are non-monotone. Graph searching
games on digraphs are closely related to recent proposals for digraph
decompositions generalising tree-width to directed graphs. These proposals have
partly been motivated by attempts to develop a structure theory for digraphs
similar to the graph minor theory developed by Robertson and Seymour for
undirected graphs, and partly by the immense number of algorithmic results
using tree-width of undirected graphs and the hope that part of this success
might be reproducible on digraphs using a directed tree-width. Unfortunately
the number of applications for the digraph measures introduced so far is still
small. We therefore explore the limits of the algorithmic applicability of
digraph decompositions. In particular, we show that various natural candidates
for problems that might benefit from digraphs having small directed tree-width
remain NP-complete even on almost acyclic graphs.
",stephan kreutzer,,2008.0,,arXiv,Kreutzer2008,True,,arXiv,Not available,Digraph Decompositions and Monotonicity in Digraph Searching,34c7d9b8dbed273404b8ca711b6c0292,http://arxiv.org/abs/0802.2228v1
16337," We consider monotonicity problems for graph searching games. Variants of
these games - defined by the type of moves allowed for the players - have been
found to be closely connected to graph decompositions and associated width
measures such as path- or tree-width. Of particular interest is the question
whether these games are monotone, i.e. whether the cops can catch a robber
without ever allowing the robber to reach positions that have been cleared
before. The monotonicity problem for graph searching games has been intensely
studied in the literature, but for two types of games the problem was left
unresolved. These are the games on digraphs where the robber is invisible and
lazy or visible and fast. In this paper, we solve the problems by giving
examples showing that both types of games are non-monotone. Graph searching
games on digraphs are closely related to recent proposals for digraph
decompositions generalising tree-width to directed graphs. These proposals have
partly been motivated by attempts to develop a structure theory for digraphs
similar to the graph minor theory developed by Robertson and Seymour for
undirected graphs, and partly by the immense number of algorithmic results
using tree-width of undirected graphs and the hope that part of this success
might be reproducible on digraphs using a directed tree-width. Unfortunately
the number of applications for the digraph measures introduced so far is still
small. We therefore explore the limits of the algorithmic applicability of
digraph decompositions. In particular, we show that various natural candidates
for problems that might benefit from digraphs having small directed tree-width
remain NP-complete even on almost acyclic graphs.
",sebastian ordyniak,,2008.0,,arXiv,Kreutzer2008,True,,arXiv,Not available,Digraph Decompositions and Monotonicity in Digraph Searching,34c7d9b8dbed273404b8ca711b6c0292,http://arxiv.org/abs/0802.2228v1
16338," Representation languages for coalitional games are a key research area in
algorithmic game theory. There is an inherent tradeoff between how general a
language is, allowing it to capture more elaborate games, and how hard it is
computationally to optimize and solve such games. One prominent such language
is the simple yet expressive Weighted Graph Games (WGGs) representation [14],
which maintains knowledge about synergies between agents in the form of an edge
weighted graph.
We consider the problem of finding the optimal coalition structure in WGGs.
The agents in such games are vertices in a graph, and the value of a coalition
is the sum of the weights of the edges present between coalition members. The
optimal coalition structure is a partition of the agents into coalitions that
maximizes the sum of utilities obtained by the coalitions. We show that finding
the optimal coalition structure is not only hard for general graphs, but is
also intractable for restricted families such as planar graphs which are
amenable to many other combinatorial problems. We then provide algorithms with
constant factor approximations for planar, minor-free and bounded degree
graphs.
",yoram bachrach,,2011.0,,arXiv,Bachrach2011,True,,arXiv,Not available,Optimal Coalition Structures in Cooperative Graph Games,e58351577fc6c36ca638df97126ea730,http://arxiv.org/abs/1108.5248v2
16339," Representation languages for coalitional games are a key research area in
algorithmic game theory. There is an inherent tradeoff between how general a
language is, allowing it to capture more elaborate games, and how hard it is
computationally to optimize and solve such games. One prominent such language
is the simple yet expressive Weighted Graph Games (WGGs) representation [14],
which maintains knowledge about synergies between agents in the form of an edge
weighted graph.
We consider the problem of finding the optimal coalition structure in WGGs.
The agents in such games are vertices in a graph, and the value of a coalition
is the sum of the weights of the edges present between coalition members. The
optimal coalition structure is a partition of the agents into coalitions that
maximizes the sum of utilities obtained by the coalitions. We show that finding
the optimal coalition structure is not only hard for general graphs, but is
also intractable for restricted families such as planar graphs which are
amenable to many other combinatorial problems. We then provide algorithms with
constant factor approximations for planar, minor-free and bounded degree
graphs.
",pushmeet kohli,,2011.0,,arXiv,Bachrach2011,True,,arXiv,Not available,Optimal Coalition Structures in Cooperative Graph Games,e58351577fc6c36ca638df97126ea730,http://arxiv.org/abs/1108.5248v2
16340," We study a game for recognising formal languages, in which two players with
imperfect information need to coordinate on a common decision, given private
input words correlated by a finite graph. The players have a joint objective to
avoid an inadmissible decision, in spite of the uncertainty induced by the
input.
We show that the acceptor model based on consensus games characterises
context-sensitive languages. Further, we describe the expressiveness of these
games in terms of iterated synchronous transductions and identify a subclass
that characterises context-free languages.
",dietmar berwanger,,2015.0,,arXiv,Berwanger2015,True,,arXiv,Not available,Consensus Game Acceptors and Iterated Transductions,fb66475077eaf68af0c9cd96dbe7d038,http://arxiv.org/abs/1501.07131v3
16341," We introduce a framework for studying the effect of cooperation on the
quality of outcomes in utility games. Our framework is a coalitional analog of
the smoothness framework of non-cooperative games. Coalitional smoothness
implies bounds on the strong price of anarchy, the loss of quality of
coalitionally stable outcomes, as well as bounds on coalitional versions of
coarse correlated equilibria and sink equilibria, which we define as
out-of-equilibrium myopic behavior as determined by a natural coalitional
version of best-response dynamics.
Our coalitional smoothness framework captures existing results bounding the
strong price of anarchy of network design games. We show that in any monotone
utility-maximization game, if each player's utility is at least his marginal
contribution to the welfare, then the strong price of anarchy is at most 2.
This captures a broad class of games, including games with a very high price of
anarchy. Additionally, we show that in potential games the strong price of
anarchy is close to the price of stability, the quality of the best Nash
equilibrium.
",eva tardos,,2013.0,,arXiv,Bachrach2013,True,,arXiv,Not available,Strong Price of Anarchy and Coalitional Dynamics,0af31fdf9e7e188adbdb7f97879422ab,http://arxiv.org/abs/1307.2537v1
16342," Representation languages for coalitional games are a key research area in
algorithmic game theory. There is an inherent tradeoff between how general a
language is, allowing it to capture more elaborate games, and how hard it is
computationally to optimize and solve such games. One prominent such language
is the simple yet expressive Weighted Graph Games (WGGs) representation [14],
which maintains knowledge about synergies between agents in the form of an edge
weighted graph.
We consider the problem of finding the optimal coalition structure in WGGs.
The agents in such games are vertices in a graph, and the value of a coalition
is the sum of the weights of the edges present between coalition members. The
optimal coalition structure is a partition of the agents into coalitions that
maximizes the sum of utilities obtained by the coalitions. We show that finding
the optimal coalition structure is not only hard for general graphs, but is
also intractable for restricted families such as planar graphs which are
amenable to many other combinatorial problems. We then provide algorithms with
constant factor approximations for planar, minor-free and bounded degree
graphs.
",vladimir kolmogorov,,2011.0,,arXiv,Bachrach2011,True,,arXiv,Not available,Optimal Coalition Structures in Cooperative Graph Games,e58351577fc6c36ca638df97126ea730,http://arxiv.org/abs/1108.5248v2
16343," Representation languages for coalitional games are a key research area in
algorithmic game theory. There is an inherent tradeoff between how general a
language is, allowing it to capture more elaborate games, and how hard it is
computationally to optimize and solve such games. One prominent such language
is the simple yet expressive Weighted Graph Games (WGGs) representation [14],
which maintains knowledge about synergies between agents in the form of an edge
weighted graph.
We consider the problem of finding the optimal coalition structure in WGGs.
The agents in such games are vertices in a graph, and the value of a coalition
is the sum of the weights of the edges present between coalition members. The
optimal coalition structure is a partition of the agents into coalitions that
maximizes the sum of utilities obtained by the coalitions. We show that finding
the optimal coalition structure is not only hard for general graphs, but is
also intractable for restricted families, such as planar graphs, which are
amenable to many other combinatorial problems. We then provide algorithms with
constant factor approximations for planar, minor-free and bounded degree
graphs.
",morteza zadimoghaddam,,2011.0,,arXiv,Bachrach2011,True,,arXiv,Not available,Optimal Coalition Structures in Cooperative Graph Games,e58351577fc6c36ca638df97126ea730,http://arxiv.org/abs/1108.5248v2
16344," Game theory's prescriptive power typically relies on full rationality and/or
self-play interactions. In contrast, this work sets aside these fundamental
premises and focuses instead on heterogeneous autonomous interactions between
two or more agents. Specifically, we introduce a new and concise representation
for repeated adversarial (constant-sum) games that highlights the necessary
features that enable an automated planning agent to reason about how to score
above the game's Nash equilibrium, when facing heterogeneous adversaries. To
this end, we present TeamUP, a model-based RL algorithm designed for learning
and planning such an abstraction. In essence, it is somewhat similar to R-max
with a cleverly engineered reward shaping that treats exploration as an
adversarial optimization problem. In practice, it attempts to find an ally with
which to tacitly collude (in more than two-player games) and then collaborates
on a joint plan of actions that can consistently score a high utility in
adversarial repeated games. We use the inaugural Lemonade Stand Game Tournament
to demonstrate the effectiveness of our approach, and find that TeamUP is the
best performing agent, demoting the Tournament's actual winning strategy into
second place. In our experimental analysis, we show that our strategy
successfully and consistently builds collaborations with many different
heterogeneous (and sometimes very sophisticated) adversaries.
",enrique cote,,2012.0,,arXiv,Cote2012,True,,arXiv,Not available,Automated Planning in Repeated Adversarial Games,5ec008101ce2a6052061d50dae3497eb,http://arxiv.org/abs/1203.3498v1
16345," Game theory's prescriptive power typically relies on full rationality and/or
self-play interactions. In contrast, this work sets aside these fundamental
premises and focuses instead on heterogeneous autonomous interactions between
two or more agents. Specifically, we introduce a new and concise representation
for repeated adversarial (constant-sum) games that highlights the necessary
features that enable an automated planning agent to reason about how to score
above the game's Nash equilibrium, when facing heterogeneous adversaries. To
this end, we present TeamUP, a model-based RL algorithm designed for learning
and planning such an abstraction. In essence, it is somewhat similar to R-max
with a cleverly engineered reward shaping that treats exploration as an
adversarial optimization problem. In practice, it attempts to find an ally with
which to tacitly collude (in more than two-player games) and then collaborates
on a joint plan of actions that can consistently score a high utility in
adversarial repeated games. We use the inaugural Lemonade Stand Game Tournament
to demonstrate the effectiveness of our approach, and find that TeamUP is the
best performing agent, demoting the Tournament's actual winning strategy into
second place. In our experimental analysis, we show that our strategy
successfully and consistently builds collaborations with many different
heterogeneous (and sometimes very sophisticated) adversaries.
",archie chapman,,2012.0,,arXiv,Cote2012,True,,arXiv,Not available,Automated Planning in Repeated Adversarial Games,5ec008101ce2a6052061d50dae3497eb,http://arxiv.org/abs/1203.3498v1
16346," Game theory's prescriptive power typically relies on full rationality and/or
self-play interactions. In contrast, this work sets aside these fundamental
premises and focuses instead on heterogeneous autonomous interactions between
two or more agents. Specifically, we introduce a new and concise representation
for repeated adversarial (constant-sum) games that highlights the necessary
features that enable an automated planning agent to reason about how to score
above the game's Nash equilibrium, when facing heterogeneous adversaries. To
this end, we present TeamUP, a model-based RL algorithm designed for learning
and planning such an abstraction. In essence, it is somewhat similar to R-max
with a cleverly engineered reward shaping that treats exploration as an
adversarial optimization problem. In practice, it attempts to find an ally with
which to tacitly collude (in more than two-player games) and then collaborates
on a joint plan of actions that can consistently score a high utility in
adversarial repeated games. We use the inaugural Lemonade Stand Game Tournament
to demonstrate the effectiveness of our approach, and find that TeamUP is the
best performing agent, demoting the Tournament's actual winning strategy into
second place. In our experimental analysis, we show that our strategy
successfully and consistently builds collaborations with many different
heterogeneous (and sometimes very sophisticated) adversaries.
",adam sykulski,,2012.0,,arXiv,Cote2012,True,,arXiv,Not available,Automated Planning in Repeated Adversarial Games,5ec008101ce2a6052061d50dae3497eb,http://arxiv.org/abs/1203.3498v1
16347," Game theory's prescriptive power typically relies on full rationality and/or
self-play interactions. In contrast, this work sets aside these fundamental
premises and focuses instead on heterogeneous autonomous interactions between
two or more agents. Specifically, we introduce a new and concise representation
for repeated adversarial (constant-sum) games that highlights the necessary
features that enable an automated planning agent to reason about how to score
above the game's Nash equilibrium, when facing heterogeneous adversaries. To
this end, we present TeamUP, a model-based RL algorithm designed for learning
and planning such an abstraction. In essence, it is somewhat similar to R-max
with a cleverly engineered reward shaping that treats exploration as an
adversarial optimization problem. In practice, it attempts to find an ally with
which to tacitly collude (in more than two-player games) and then collaborates
on a joint plan of actions that can consistently score a high utility in
adversarial repeated games. We use the inaugural Lemonade Stand Game Tournament
to demonstrate the effectiveness of our approach, and find that TeamUP is the
best performing agent, demoting the Tournament's actual winning strategy into
second place. In our experimental analysis, we show that our strategy
successfully and consistently builds collaborations with many different
heterogeneous (and sometimes very sophisticated) adversaries.
",nicholas jennings,,2012.0,,arXiv,Cote2012,True,,arXiv,Not available,Automated Planning in Repeated Adversarial Games,5ec008101ce2a6052061d50dae3497eb,http://arxiv.org/abs/1203.3498v1
16348," Parity games are an expressive framework to consider realizability questions
for omega-regular languages. However, it is open whether they can be solved in
polynomial time, making them unamenable for practical usage. To overcome this
restriction, we consider 3-color parity games, which can be solved in
polynomial time. They still cover an expressive fragment of specifications, as
they include the classical B\""uchi and co-B\""uchi winning conditions as well as
their union and intersection. This already suffices to express many useful
combinations of safety and liveness properties, as for example the family of
GR(1). The best known algorithm for 3-color parity games solves a game with n
vertices in $ O(n^{2}\sqrt{n}) $ time. We improve on this result by presenting
a new algorithm, based on simple attractor constructions, which only needs time
$ O(n^2) $. As a result, we match the best known running times for solving
(co)-B\""uchi games, showing that 3-color parity games are not harder to solve
in general.
",felix klein,,2014.0,,arXiv,Klein2014,True,,arXiv,Not available,Solving 3-Color Parity Games in $ O(n^2) $ Time,d73faf59fa50ffa5c445dd165474212f,http://arxiv.org/abs/1412.5159v2
16349," Two fundamental problems in computational game theory are computing a Nash
equilibrium and learning to exploit opponents given observations of their play
(opponent exploitation). The latter is perhaps even more important than the
former: Nash equilibrium does not have a compelling theoretical justification
in game classes other than two-player zero-sum, and for all games one can
potentially do better by exploiting perceived weaknesses of the opponent than
by following a static equilibrium strategy throughout the match. The natural
setting for opponent exploitation is the Bayesian setting where we have a prior
model that is integrated with observations to create a posterior opponent model
that we respond to. The most natural and well-studied prior distribution is
the Dirichlet distribution. An exact polynomial-time algorithm is known for
best-responding to the posterior distribution for an opponent assuming a
Dirichlet prior with multinomial sampling in normal-form games; however, for
imperfect-information games the best known algorithm is based on approximating
an infinite integral without theoretical guarantees. We present the first exact
algorithm for a natural class of imperfect-information games. We demonstrate
that our algorithm runs quickly in practice and outperforms the best prior
approaches. We also present an algorithm for the uniform prior setting.
",sam ganzfried,,2016.0,,arXiv,Ganzfried2016,True,,arXiv,Not available,Bayesian Opponent Exploitation in Imperfect-Information Games,387fd57284473cfe0b7b5b4a82553584,http://arxiv.org/abs/1603.03491v6
16350," Two fundamental problems in computational game theory are computing a Nash
equilibrium and learning to exploit opponents given observations of their play
(opponent exploitation). The latter is perhaps even more important than the
former: Nash equilibrium does not have a compelling theoretical justification
in game classes other than two-player zero-sum, and for all games one can
potentially do better by exploiting perceived weaknesses of the opponent than
by following a static equilibrium strategy throughout the match. The natural
setting for opponent exploitation is the Bayesian setting where we have a prior
model that is integrated with observations to create a posterior opponent model
that we respond to. The most natural and well-studied prior distribution is
the Dirichlet distribution. An exact polynomial-time algorithm is known for
best-responding to the posterior distribution for an opponent assuming a
Dirichlet prior with multinomial sampling in normal-form games; however, for
imperfect-information games the best known algorithm is based on approximating
an infinite integral without theoretical guarantees. We present the first exact
algorithm for a natural class of imperfect-information games. We demonstrate
that our algorithm runs quickly in practice and outperforms the best prior
approaches. We also present an algorithm for the uniform prior setting.
",qingyun sun,,2016.0,,arXiv,Ganzfried2016,True,,arXiv,Not available,Bayesian Opponent Exploitation in Imperfect-Information Games,387fd57284473cfe0b7b5b4a82553584,http://arxiv.org/abs/1603.03491v6
16351," We consider eager-push epidemic dissemination in a complete graph. Time is
divided into synchronous stages. In each stage, a source disseminates $\nu$
events. Each event is sent by the source, and forwarded by each node upon its
first reception, to $f$ nodes selected uniformly at random, where $f$ is the
fanout. We use Game Theory to study the range of $f$ for which equilibria
strategies exist, assuming that players are either rational or obedient to the
protocol, and that they do not collude. We model interactions as an infinitely
repeated game. We devise a monitoring mechanism that extends the repeated game
with communication rounds used for exchanging monitoring information, and
define strategies for this extended game. We assume the existence of a trusted
mediator, that players are computationally bounded such that they cannot break
the cryptographic primitives used in our mechanism, and that symmetric
ciphering is cheap. Under these assumptions, we show that, if the size of the
stream is sufficiently large and players attribute enough value to future
utilities, then the defined strategies are Sequential Equilibria of the
extended game for any value of $f$. Moreover, the utility provided to each
player is arbitrarily close to that provided in the original game. This shows
that we can persuade rational nodes to follow a dissemination protocol that
uses any fanout, while arbitrarily minimising the relative overhead of
monitoring.
",xavier vilaca,,2014.0,,arXiv,Vilaca2014,True,,arXiv,Not available,"On the Range of Equilibria Utilities of a Repeated Epidemic
Dissemination Game with a Mediator",ae45198f9980d886cc875bddffe2a26d,http://arxiv.org/abs/1407.6295v3
16352," We introduce a framework for studying the effect of cooperation on the
quality of outcomes in utility games. Our framework is a coalitional analog of
the smoothness framework of non-cooperative games. Coalitional smoothness
implies bounds on the strong price of anarchy, the loss of quality of
coalitionally stable outcomes, as well as bounds on coalitional versions of
coarse correlated equilibria and sink equilibria, which we define as
out-of-equilibrium myopic behavior as determined by a natural coalitional
version of best-response dynamics.
Our coalitional smoothness framework captures existing results bounding the
strong price of anarchy of network design games. We show that in any monotone
utility-maximization game, if each player's utility is at least his marginal
contribution to the welfare, then the strong price of anarchy is at most 2.
This captures a broad class of games, including games with a very high price of
anarchy. Additionally, we show that in potential games the strong price of
anarchy is close to the price of stability, the quality of the best Nash
equilibrium.
",milan vojnovic,,2013.0,,arXiv,Bachrach2013,True,,arXiv,Not available,Strong Price of Anarchy and Coalitional Dynamics,0af31fdf9e7e188adbdb7f97879422ab,http://arxiv.org/abs/1307.2537v1
16353," We consider eager-push epidemic dissemination in a complete graph. Time is
divided into synchronous stages. In each stage, a source disseminates $\nu$
events. Each event is sent by the source, and forwarded by each node upon its
first reception, to $f$ nodes selected uniformly at random, where $f$ is the
fanout. We use Game Theory to study the range of $f$ for which equilibria
strategies exist, assuming that players are either rational or obedient to the
protocol, and that they do not collude. We model interactions as an infinitely
repeated game. We devise a monitoring mechanism that extends the repeated game
with communication rounds used for exchanging monitoring information, and
define strategies for this extended game. We assume the existence of a trusted
mediator, that players are computationally bounded such that they cannot break
the cryptographic primitives used in our mechanism, and that symmetric
ciphering is cheap. Under these assumptions, we show that, if the size of the
stream is sufficiently large and players attribute enough value to future
utilities, then the defined strategies are Sequential Equilibria of the
extended game for any value of $f$. Moreover, the utility provided to each
player is arbitrarily close to that provided in the original game. This shows
that we can persuade rational nodes to follow a dissemination protocol that
uses any fanout, while arbitrarily minimising the relative overhead of
monitoring.
",luis rodrigues,,2014.0,,arXiv,Vilaca2014,True,,arXiv,Not available,"On the Range of Equilibria Utilities of a Repeated Epidemic
Dissemination Game with a Mediator",ae45198f9980d886cc875bddffe2a26d,http://arxiv.org/abs/1407.6295v3
16354," Access to the cloud has the potential to provide scalable and cost effective
enhancements of physical devices through the use of advanced computational
processes run on apparently limitless cyber infrastructure. On the other hand,
cyber-physical systems and cloud-controlled devices are subject to numerous
design challenges; among them is that of security. In particular, recent
advances in adversary technology pose Advanced Persistent Threats (APTs) which
may stealthily and completely compromise a cyber system. In this paper, we
design a framework for the security of cloud-based systems that specifies when
a device should trust commands from the cloud which may be compromised. This
interaction can be considered as a game between three players: a cloud
defender/administrator, an attacker, and a device. We use traditional signaling
games to model the interaction between the cloud and the device, and we use the
recently proposed FlipIt game to model the struggle between the defender and
attacker for control of the cloud. Because attacks upon the cloud can occur
without knowledge of the defender, we assume that strategies in both games are
picked according to prior commitment. This framework requires a new equilibrium
concept, which we call Gestalt Equilibrium, a fixed-point that expresses the
interdependence of the signaling and FlipIt games. We present the solution to
this fixed-point problem under certain parameter cases, and illustrate an
example application of cloud control of an unmanned vehicle. Our results
contribute to the growing understanding of cloud-controlled systems.
",jeffrey pawlick,,2015.0,10.13140/RG.2.1.3128.9446,arXiv,Pawlick2015,True,,arXiv,Not available,"Flip the Cloud: Cyber-Physical Signaling Games in the Presence of
Advanced Persistent Threats",79451471eb556da5bc5d15fa8abddff4,http://arxiv.org/abs/1507.00576v2
16355," Access to the cloud has the potential to provide scalable and cost effective
enhancements of physical devices through the use of advanced computational
processes run on apparently limitless cyber infrastructure. On the other hand,
cyber-physical systems and cloud-controlled devices are subject to numerous
design challenges; among them is that of security. In particular, recent
advances in adversary technology pose Advanced Persistent Threats (APTs) which
may stealthily and completely compromise a cyber system. In this paper, we
design a framework for the security of cloud-based systems that specifies when
a device should trust commands from the cloud which may be compromised. This
interaction can be considered as a game between three players: a cloud
defender/administrator, an attacker, and a device. We use traditional signaling
games to model the interaction between the cloud and the device, and we use the
recently proposed FlipIt game to model the struggle between the defender and
attacker for control of the cloud. Because attacks upon the cloud can occur
without knowledge of the defender, we assume that strategies in both games are
picked according to prior commitment. This framework requires a new equilibrium
concept, which we call Gestalt Equilibrium, a fixed-point that expresses the
interdependence of the signaling and FlipIt games. We present the solution to
this fixed-point problem under certain parameter cases, and illustrate an
example application of cloud control of an unmanned vehicle. Our results
contribute to the growing understanding of cloud-controlled systems.
",sadegh farhang,,2015.0,10.13140/RG.2.1.3128.9446,arXiv,Pawlick2015,True,,arXiv,Not available,"Flip the Cloud: Cyber-Physical Signaling Games in the Presence of
Advanced Persistent Threats",79451471eb556da5bc5d15fa8abddff4,http://arxiv.org/abs/1507.00576v2
16356," Access to the cloud has the potential to provide scalable and cost effective
enhancements of physical devices through the use of advanced computational
processes run on apparently limitless cyber infrastructure. On the other hand,
cyber-physical systems and cloud-controlled devices are subject to numerous
design challenges; among them is that of security. In particular, recent
advances in adversary technology pose Advanced Persistent Threats (APTs) which
may stealthily and completely compromise a cyber system. In this paper, we
design a framework for the security of cloud-based systems that specifies when
a device should trust commands from the cloud which may be compromised. This
interaction can be considered as a game between three players: a cloud
defender/administrator, an attacker, and a device. We use traditional signaling
games to model the interaction between the cloud and the device, and we use the
recently proposed FlipIt game to model the struggle between the defender and
attacker for control of the cloud. Because attacks upon the cloud can occur
without knowledge of the defender, we assume that strategies in both games are
picked according to prior commitment. This framework requires a new equilibrium
concept, which we call Gestalt Equilibrium, a fixed-point that expresses the
interdependence of the signaling and FlipIt games. We present the solution to
this fixed-point problem under certain parameter cases, and illustrate an
example application of cloud control of an unmanned vehicle. Our results
contribute to the growing understanding of cloud-controlled systems.
",quanyan zhu,,2015.0,10.13140/RG.2.1.3128.9446,arXiv,Pawlick2015,True,,arXiv,Not available,"Flip the Cloud: Cyber-Physical Signaling Games in the Presence of
Advanced Persistent Threats",79451471eb556da5bc5d15fa8abddff4,http://arxiv.org/abs/1507.00576v2
16357," This paper studies the complexity of solving two classes of non-cooperative
games in a distributed manner in which the players communicate with a set of
system nodes over noisy communication channels. The complexity of solving each
game class is defined as the minimum number of iterations required to find a
Nash equilibrium (NE) of any game in that class with $\epsilon$ accuracy.
First, we consider the class $\mathcal{G}$ of all $N$-player non-cooperative
games with a continuous action space that admit at least one NE. Using
information-theoretic inequalities, we derive a lower bound on the complexity
of solving $\mathcal{G}$ that depends on the Kolmogorov $2\epsilon$-capacity of
the constraint set and the total capacity of the communication channels. We
also derive a lower bound on the complexity of solving games in $\mathcal{G}$
which depends on the volume and surface area of the constraint set. We next
consider the class of all $N$-player non-cooperative games with at least one NE
such that the players' utility functions satisfy a certain (differential)
constraint. We derive lower bounds on the complexity of solving this game class
under both Gaussian and non-Gaussian noise models. Our result in the
non-Gaussian case is derived by establishing a connection between the
Kullback-Leibler distance and Fisher information.
",ehsan nekouei,,2017.0,,arXiv,Nekouei2017,True,,arXiv,Not available,"Lower Bounds on the Complexity of Solving Two Classes of Non-cooperative
Games",01ab54afcdb58fee3cd3049477409b4b,http://arxiv.org/abs/1701.06717v1
16358," This paper studies the complexity of solving two classes of non-cooperative
games in a distributed manner in which the players communicate with a set of
system nodes over noisy communication channels. The complexity of solving each
game class is defined as the minimum number of iterations required to find a
Nash equilibrium (NE) of any game in that class with $\epsilon$ accuracy.
First, we consider the class $\mathcal{G}$ of all $N$-player non-cooperative
games with a continuous action space that admit at least one NE. Using
information-theoretic inequalities, we derive a lower bound on the complexity
of solving $\mathcal{G}$ that depends on the Kolmogorov $2\epsilon$-capacity of
the constraint set and the total capacity of the communication channels. We
also derive a lower bound on the complexity of solving games in $\mathcal{G}$
which depends on the volume and surface area of the constraint set. We next
consider the class of all $N$-player non-cooperative games with at least one NE
such that the players' utility functions satisfy a certain (differential)
constraint. We derive lower bounds on the complexity of solving this game class
under both Gaussian and non-Gaussian noise models. Our result in the
non-Gaussian case is derived by establishing a connection between the
Kullback-Leibler distance and Fisher information.
",girish nair,,2017.0,,arXiv,Nekouei2017,True,,arXiv,Not available,"Lower Bounds on the Complexity of Solving Two Classes of Non-cooperative
Games",01ab54afcdb58fee3cd3049477409b4b,http://arxiv.org/abs/1701.06717v1
16359," This paper studies the complexity of solving two classes of non-cooperative
games in a distributed manner in which the players communicate with a set of
system nodes over noisy communication channels. The complexity of solving each
game class is defined as the minimum number of iterations required to find a
Nash equilibrium (NE) of any game in that class with $\epsilon$ accuracy.
First, we consider the class $\mathcal{G}$ of all $N$-player non-cooperative
games with a continuous action space that admit at least one NE. Using
information-theoretic inequalities, we derive a lower bound on the complexity
of solving $\mathcal{G}$ that depends on the Kolmogorov $2\epsilon$-capacity of
the constraint set and the total capacity of the communication channels. We
also derive a lower bound on the complexity of solving games in $\mathcal{G}$
which depends on the volume and surface area of the constraint set. We next
consider the class of all $N$-player non-cooperative games with at least one NE
such that the players' utility functions satisfy a certain (differential)
constraint. We derive lower bounds on the complexity of solving this game class
under both Gaussian and non-Gaussian noise models. Our result in the
non-Gaussian case is derived by establishing a connection between the
Kullback-Leibler distance and Fisher information.
",tansu alpcan,,2017.0,,arXiv,Nekouei2017,True,,arXiv,Not available,"Lower Bounds on the Complexity of Solving Two Classes of Non-cooperative
Games",01ab54afcdb58fee3cd3049477409b4b,http://arxiv.org/abs/1701.06717v1
16360," This paper studies the complexity of solving two classes of non-cooperative
games in a distributed manner in which the players communicate with a set of
system nodes over noisy communication channels. The complexity of solving each
game class is defined as the minimum number of iterations required to find a
Nash equilibrium (NE) of any game in that class with $\epsilon$ accuracy.
First, we consider the class $\mathcal{G}$ of all $N$-player non-cooperative
games with a continuous action space that admit at least one NE. Using
information-theoretic inequalities, we derive a lower bound on the complexity
of solving $\mathcal{G}$ that depends on the Kolmogorov $2\epsilon$-capacity of
the constraint set and the total capacity of the communication channels. We
also derive a lower bound on the complexity of solving games in $\mathcal{G}$
which depends on the volume and surface area of the constraint set. We next
consider the class of all $N$-player non-cooperative games with at least one NE
such that the players' utility functions satisfy a certain (differential)
constraint. We derive lower bounds on the complexity of solving this game class
under both Gaussian and non-Gaussian noise models. Our result in the
non-Gaussian case is derived by establishing a connection between the
Kullback-Leibler distance and Fisher information.
",robin evans,,2017.0,,arXiv,Nekouei2017,True,,arXiv,Not available,"Lower Bounds on the Complexity of Solving Two Classes of Non-cooperative
Games",01ab54afcdb58fee3cd3049477409b4b,http://arxiv.org/abs/1701.06717v1
16361," Parity games are games that are played on directed graphs whose vertices are
labeled by natural numbers, called priorities. The players push a token along
the edges of the digraph. The winner is determined by the parity of the
greatest priority occurring infinitely often in this infinite play.
A motivation for studying parity games comes from the area of formal
verification of systems by model checking. Deciding the winner in a parity game
is polynomial time equivalent to the model checking problem of the modal
mu-calculus. Another strong motivation lies in the fact that the exact
complexity of solving parity games is a long-standing open problem, the
currently best known algorithm being subexponential. It is known that the
problem is in the complexity classes UP and coUP.
In this paper we identify restricted classes of digraphs where the problem is
solvable in polynomial time, following an approach from structural graph
theory. We consider three standard graph operations: the join of two graphs,
repeated pasting along vertices, and the addition of a vertex. Given a class C
of digraphs on which we can solve parity games in polynomial time, we show that
the same holds for the class obtained from C by applying once any of these
three operations to its elements.
These results provide, in particular, polynomial time algorithms for parity
games whose underlying graph is an orientation of a complete graph, a complete
bipartite graph, a block graph, or a block-cactus graph. These are classes
where the problem was not known to be efficiently solvable.
Previous results concerning restricted classes of parity games which are
solvable in polynomial time include classes of bounded tree-width, bounded
DAG-width, and bounded clique-width.
We also prove that recognising the winning regions of a parity game is not
easier than computing them from scratch.
",christoph dittmann,,2012.0,,arXiv,Dittmann2012,True,,arXiv,Not available,Graph Operations on Parity Games and Polynomial-Time Algorithms,4e05dfef3920b8778282155f6a631823,http://arxiv.org/abs/1208.1640v1
16362," Parity games are games that are played on directed graphs whose vertices are
labeled by natural numbers, called priorities. The players push a token along
the edges of the digraph. The winner is determined by the parity of the
greatest priority occurring infinitely often in this infinite play.
A motivation for studying parity games comes from the area of formal
verification of systems by model checking. Deciding the winner in a parity game
is polynomial time equivalent to the model checking problem of the modal
mu-calculus. Another strong motivation lies in the fact that the exact
complexity of solving parity games is a long-standing open problem, the
currently best known algorithm being subexponential. It is known that the
problem is in the complexity classes UP and coUP.
In this paper we identify restricted classes of digraphs where the problem is
solvable in polynomial time, following an approach from structural graph
theory. We consider three standard graph operations: the join of two graphs,
repeated pasting along vertices, and the addition of a vertex. Given a class C
of digraphs on which we can solve parity games in polynomial time, we show that
the same holds for the class obtained from C by applying once any of these
three operations to its elements.
These results provide, in particular, polynomial time algorithms for parity
games whose underlying graph is an orientation of a complete graph, a complete
bipartite graph, a block graph, or a block-cactus graph. These are classes
where the problem was not known to be efficiently solvable.
Previous results concerning restricted classes of parity games which are
solvable in polynomial time include classes of bounded tree-width, bounded
DAG-width, and bounded clique-width.
We also prove that recognising the winning regions of a parity game is not
easier than computing them from scratch.
",stephan kreutzer,,2012.0,,arXiv,Dittmann2012,True,,arXiv,Not available,Graph Operations on Parity Games and Polynomial-Time Algorithms,4e05dfef3920b8778282155f6a631823,http://arxiv.org/abs/1208.1640v1
16363," Boolean games are an expressive and natural formalism through which to
investigate problems of strategic interaction in multiagent systems. Although
they have been widely studied, almost all previous work on Nash equilibria in
Boolean games has focused on the restricted setting of pure strategies. This is
a shortcoming as finite games are guaranteed to have at least one equilibrium
in mixed strategies, but many simple games fail to have pure strategy
equilibria at all. We address this by showing that a natural decision problem
about mixed equilibria, namely determining whether a Boolean game has a mixed
strategy equilibrium that guarantees every player a given payoff, is NEXP-hard.
Accordingly, the $\epsilon$ variety of the problem is NEXP-complete. The proof
can be adapted to show coNEXP-hardness of a similar question: whether all Nash
equilibria of a Boolean game guarantee every player at least the given payoff.
",egor ianovski,,2013.0,,arXiv,Ianovski2013,True,,arXiv,Not available,EGuaranteeNash for Boolean Games is NEXP-Hard,272e38fb774a62e2e3d5de3b3ab51114,http://arxiv.org/abs/1312.4114v1
16364," Parity games are games that are played on directed graphs whose vertices are
labeled by natural numbers, called priorities. The players push a token along
the edges of the digraph. The winner is determined by the parity of the
greatest priority occurring infinitely often in this infinite play.
A motivation for studying parity games comes from the area of formal
verification of systems by model checking. Deciding the winner in a parity game
is polynomial time equivalent to the model checking problem of the modal
mu-calculus. Another strong motivation lies in the fact that the exact
complexity of solving parity games is a long-standing open problem, the
currently best known algorithm being subexponential. It is known that the
problem is in the complexity classes UP and coUP.
In this paper we identify restricted classes of digraphs where the problem is
solvable in polynomial time, following an approach from structural graph
theory. We consider three standard graph operations: the join of two graphs,
repeated pasting along vertices, and the addition of a vertex. Given a class C
of digraphs on which we can solve parity games in polynomial time, we show that
the same holds for the class obtained from C by applying once any of these
three operations to its elements.
These results provide, in particular, polynomial time algorithms for parity
games whose underlying graph is an orientation of a complete graph, a complete
bipartite graph, a block graph, or a block-cactus graph. These are classes
where the problem was not known to be efficiently solvable.
Previous results concerning restricted classes of parity games which are
solvable in polynomial time include classes of bounded tree-width, bounded
DAG-width, and bounded clique-width.
We also prove that recognising the winning regions of a parity game is not
easier than computing them from scratch.
",alexandru tomescu,,2012.0,,arXiv,Dittmann2012,True,,arXiv,Not available,Graph Operations on Parity Games and Polynomial-Time Algorithms,4e05dfef3920b8778282155f6a631823,http://arxiv.org/abs/1208.1640v1
16365," Good economic mechanisms depend on the preferences of participants in the
mechanism. For example, the revenue-optimal auction for selling an item is
parameterized by a reserve price, and the appropriate reserve price depends on
how much the bidders are willing to pay. A mechanism designer can potentially
learn about the participants' preferences by observing historical data from the
mechanism; the designer could then update the mechanism in response to learned
preferences to improve its performance. The challenge of such an approach is
that the data corresponds to the actions of the participants and not their
preferences. Preferences can potentially be inferred from actions but the
degree of inference possible depends on the mechanism. In the optimal auction
example, it is impossible to learn anything about preferences of bidders who
are not willing to pay the reserve price. These bidders will not cast bids in
the auction and, from historical bid data, the auctioneer could never learn
that lowering the reserve price would give a higher revenue (even if it would).
To address this impossibility, the auctioneer could sacrifice revenue
optimality in the initial auction to obtain better inference properties so that
the auction's parameters can be adapted to changing preferences in the future.
This paper develops the theory for optimal mechanism design subject to good
inferability.
",shuchi chawla,,2014.0,,arXiv,Chawla2014,True,,arXiv,Not available,Mechanism Design for Data Science,5b24227fa781f60b651c7539c70d0211,http://arxiv.org/abs/1404.5971v2
16366," Good economic mechanisms depend on the preferences of participants in the
mechanism. For example, the revenue-optimal auction for selling an item is
parameterized by a reserve price, and the appropriate reserve price depends on
how much the bidders are willing to pay. A mechanism designer can potentially
learn about the participants' preferences by observing historical data from the
mechanism; the designer could then update the mechanism in response to learned
preferences to improve its performance. The challenge of such an approach is
that the data corresponds to the actions of the participants and not their
preferences. Preferences can potentially be inferred from actions but the
degree of inference possible depends on the mechanism. In the optimal auction
example, it is impossible to learn anything about preferences of bidders who
are not willing to pay the reserve price. These bidders will not cast bids in
the auction and, from historical bid data, the auctioneer could never learn
that lowering the reserve price would give a higher revenue (even if it would).
To address this impossibility, the auctioneer could sacrifice revenue
optimality in the initial auction to obtain better inference properties so that
the auction's parameters can be adapted to changing preferences in the future.
This paper develops the theory for optimal mechanism design subject to good
inferability.
",jason hartline,,2014.0,,arXiv,Chawla2014,True,,arXiv,Not available,Mechanism Design for Data Science,5b24227fa781f60b651c7539c70d0211,http://arxiv.org/abs/1404.5971v2
16367," Good economic mechanisms depend on the preferences of participants in the
mechanism. For example, the revenue-optimal auction for selling an item is
parameterized by a reserve price, and the appropriate reserve price depends on
how much the bidders are willing to pay. A mechanism designer can potentially
learn about the participants' preferences by observing historical data from the
mechanism; the designer could then update the mechanism in response to learned
preferences to improve its performance. The challenge of such an approach is
that the data corresponds to the actions of the participants and not their
preferences. Preferences can potentially be inferred from actions but the
degree of inference possible depends on the mechanism. In the optimal auction
example, it is impossible to learn anything about preferences of bidders who
are not willing to pay the reserve price. These bidders will not cast bids in
the auction and, from historical bid data, the auctioneer could never learn
that lowering the reserve price would give a higher revenue (even if it would).
To address this impossibility, the auctioneer could sacrifice revenue
optimality in the initial auction to obtain better inference properties so that
the auction's parameters can be adapted to changing preferences in the future.
This paper develops the theory for optimal mechanism design subject to good
inferability.
",denis nekipelov,,2014.0,,arXiv,Chawla2014,True,,arXiv,Not available,Mechanism Design for Data Science,5b24227fa781f60b651c7539c70d0211,http://arxiv.org/abs/1404.5971v2
16368," One of the major drawbacks of the celebrated VCG auction is its low (or zero)
revenue even when the agents have high value for the goods and a {\em
competitive} outcome could have generated a significant revenue. A competitive
outcome is one for which it is impossible for the seller and a subset of buyers
to `block' the auction by defecting and negotiating an outcome with higher
payoffs for themselves. This corresponds to the well-known concept of {\em
core} in cooperative game theory.
In particular, VCG revenue is known to be not competitive when the goods
being sold have complementarities. A bottleneck here is an impossibility result
showing that there is no auction that simultaneously achieves competitive
prices (a core outcome) and incentive-compatibility.
In this paper we try to overcome the above impossibility result by asking the
following natural question: is it possible to design an incentive-compatible
auction whose revenue is comparable (even if less) to a competitive outcome?
Towards this, we define a notion of {\em core-competitive} auctions. We say
that an incentive-compatible auction is $\alpha$-core-competitive if its
revenue is at least a $1/\alpha$ fraction of the minimum revenue of a
core-outcome. We study the Text-and-Image setting. In this setting, there is an
ad slot which can be filled with either a single image ad or $k$ text ads. We
design an $O(\ln \ln k)$ core-competitive randomized auction and an
$O(\sqrt{\ln(k)})$ competitive deterministic auction for the Text-and-Image
setting. We also show that both factors are tight.
",gagan goel,,2015.0,,arXiv,Goel2015,True,,arXiv,Not available,Core-competitive Auctions,5fb277b82485f3d5fcd69f8565ac0626,http://arxiv.org/abs/1505.07911v2
16369," One of the major drawbacks of the celebrated VCG auction is its low (or zero)
revenue even when the agents have high value for the goods and a {\em
competitive} outcome could have generated a significant revenue. A competitive
outcome is one for which it is impossible for the seller and a subset of buyers
to `block' the auction by defecting and negotiating an outcome with higher
payoffs for themselves. This corresponds to the well-known concept of {\em
core} in cooperative game theory.
In particular, VCG revenue is known to be not competitive when the goods
being sold have complementarities. A bottleneck here is an impossibility result
showing that there is no auction that simultaneously achieves competitive
prices (a core outcome) and incentive-compatibility.
In this paper we try to overcome the above impossibility result by asking the
following natural question: is it possible to design an incentive-compatible
auction whose revenue is comparable (even if less) to a competitive outcome?
Towards this, we define a notion of {\em core-competitive} auctions. We say
that an incentive-compatible auction is $\alpha$-core-competitive if its
revenue is at least a $1/\alpha$ fraction of the minimum revenue of a
core-outcome. We study the Text-and-Image setting. In this setting, there is an
ad slot which can be filled with either a single image ad or $k$ text ads. We
design an $O(\ln \ln k)$ core-competitive randomized auction and an
$O(\sqrt{\ln(k)})$ competitive deterministic auction for the Text-and-Image
setting. We also show that both factors are tight.
",mohammad khani,,2015.0,,arXiv,Goel2015,True,,arXiv,Not available,Core-competitive Auctions,5fb277b82485f3d5fcd69f8565ac0626,http://arxiv.org/abs/1505.07911v2
16370," One of the major drawbacks of the celebrated VCG auction is its low (or zero)
revenue even when the agents have high value for the goods and a {\em
competitive} outcome could have generated a significant revenue. A competitive
outcome is one for which it is impossible for the seller and a subset of buyers
to `block' the auction by defecting and negotiating an outcome with higher
payoffs for themselves. This corresponds to the well-known concept of {\em
core} in cooperative game theory.
In particular, VCG revenue is known to be not competitive when the goods
being sold have complementarities. A bottleneck here is an impossibility result
showing that there is no auction that simultaneously achieves competitive
prices (a core outcome) and incentive-compatibility.
In this paper we try to overcome the above impossibility result by asking the
following natural question: is it possible to design an incentive-compatible
auction whose revenue is comparable (even if less) to a competitive outcome?
Towards this, we define a notion of {\em core-competitive} auctions. We say
that an incentive-compatible auction is $\alpha$-core-competitive if its
revenue is at least a $1/\alpha$ fraction of the minimum revenue of a
core-outcome. We study the Text-and-Image setting. In this setting, there is an
ad slot which can be filled with either a single image ad or $k$ text ads. We
design an $O(\ln \ln k)$ core-competitive randomized auction and an
$O(\sqrt{\ln(k)})$ competitive deterministic auction for the Text-and-Image
setting. We also show that both factors are tight.
",renato leme,,2015.0,,arXiv,Goel2015,True,,arXiv,Not available,Core-competitive Auctions,5fb277b82485f3d5fcd69f8565ac0626,http://arxiv.org/abs/1505.07911v2
16371," Inspired by the recent developments in the field of Spectrum Auctions, we
have tried to provide a comprehensive framework for the complete procedure of
Spectrum Licensing. We have identified the various issues the Governments need
to decide upon while designing the licensing procedure and what are the various
options available in each issue. We also provide an in-depth study of how each
of these options impacts the overall procedure, along with theoretical and
practical results from the past. Lastly, we argue how we can combine the
positives of the two most widely used Spectrum Auction mechanisms into the
Hybrid Multiple Round Auction mechanism proposed by us.
",devansh dikshit,,2009.0,,arXiv,Dikshit2009,True,,arXiv,Not available,"On Framework and Hybrid Auction Approach to the Spectrum Licensing
Procedure",b9e97ebb2ab9cecafad03ebbcf641387,http://arxiv.org/abs/0902.3104v1
16372," Inspired by the recent developments in the field of Spectrum Auctions, we
have tried to provide a comprehensive framework for the complete procedure of
Spectrum Licensing. We have identified the various issues the Governments need
to decide upon while designing the licensing procedure and what are the various
options available in each issue. We also provide an in-depth study of how each
of these options impacts the overall procedure, along with theoretical and
practical results from the past. Lastly, we argue how we can combine the
positives of the two most widely used Spectrum Auction mechanisms into the
Hybrid Multiple Round Auction mechanism proposed by us.
",y. narahari,,2009.0,,arXiv,Dikshit2009,True,,arXiv,Not available,"On Framework and Hybrid Auction Approach to the Spectrum Licensing
Procedure",b9e97ebb2ab9cecafad03ebbcf641387,http://arxiv.org/abs/0902.3104v1
16374," Boolean games are an expressive and natural formalism through which to
investigate problems of strategic interaction in multiagent systems. Although
they have been widely studied, almost all previous work on Nash equilibria in
Boolean games has focused on the restricted setting of pure strategies. This is
a shortcoming as finite games are guaranteed to have at least one equilibrium
in mixed strategies, but many simple games fail to have pure strategy
equilibria at all. We address this by showing that a natural decision problem
about mixed equilibria, namely determining whether a Boolean game has a mixed
strategy equilibrium that guarantees every player a given payoff, is NEXP-hard.
Accordingly, the $\epsilon$ variety of the problem is NEXP-complete. The proof
can be adapted to show coNEXP-hardness of a similar question: whether all Nash
equilibria of a Boolean game guarantee every player at least the given payoff.
",luke ong,,2013.0,,arXiv,Ianovski2013,True,,arXiv,Not available,EGuaranteeNash for Boolean Games is NEXP-Hard,272e38fb774a62e2e3d5de3b3ab51114,http://arxiv.org/abs/1312.4114v1
16376," We present our results on Uniform Price Auctions, one of the standard
sealed-bid multi-unit auction formats, for selling multiple identical units of
a single good to multi-demand bidders. Contrary to the truthful and
economically efficient multi-unit Vickrey auction, the Uniform Price Auction
encourages strategic bidding and is socially inefficient in general. The
uniform pricing rule is, however, widely popular owing to its appeal to the
natural expectation that identical items should be identically priced. In this work
we study equilibria of the Uniform Price Auction for bidders with (symmetric)
submodular valuation functions, over the number of units that they win. We
investigate pure Nash equilibria of the auction in undominated strategies; we
produce a characterization of these equilibria that allows us to prove that a
fraction 1-1/e of the optimum social welfare is always recovered in undominated
pure Nash equilibrium -- and this bound is essentially tight. Subsequently, we
study the auction under the incomplete information setting and prove a bound of
4-2/k on the economic inefficiency of (mixed) Bayes Nash equilibria that are
supported by undominated strategies.
",evangelos markakis,,2012.0,,arXiv,Markakis2012,True,,arXiv,Not available,On the Inefficiency of the Uniform Price Auction,4f68f7005c1cb99e5e15b980fd3dd17a,http://arxiv.org/abs/1211.1860v4
16377," We present our results on Uniform Price Auctions, one of the standard
sealed-bid multi-unit auction formats, for selling multiple identical units of
a single good to multi-demand bidders. Contrary to the truthful and
economically efficient multi-unit Vickrey auction, the Uniform Price Auction
encourages strategic bidding and is socially inefficient in general. The
uniform pricing rule is, however, widely popular owing to its appeal to the
natural expectation that identical items should be identically priced. In this work
we study equilibria of the Uniform Price Auction for bidders with (symmetric)
submodular valuation functions, over the number of units that they win. We
investigate pure Nash equilibria of the auction in undominated strategies; we
produce a characterization of these equilibria that allows us to prove that a
fraction 1-1/e of the optimum social welfare is always recovered in undominated
pure Nash equilibrium -- and this bound is essentially tight. Subsequently, we
study the auction under the incomplete information setting and prove a bound of
4-2/k on the economic inefficiency of (mixed) Bayes Nash equilibria that are
supported by undominated strategies.
",orestis telelis,,2012.0,,arXiv,Markakis2012,True,,arXiv,Not available,On the Inefficiency of the Uniform Price Auction,4f68f7005c1cb99e5e15b980fd3dd17a,http://arxiv.org/abs/1211.1860v4
16378," We study the efficiency of simple auctions in the presence of complements.
[DMSW15] introduced the single-bid auction, and showed that it has a price of
anarchy (PoA) of $O(\log m)$ for complement-free (i.e., subadditive)
valuations. Prior to our work, no non-trivial upper bound on the PoA of single
bid auctions was known for valuations exhibiting complements. We introduce a
hierarchy over valuations, where levels of the hierarchy correspond to the
degree of complementarity, and the PoA of the single bid auction degrades
gracefully with the level of the hierarchy. This hierarchy is a refinement of
the Maximum over Positive Hypergraphs (MPH) hierarchy [FFIILS15], where the
degree of complementarity $d$ is captured by the maximum number of neighbors of
a node in the positive hypergraph representation. We show that the price of
anarchy of the single bid auction for valuations of level $d$ of the hierarchy
is $O(d^2 \log(m/d))$, where $m$ is the number of items. We also establish an
improved upper bound of $O(d \log m)$ for a subclass where every hyperedge in
the positive hypergraph representation is of size at most 2 (but the degree is
still $d$). Finally, we show that randomizing between the single bid auction
and the grand bundle auction has a price of anarchy of at most $O(\sqrt{m})$
for general valuations. All of our results are derived via the smoothness
framework, thus extend to coarse-correlated equilibria and to Bayes Nash
equilibria.
",michal feldman,,2016.0,10.1145/2940716.2940735,arXiv,Feldman2016,True,,arXiv,Not available,Simple Mechanisms For Agents With Complements,256610b16b93d394b1fb44ac5fcfc814,http://arxiv.org/abs/1603.07939v2
16379," We study the efficiency of simple auctions in the presence of complements.
[DMSW15] introduced the single-bid auction, and showed that it has a price of
anarchy (PoA) of $O(\log m)$ for complement-free (i.e., subadditive)
valuations. Prior to our work, no non-trivial upper bound on the PoA of single
bid auctions was known for valuations exhibiting complements. We introduce a
hierarchy over valuations, where levels of the hierarchy correspond to the
degree of complementarity, and the PoA of the single bid auction degrades
gracefully with the level of the hierarchy. This hierarchy is a refinement of
the Maximum over Positive Hypergraphs (MPH) hierarchy [FFIILS15], where the
degree of complementarity $d$ is captured by the maximum number of neighbors of
a node in the positive hypergraph representation. We show that the price of
anarchy of the single bid auction for valuations of level $d$ of the hierarchy
is $O(d^2 \log(m/d))$, where $m$ is the number of items. We also establish an
improved upper bound of $O(d \log m)$ for a subclass where every hyperedge in
the positive hypergraph representation is of size at most 2 (but the degree is
still $d$). Finally, we show that randomizing between the single bid auction
and the grand bundle auction has a price of anarchy of at most $O(\sqrt{m})$
for general valuations. All of our results are derived via the smoothness
framework, thus extend to coarse-correlated equilibria and to Bayes Nash
equilibria.
",ophir friedler,,2016.0,10.1145/2940716.2940735,arXiv,Feldman2016,True,,arXiv,Not available,Simple Mechanisms For Agents With Complements,256610b16b93d394b1fb44ac5fcfc814,http://arxiv.org/abs/1603.07939v2
16380," We study the efficiency of simple auctions in the presence of complements.
[DMSW15] introduced the single-bid auction, and showed that it has a price of
anarchy (PoA) of $O(\log m)$ for complement-free (i.e., subadditive)
valuations. Prior to our work, no non-trivial upper bound on the PoA of single
bid auctions was known for valuations exhibiting complements. We introduce a
hierarchy over valuations, where levels of the hierarchy correspond to the
degree of complementarity, and the PoA of the single bid auction degrades
gracefully with the level of the hierarchy. This hierarchy is a refinement of
the Maximum over Positive Hypergraphs (MPH) hierarchy [FFIILS15], where the
degree of complementarity $d$ is captured by the maximum number of neighbors of
a node in the positive hypergraph representation. We show that the price of
anarchy of the single bid auction for valuations of level $d$ of the hierarchy
is $O(d^2 \log(m/d))$, where $m$ is the number of items. We also establish an
improved upper bound of $O(d \log m)$ for a subclass where every hyperedge in
the positive hypergraph representation is of size at most 2 (but the degree is
still $d$). Finally, we show that randomizing between the single bid auction
and the grand bundle auction has a price of anarchy of at most $O(\sqrt{m})$
for general valuations. All of our results are derived via the smoothness
framework, thus extend to coarse-correlated equilibria and to Bayes Nash
equilibria.
",jamie morgenstern,,2016.0,10.1145/2940716.2940735,arXiv,Feldman2016,True,,arXiv,Not available,Simple Mechanisms For Agents With Complements,256610b16b93d394b1fb44ac5fcfc814,http://arxiv.org/abs/1603.07939v2
16381," We study the efficiency of simple auctions in the presence of complements.
[DMSW15] introduced the single-bid auction, and showed that it has a price of
anarchy (PoA) of $O(\log m)$ for complement-free (i.e., subadditive)
valuations. Prior to our work, no non-trivial upper bound on the PoA of single
bid auctions was known for valuations exhibiting complements. We introduce a
hierarchy over valuations, where levels of the hierarchy correspond to the
degree of complementarity, and the PoA of the single bid auction degrades
gracefully with the level of the hierarchy. This hierarchy is a refinement of
the Maximum over Positive Hypergraphs (MPH) hierarchy [FFIILS15], where the
degree of complementarity $d$ is captured by the maximum number of neighbors of
a node in the positive hypergraph representation. We show that the price of
anarchy of the single bid auction for valuations of level $d$ of the hierarchy
is $O(d^2 \log(m/d))$, where $m$ is the number of items. We also establish an
improved upper bound of $O(d \log m)$ for a subclass where every hyperedge in
the positive hypergraph representation is of size at most 2 (but the degree is
still $d$). Finally, we show that randomizing between the single bid auction
and the grand bundle auction has a price of anarchy of at most $O(\sqrt{m})$
for general valuations. All of our results are derived via the smoothness
framework, thus extend to coarse-correlated equilibria and to Bayes Nash
equilibria.
",guy reiner,,2016.0,10.1145/2940716.2940735,arXiv,Feldman2016,True,,arXiv,Not available,Simple Mechanisms For Agents With Complements,256610b16b93d394b1fb44ac5fcfc814,http://arxiv.org/abs/1603.07939v2
16382," Many auction settings implicitly or explicitly require that bidders are
treated equally ex-ante. This may be because discrimination is philosophically
or legally impermissible, or because it is practically difficult to implement
or impossible to enforce. We study so-called {\em anonymous} auctions to
understand the revenue tradeoffs and to develop simple anonymous auctions that
are approximately optimal.
We consider digital goods settings and show that the optimal anonymous,
dominant strategy incentive compatible auction has an intuitive structure ---
imagine that bidders are randomly permuted before the auction, then infer a
posterior belief about bidder i's valuation from the values of other bidders
and set a posted price that maximizes revenue given this posterior.
We prove that no anonymous mechanism can guarantee an approximation better
than O(n) to the optimal revenue in the worst case (or O(log n) for regular
distributions) and that even posted price mechanisms match those guarantees.
Understanding that the real power of anonymous mechanisms comes when the
auctioneer can infer the bidder identities accurately, we show a tight O(k)
approximation guarantee when each bidder can be confused with at most k ""higher
types"". Moreover, we introduce a simple mechanism based on n target prices that
is asymptotically optimal and build on this mechanism to extend our results to
m-unit auctions and sponsored search.
",christos tzamos,,2014.0,,arXiv,Tzamos2014,True,,arXiv,Not available,The Value of Knowing Your Enemy,96386a0fadb1e4af86d74327b4dcc867,http://arxiv.org/abs/1411.1379v1
16383," Many auction settings implicitly or explicitly require that bidders are
treated equally ex-ante. This may be because discrimination is philosophically
or legally impermissible, or because it is practically difficult to implement
or impossible to enforce. We study so-called {\em anonymous} auctions to
understand the revenue tradeoffs and to develop simple anonymous auctions that
are approximately optimal.
We consider digital goods settings and show that the optimal anonymous,
dominant strategy incentive compatible auction has an intuitive structure ---
imagine that bidders are randomly permuted before the auction, then infer a
posterior belief about bidder i's valuation from the values of other bidders
and set a posted price that maximizes revenue given this posterior.
We prove that no anonymous mechanism can guarantee an approximation better
than O(n) to the optimal revenue in the worst case (or O(log n) for regular
distributions) and that even posted price mechanisms match those guarantees.
Understanding that the real power of anonymous mechanisms comes when the
auctioneer can infer the bidder identities accurately, we show a tight O(k)
approximation guarantee when each bidder can be confused with at most k ""higher
types"". Moreover, we introduce a simple mechanism based on n target prices that
is asymptotically optimal and build on this mechanism to extend our results to
m-unit auctions and sponsored search.
",christopher wilkens,,2014.0,,arXiv,Tzamos2014,True,,arXiv,Not available,The Value of Knowing Your Enemy,96386a0fadb1e4af86d74327b4dcc867,http://arxiv.org/abs/1411.1379v1
16384," We study the problem of selling a resource through an auction mechanism. The
winning buyer in turn develops this resource to generate profit. Two forms of
payment are considered: charging the winning buyer a one-time payment, or an
initial payment plus a profit sharing contract (PSC). We consider a symmetric
interdependent values model with risk averse or risk neutral buyers and a risk
neutral seller. For the second price auction and the English auction, we show
that the seller's expected total revenue from the auction where he also takes a
fraction of the positive profit is higher than the expected revenue from the
auction with only a one-time payment. Moreover, the seller can generate an even
higher expected total revenue if, in addition to taking a fraction of the
positive profit, he also takes the same fraction of any loss incurred from
developing the resource. Moving beyond simple PSCs, we show that the auction
with a PSC from a very general class generates higher expected total revenue
than the auction with only a one-time payment. Finally, we show that suitable
PSCs provide higher expected total revenue than a one-time payment even when
the incentives of the winning buyer to develop the resource must be addressed
by the seller.
",vineet abhishek,,2011.0,,arXiv,Abhishek2011,True,,arXiv,Not available,Auctions with a Profit Sharing Contract,b4a1d63ac3c865159de35e530f1e249d,http://arxiv.org/abs/1102.3195v5
16385," Modern organizations (e.g., hospitals, social networks, government agencies)
rely heavily on audit to detect and punish insiders who inappropriately access
and disclose confidential information. Recent work on audit games models the
strategic interaction between an auditor with a single audit resource and
auditees as a Stackelberg game, augmenting associated well-studied security
games with a configurable punishment parameter. We significantly generalize
this audit game model to account for multiple audit resources where each
resource is restricted to audit a subset of all potential violations, thus
enabling application to practical auditing scenarios. We provide an FPTAS that
computes an approximately optimal solution to the resulting non-convex
optimization problem. The main technical novelty is in the design and
correctness proof of an optimization transformation that enables the
construction of this FPTAS. In addition, we experimentally demonstrate that
this transformation significantly speeds up computation of solutions for a
class of audit games and security games.
",jeremiah blocki,,2014.0,,arXiv,Blocki2014,True,,arXiv,Not available,Audit Games with Multiple Defender Resources,08a641620dd57ba419821d6e8f9ce5bf,http://arxiv.org/abs/1409.4503v3
16386," We study the problem of selling a resource through an auction mechanism. The
winning buyer in turn develops this resource to generate profit. Two forms of
payment are considered: charging the winning buyer a one-time payment, or an
initial payment plus a profit sharing contract (PSC). We consider a symmetric
interdependent values model with risk averse or risk neutral buyers and a risk
neutral seller. For the second price auction and the English auction, we show
that the seller's expected total revenue from the auction where he also takes a
fraction of the positive profit is higher than the expected revenue from the
auction with only a one-time payment. Moreover, the seller can generate an even
higher expected total revenue if, in addition to taking a fraction of the
positive profit, he also takes the same fraction of any loss incurred from
developing the resource. Moving beyond simple PSCs, we show that the auction
with a PSC from a very general class generates higher expected total revenue
than the auction with only a one-time payment. Finally, we show that suitable
PSCs provide higher expected total revenue than a one-time payment even when
the incentives of the winning buyer to develop the resource must be addressed
by the seller.
",bruce hajek,,2011.0,,arXiv,Abhishek2011,True,,arXiv,Not available,Auctions with a Profit Sharing Contract,b4a1d63ac3c865159de35e530f1e249d,http://arxiv.org/abs/1102.3195v5
16387," We study the problem of selling a resource through an auction mechanism. The
winning buyer in turn develops this resource to generate profit. Two forms of
payment are considered: charging the winning buyer a one-time payment, or an
initial payment plus a profit sharing contract (PSC). We consider a symmetric
interdependent values model with risk averse or risk neutral buyers and a risk
neutral seller. For the second price auction and the English auction, we show
that the seller's expected total revenue from the auction where he also takes a
fraction of the positive profit is higher than the expected revenue from the
auction with only a one-time payment. Moreover, the seller can generate an even
higher expected total revenue if, in addition to taking a fraction of the
positive profit, he also takes the same fraction of any loss incurred from
developing the resource. Moving beyond simple PSCs, we show that the auction
with a PSC from a very general class generates higher expected total revenue
than the auction with only a one-time payment. Finally, we show that suitable
PSCs provide higher expected total revenue than a one-time payment even when
the incentives of the winning buyer to develop the resource must be addressed
by the seller.
",steven williams,,2011.0,,arXiv,Abhishek2011,True,,arXiv,Not available,Auctions with a Profit Sharing Contract,b4a1d63ac3c865159de35e530f1e249d,http://arxiv.org/abs/1102.3195v5
16388," Peer-to-peer communication has recently attracted considerable attention
for local area services. An innovative resource allocation scheme is proposed
to improve the performance of mobile peer-to-peer, i.e., device-to-device
(D2D), communications as an underlay in the downlink (DL) cellular networks. To
optimize the system sum rate over the resource sharing of both D2D and cellular
modes, we introduce a reverse iterative combinatorial auction as the allocation
mechanism. In the auction, all the spectrum resources are considered as a set
of resource units, which, as bidders, compete to obtain business, while the
packages of the D2D pairs are auctioned off as goods in each auction round. We
first formulate the valuation of each resource unit as a basis for the proposed
auction. Then a detailed non-monotonic descending-price auction algorithm is
presented, based on a utility function that accounts for the channel gain from
D2D and the costs to the system. Further, we prove that the proposed
auction-based scheme is cheat-proof and converges in a finite number of
iteration rounds. We explain the non-monotonicity in the price update process
and show lower complexity compared to a traditional combinatorial allocation.
The simulation results demonstrate that the algorithm efficiently achieves good
performance on the system sum rate.
",chen xu,,2012.0,10.1109/JSAC.2013.SUP.0513031,arXiv,Xu2012,True,,arXiv,Not available,"Efficiency Resource Allocation for Device-to-Device Underlay
Communication Systems: A Reverse Iterative Combinatorial Auction Based
Approach",dc68d6c76b39d729c40d710862f1da6e,http://arxiv.org/abs/1211.2065v1
16389," Peer-to-peer communication has recently attracted considerable attention
for local area services. An innovative resource allocation scheme is proposed
to improve the performance of mobile peer-to-peer, i.e., device-to-device
(D2D), communications as an underlay in the downlink (DL) cellular networks. To
optimize the system sum rate over the resource sharing of both D2D and cellular
modes, we introduce a reverse iterative combinatorial auction as the allocation
mechanism. In the auction, all the spectrum resources are considered as a set
of resource units, which, as bidders, compete to obtain business, while the
packages of the D2D pairs are auctioned off as goods in each auction round. We
first formulate the valuation of each resource unit as a basis for the proposed
auction. Then a detailed non-monotonic descending-price auction algorithm is
presented, based on a utility function that accounts for the channel gain from
D2D and the costs to the system. Further, we prove that the proposed
auction-based scheme is cheat-proof and converges in a finite number of
iteration rounds. We explain the non-monotonicity in the price update process
and show lower complexity compared to a traditional combinatorial allocation.
The simulation results demonstrate that the algorithm efficiently achieves good
performance on the system sum rate.
",lingyang song,,2012.0,10.1109/JSAC.2013.SUP.0513031,arXiv,Xu2012,True,,arXiv,Not available,"Efficiency Resource Allocation for Device-to-Device Underlay
Communication Systems: A Reverse Iterative Combinatorial Auction Based
Approach",dc68d6c76b39d729c40d710862f1da6e,http://arxiv.org/abs/1211.2065v1
16390," Peer-to-peer communication has recently attracted considerable attention
for local area services. An innovative resource allocation scheme is proposed
to improve the performance of mobile peer-to-peer, i.e., device-to-device
(D2D), communications as an underlay in the downlink (DL) cellular networks. To
optimize the system sum rate over the resource sharing of both D2D and cellular
modes, we introduce a reverse iterative combinatorial auction as the allocation
mechanism. In the auction, all the spectrum resources are considered as a set
of resource units, which, as bidders, compete to obtain business, while the
packages of the D2D pairs are auctioned off as goods in each auction round. We
first formulate the valuation of each resource unit as a basis for the proposed
auction. Then a detailed non-monotonic descending-price auction algorithm is
presented, based on a utility function that accounts for the channel gain from
D2D and the costs to the system. Further, we prove that the proposed
auction-based scheme is cheat-proof and converges in a finite number of
iteration rounds. We explain the non-monotonicity in the price update process
and show lower complexity compared to a traditional combinatorial allocation.
The simulation results demonstrate that the algorithm efficiently achieves good
performance on the system sum rate.
",zhu han,,2012.0,10.1109/JSAC.2013.SUP.0513031,arXiv,Xu2012,True,,arXiv,Not available,"Efficiency Resource Allocation for Device-to-Device Underlay
Communication Systems: A Reverse Iterative Combinatorial Auction Based
Approach",dc68d6c76b39d729c40d710862f1da6e,http://arxiv.org/abs/1211.2065v1
16391," Peer-to-peer communication has recently attracted considerable attention
for local area services. An innovative resource allocation scheme is proposed
to improve the performance of mobile peer-to-peer, i.e., device-to-device
(D2D), communications as an underlay in the downlink (DL) cellular networks. To
optimize the system sum rate over the resource sharing of both D2D and cellular
modes, we introduce a reverse iterative combinatorial auction as the allocation
mechanism. In the auction, all the spectrum resources are considered as a set
of resource units, which, as bidders, compete to obtain business, while the
packages of the D2D pairs are auctioned off as goods in each auction round. We
first formulate the valuation of each resource unit as a basis for the proposed
auction. Then a detailed non-monotonic descending-price auction algorithm is
presented, based on a utility function that accounts for the channel gain from
D2D and the costs to the system. Further, we prove that the proposed
auction-based scheme is cheat-proof and converges in a finite number of
iteration rounds. We explain the non-monotonicity in the price update process
and show lower complexity compared to a traditional combinatorial allocation.
The simulation results demonstrate that the algorithm efficiently achieves good
performance on the system sum rate.
",qun zhao,,2012.0,10.1109/JSAC.2013.SUP.0513031,arXiv,Xu2012,True,,arXiv,Not available,"Efficiency Resource Allocation for Device-to-Device Underlay
Communication Systems: A Reverse Iterative Combinatorial Auction Based
Approach",dc68d6c76b39d729c40d710862f1da6e,http://arxiv.org/abs/1211.2065v1
16392," Peer-to-peer communication has recently attracted considerable attention
for local area services. An innovative resource allocation scheme is proposed
to improve the performance of mobile peer-to-peer, i.e., device-to-device
(D2D), communications as an underlay in the downlink (DL) cellular networks. To
optimize the system sum rate over the resource sharing of both D2D and cellular
modes, we introduce a reverse iterative combinatorial auction as the allocation
mechanism. In the auction, all the spectrum resources are considered as a set
of resource units, which, as bidders, compete to obtain business, while the
packages of the D2D pairs are auctioned off as goods in each auction round. We
first formulate the valuation of each resource unit as a basis for the proposed
auction. Then a detailed non-monotonic descending-price auction algorithm is
presented, based on a utility function that accounts for the channel gain from
D2D and the costs to the system. Further, we prove that the proposed
auction-based scheme is cheat-proof and converges in a finite number of
iteration rounds. We explain the non-monotonicity in the price update process
and show lower complexity compared to a traditional combinatorial allocation.
The simulation results demonstrate that the algorithm efficiently achieves good
performance on the system sum rate.
",xiaoli wang,,2012.0,10.1109/JSAC.2013.SUP.0513031,arXiv,Xu2012,True,,arXiv,Not available,"Efficiency Resource Allocation for Device-to-Device Underlay
Communication Systems: A Reverse Iterative Combinatorial Auction Based
Approach",dc68d6c76b39d729c40d710862f1da6e,http://arxiv.org/abs/1211.2065v1
16393," Peer-to-peer communication has recently attracted considerable attention
for local area services. An innovative resource allocation scheme is proposed
to improve the performance of mobile peer-to-peer, i.e., device-to-device
(D2D), communications as an underlay in the downlink (DL) cellular networks. To
optimize the system sum rate over the resource sharing of both D2D and cellular
modes, we introduce a reverse iterative combinatorial auction as the allocation
mechanism. In the auction, all the spectrum resources are considered as a set
of resource units, which, as bidders, compete to obtain business, while the
packages of the D2D pairs are auctioned off as goods in each auction round. We
first formulate the valuation of each resource unit as a basis for the proposed
auction. Then a detailed non-monotonic descending-price auction algorithm is
presented, based on a utility function that accounts for the channel gain from
D2D and the costs to the system. Further, we prove that the proposed
auction-based scheme is cheat-proof and converges in a finite number of
iteration rounds. We explain the non-monotonicity in the price update process
and show lower complexity compared to a traditional combinatorial allocation.
The simulation results demonstrate that the algorithm efficiently achieves good
performance on the system sum rate.
",xiang cheng,,2012.0,10.1109/JSAC.2013.SUP.0513031,arXiv,Xu2012,True,,arXiv,Not available,"Efficiency Resource Allocation for Device-to-Device Underlay
Communication Systems: A Reverse Iterative Combinatorial Auction Based
Approach",dc68d6c76b39d729c40d710862f1da6e,http://arxiv.org/abs/1211.2065v1
16394," Peer-to-peer communication has recently attracted considerable attention
for local area services. An innovative resource allocation scheme is proposed
to improve the performance of mobile peer-to-peer, i.e., device-to-device
(D2D), communications as an underlay in the downlink (DL) cellular networks. To
optimize the system sum rate over the resource sharing of both D2D and cellular
modes, we introduce a reverse iterative combinatorial auction as the allocation
mechanism. In the auction, all the spectrum resources are considered as a set
of resource units, which, as bidders, compete to obtain business, while the
packages of the D2D pairs are auctioned off as goods in each auction round. We
first formulate the valuation of each resource unit as a basis for the proposed
auction. Then a detailed non-monotonic descending-price auction algorithm is
presented, based on a utility function that accounts for the channel gain from
D2D and the costs to the system. Further, we prove that the proposed
auction-based scheme is cheat-proof and converges in a finite number of
iteration rounds. We explain the non-monotonicity in the price update process
and show lower complexity compared to a traditional combinatorial allocation.
The simulation results demonstrate that the algorithm efficiently achieves good
performance on the system sum rate.
",bingli jiao,,2012.0,10.1109/JSAC.2013.SUP.0513031,arXiv,Xu2012,True,,arXiv,Not available,"Efficiency Resource Allocation for Device-to-Device Underlay
Communication Systems: A Reverse Iterative Combinatorial Auction Based
Approach",dc68d6c76b39d729c40d710862f1da6e,http://arxiv.org/abs/1211.2065v1
16395," In this paper, we study sequential auctions with two budget-constrained
bidders and any number of identical items. All prior results on such auctions
consider only two items. We construct a canonical outcome of the auction that
is the only natural equilibrium and is unique under a refinement of subgame
perfect equilibria. We show certain interesting properties of this equilibrium;
for instance, we show that the prices decrease as the auction progresses. This
phenomenon has been observed in many experiments, and previous theoretical work
attributed it to features such as uncertainty in the supply or risk-averse
bidders. We show that such features are not needed for this phenomenon and that
it arises purely from the most essential features: budget constraints and the
sequential nature of the auction. Somewhat surprisingly, we also show that in
this equilibrium one agent wins all his items in the beginning and then the
other agent wins the rest. The major difficulty in analyzing such sequential
auctions has been in understanding how the selling prices of the first few
rounds affect the utilities of the agents in the later rounds. We tackle this
difficulty by identifying certain key properties of the auction, and the proof
proceeds via a joint induction on all of them.
",zhiyi huang,,2012.0,,arXiv,Huang2012,True,,arXiv,Not available,Sequential Auctions of Identical Items with Budget-Constrained Bidders,0cb4828242c6fcb68bb7c024421c8ec3,http://arxiv.org/abs/1209.1698v1
16396," Modern organizations (e.g., hospitals, social networks, government agencies)
rely heavily on audit to detect and punish insiders who inappropriately access
and disclose confidential information. Recent work on audit games models the
strategic interaction between an auditor with a single audit resource and
auditees as a Stackelberg game, augmenting associated well-studied security
games with a configurable punishment parameter. We significantly generalize
this audit game model to account for multiple audit resources where each
resource is restricted to audit a subset of all potential violations, thus
enabling application to practical auditing scenarios. We provide an FPTAS that
computes an approximately optimal solution to the resulting non-convex
optimization problem. The main technical novelty is in the design and
correctness proof of an optimization transformation that enables the
construction of this FPTAS. In addition, we experimentally demonstrate that
this transformation significantly speeds up computation of solutions for a
class of audit games and security games.
",nicolas christin,,2014.0,,arXiv,Blocki2014,True,,arXiv,Not available,Audit Games with Multiple Defender Resources,08a641620dd57ba419821d6e8f9ce5bf,http://arxiv.org/abs/1409.4503v3
16397," In this paper, we study sequential auctions with two budget-constrained
bidders and any number of identical items. All prior results on such auctions
consider only two items. We construct a canonical outcome of the auction that
is the only natural equilibrium and is unique under a refinement of subgame
perfect equilibria. We show certain interesting properties of this equilibrium;
for instance, we show that the prices decrease as the auction progresses. This
phenomenon has been observed in many experiments, and previous theoretical work
attributed it to features such as uncertainty in the supply or risk-averse
bidders. We show that such features are not needed for this phenomenon and that
it arises purely from the most essential features: budget constraints and the
sequential nature of the auction. Somewhat surprisingly, we also show that in
this equilibrium one agent wins all his items in the beginning and then the
other agent wins the rest. The major difficulty in analyzing such sequential
auctions has been in understanding how the selling prices of the first few
rounds affect the utilities of the agents in the later rounds. We tackle this
difficulty by identifying certain key properties of the auction, and the proof
proceeds via a joint induction on all of them.
",nikhil devanur,,2012.0,,arXiv,Huang2012,True,,arXiv,Not available,Sequential Auctions of Identical Items with Budget-Constrained Bidders,0cb4828242c6fcb68bb7c024421c8ec3,http://arxiv.org/abs/1209.1698v1
16398," In this paper, we study sequential auctions with two budget-constrained
bidders and any number of identical items. All prior results on such auctions
consider only two items. We construct a canonical outcome of the auction that
is the only natural equilibrium and is unique under a refinement of subgame
perfect equilibria. We show certain interesting properties of this equilibrium;
for instance, we show that the prices decrease as the auction progresses. This
phenomenon has been observed in many experiments, and previous theoretical work
attributed it to features such as uncertainty in the supply or risk-averse
bidders. We show that such features are not needed for this phenomenon and that
it arises purely from the most essential features: budget constraints and the
sequential nature of the auction. Somewhat surprisingly, we also show that in
this equilibrium one agent wins all his items in the beginning and then the
other agent wins the rest. The major difficulty in analyzing such sequential
auctions has been in understanding how the selling prices of the first few
rounds affect the utilities of the agents in the later rounds. We tackle this
difficulty by identifying certain key properties of the auction, and the proof
proceeds via a joint induction on all of them.
",david malec,,2012.0,,arXiv,Huang2012,True,,arXiv,Not available,Sequential Auctions of Identical Items with Budget-Constrained Bidders,0cb4828242c6fcb68bb7c024421c8ec3,http://arxiv.org/abs/1209.1698v1
16399," The recent proliferation of increasingly capable mobile devices has given
rise to mobile crowd sensing (MCS) systems that outsource the collection of
sensory data to a crowd of participating workers carrying various mobile
devices. Aware of the paramount importance of effectively incentivizing
participation in such systems, the research community has proposed a wide
variety of incentive mechanisms. However, unlike most existing mechanisms,
which assume the existence of only one data requester, we consider MCS systems
with multiple data requesters, which are actually more common in practice.
Specifically, our incentive mechanism is based on double auction, and is able
to stimulate the participation of both data requesters and workers. In
practice, the incentive mechanism is typically not an isolated module, but
interacts with the data aggregation mechanism that aggregates workers' data.
For this reason, we propose CENTURION, a novel integrated framework for
multi-requester MCS systems, consisting of the aforementioned incentive and
data aggregation mechanisms. CENTURION's incentive mechanism satisfies
truthfulness, individual rationality, and computational efficiency, and
guarantees non-negative social welfare, while its data aggregation mechanism
generates highly accurate aggregated results. The desirable properties of
CENTURION are validated through both theoretical analysis and extensive
simulations.
",haiming jin,,2017.0,,arXiv,Jin2017,True,,arXiv,Not available,CENTURION: Incentivizing Multi-Requester Mobile Crowd Sensing,8ac9eee546c559c6c36e8a0549e3c3a7,http://arxiv.org/abs/1701.01533v1
16400," The recent proliferation of increasingly capable mobile devices has given
rise to mobile crowd sensing (MCS) systems that outsource the collection of
sensory data to a crowd of participating workers carrying various mobile
devices. Aware of the paramount importance of effectively incentivizing
participation in such systems, the research community has proposed a wide
variety of incentive mechanisms. However, unlike most existing mechanisms,
which assume the existence of only one data requester, we consider MCS systems
with multiple data requesters, which are actually more common in practice.
Specifically, our incentive mechanism is based on double auction, and is able
to stimulate the participation of both data requesters and workers. In
practice, the incentive mechanism is typically not an isolated module, but
interacts with the data aggregation mechanism that aggregates workers' data.
For this reason, we propose CENTURION, a novel integrated framework for
multi-requester MCS systems, consisting of the aforementioned incentive and
data aggregation mechanisms. CENTURION's incentive mechanism satisfies
truthfulness, individual rationality, and computational efficiency, and
guarantees non-negative social welfare, while its data aggregation mechanism
generates highly accurate aggregated results. The desirable properties of
CENTURION are validated through both theoretical analysis and extensive
simulations.
",lu su,,2017.0,,arXiv,Jin2017,True,,arXiv,Not available,CENTURION: Incentivizing Multi-Requester Mobile Crowd Sensing,8ac9eee546c559c6c36e8a0549e3c3a7,http://arxiv.org/abs/1701.01533v1
16401," The recent proliferation of increasingly capable mobile devices has given
rise to mobile crowd sensing (MCS) systems that outsource the collection of
sensory data to a crowd of participating workers carrying various mobile
devices. Aware of the paramount importance of effectively incentivizing
participation in such systems, the research community has proposed a wide
variety of incentive mechanisms. However, unlike most existing mechanisms,
which assume the existence of only one data requester, we consider MCS systems
with multiple data requesters, which are actually more common in practice.
Specifically, our incentive mechanism is based on double auction, and is able
to stimulate the participation of both data requesters and workers. In
practice, the incentive mechanism is typically not an isolated module, but
interacts with the data aggregation mechanism that aggregates workers' data.
For this reason, we propose CENTURION, a novel integrated framework for
multi-requester MCS systems, consisting of the aforementioned incentive and
data aggregation mechanisms. CENTURION's incentive mechanism satisfies
truthfulness, individual rationality, and computational efficiency, and
guarantees non-negative social welfare, while its data aggregation mechanism
generates highly accurate aggregated results. The desirable properties of
CENTURION are validated through both theoretical analysis and extensive
simulations.
",klara nahrstedt,,2017.0,,arXiv,Jin2017,True,,arXiv,Not available,CENTURION: Incentivizing Multi-Requester Mobile Crowd Sensing,8ac9eee546c559c6c36e8a0549e3c3a7,http://arxiv.org/abs/1701.01533v1
16402," Mobile crowdsensing (MCS) is a promising sensing paradigm that leverages
the diverse embedded sensors in massive numbers of mobile devices. A key
objective in MCS is to efficiently schedule mobile users to perform multiple
sensing tasks. Prior work mainly focused on interactions between the task-layer
and the user-layer, without considering tasks' similar data requirements and
users' heterogeneous sensing capabilities. In this work, we propose a
three-layer data-centric MCS model by introducing a new data-layer between
tasks and users, enabling different tasks to reuse the same data items and
hence effectively leveraging both task similarities and user heterogeneities.
We formulate a joint task selection and user scheduling problem based on the
new framework, aiming at maximizing social welfare. We first analyze the
centralized optimization problem with the statistical information of tasks and
users, and show the bound on the social welfare gain due to data reuse. Then we
consider the two-sided information asymmetry of selfish task-owners and users,
and propose a decentralized market mechanism for achieving the centralized
social optimality. In particular, considering the NP-hardness of the
optimization, we propose a truthful two-sided randomized auction mechanism that
ensures computational efficiency and close-to-optimal performance. Simulations
verify the effectiveness of our proposed model and mechanism.
",changkun jiang,,2017.0,,arXiv,Jiang2017,True,,arXiv,Not available,Data-Centric Mobile Crowdsensing,d490a07e33895548f90de3aef3852a8e,http://arxiv.org/abs/1705.06055v1
16403," Mobile crowdsensing (MCS) is a promising sensing paradigm that leverages
the diverse embedded sensors in massive numbers of mobile devices. A key
objective in MCS is to efficiently schedule mobile users to perform multiple
sensing tasks. Prior work mainly focused on interactions between the task-layer
and the user-layer, without considering tasks' similar data requirements and
users' heterogeneous sensing capabilities. In this work, we propose a
three-layer data-centric MCS model by introducing a new data-layer between
tasks and users, enabling different tasks to reuse the same data items and
hence effectively leveraging both task similarities and user heterogeneities.
We formulate a joint task selection and user scheduling problem based on the
new framework, aiming at maximizing social welfare. We first analyze the
centralized optimization problem with the statistical information of tasks and
users, and show the bound on the social welfare gain due to data reuse. Then we
consider the two-sided information asymmetry of selfish task-owners and users,
and propose a decentralized market mechanism for achieving the centralized
social optimality. In particular, considering the NP-hardness of the
optimization, we propose a truthful two-sided randomized auction mechanism that
ensures computational efficiency and close-to-optimal performance. Simulations
verify the effectiveness of our proposed model and mechanism.
",lin gao,,2017.0,,arXiv,Jiang2017,True,,arXiv,Not available,Data-Centric Mobile Crowdsensing,d490a07e33895548f90de3aef3852a8e,http://arxiv.org/abs/1705.06055v1
16404," Mobile crowdsensing (MCS) is a promising sensing paradigm that leverages
the diverse embedded sensors in massive numbers of mobile devices. A key
objective in MCS is to efficiently schedule mobile users to perform multiple
sensing tasks. Prior work mainly focused on interactions between the task-layer
and the user-layer, without considering tasks' similar data requirements and
users' heterogeneous sensing capabilities. In this work, we propose a
three-layer data-centric MCS model by introducing a new data-layer between
tasks and users, enabling different tasks to reuse the same data items and
hence effectively leveraging both task similarities and user heterogeneities.
We formulate a joint task selection and user scheduling problem based on the
new framework, aiming at maximizing social welfare. We first analyze the
centralized optimization problem with the statistical information of tasks and
users, and show the bound on the social welfare gain due to data reuse. Then we
consider the two-sided information asymmetry of selfish task-owners and users,
and propose a decentralized market mechanism for achieving the centralized
social optimality. In particular, considering the NP-hardness of the
optimization, we propose a truthful two-sided randomized auction mechanism that
ensures computational efficiency and close-to-optimal performance. Simulations
verify the effectiveness of our proposed model and mechanism.
",lingjie duan,,2017.0,,arXiv,Jiang2017,True,,arXiv,Not available,Data-Centric Mobile Crowdsensing,d490a07e33895548f90de3aef3852a8e,http://arxiv.org/abs/1705.06055v1
16405," Mobile crowdsensing (MCS) is a promising sensing paradigm that leverages
the diverse embedded sensors in massive numbers of mobile devices. A key
objective in MCS is to efficiently schedule mobile users to perform multiple
sensing tasks. Prior work mainly focused on interactions between the task-layer
and the user-layer, without considering tasks' similar data requirements and
users' heterogeneous sensing capabilities. In this work, we propose a
three-layer data-centric MCS model by introducing a new data-layer between
tasks and users, enabling different tasks to reuse the same data items and
hence effectively leveraging both task similarities and user heterogeneities.
We formulate a joint task selection and user scheduling problem based on the
new framework, aiming at maximizing social welfare. We first analyze the
centralized optimization problem with the statistical information of tasks and
users, and show the bound on the social welfare gain due to data reuse. Then we
consider the two-sided information asymmetry of selfish task-owners and users,
and propose a decentralized market mechanism for achieving the centralized
social optimality. In particular, considering the NP-hardness of the
optimization, we propose a truthful two-sided randomized auction mechanism that
ensures computational efficiency and close-to-optimal performance. Simulations
verify the effectiveness of our proposed model and mechanism.
",jianwei huang,,2017.0,,arXiv,Jiang2017,True,,arXiv,Not available,Data-Centric Mobile Crowdsensing,d490a07e33895548f90de3aef3852a8e,http://arxiv.org/abs/1705.06055v1
16406," We develop an optimization model and corresponding algorithm for the
management of a demand-side platform (DSP), whereby the DSP aims to maximize
its own profit while acquiring valuable impressions for its advertiser clients.
We formulate the problem of profit maximization for a DSP interacting with ad
exchanges in a real-time bidding environment under a
cost-per-click/cost-per-action pricing model. Our proposed formulation leads to
a nonconvex optimization problem due to the joint optimization over both
impression allocation and bid price decisions. We use Lagrangian relaxation to
develop a tractable convex dual problem, which, due to the properties of
second-price auctions, may be solved efficiently with subgradient methods. We
propose a two-phase solution procedure, whereby in the first phase we solve the
convex dual problem using a subgradient algorithm, and in the second phase we
use the previously computed dual solution to set bid prices and then solve a
linear optimization problem to obtain the allocation probability variables. On
several synthetic examples, we demonstrate that our proposed solution approach
leads to superior performance over a baseline method that is used in practice.
",paul grigas,,2017.0,,arXiv,Grigas2017,True,,arXiv,Not available,Profit Maximization for Online Advertising Demand-Side Platforms,1c416796828edfc80b2fb22f945c82a5,http://arxiv.org/abs/1706.01614v1
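The two-phase procedure in the abstract above first solves a convex dual via a subgradient algorithm. As a generic, hypothetical sketch (not the paper's actual DSP model; the piecewise-linear objective and data below are illustrative only), a projected subgradient method for minimizing a convex dual over the nonnegative orthant looks like this:

```python
import math

def projected_subgradient(pieces, dim, steps=2000, step0=1.0):
    """Minimize g(x) = max_k (a_k . x + b_k) over x >= 0 by subgradient descent.

    pieces: list of (a_k, b_k) pairs, where a_k is a slope vector (list) and
    b_k a float. Returns the best point found and its objective value.
    """
    x = [0.0] * dim

    def val(x):
        return max(sum(a[i] * x[i] for i in range(dim)) + b for a, b in pieces)

    best_x, best_v = list(x), val(x)
    for t in range(1, steps + 1):
        # the slope of an active (maximizing) piece is a subgradient at x
        a, b = max(pieces, key=lambda ab: sum(ab[0][i] * x[i] for i in range(dim)) + ab[1])
        step = step0 / math.sqrt(t)  # diminishing step sizes
        x = [max(x[i] - step * a[i], 0.0) for i in range(dim)]  # project onto x >= 0
        v = val(x)
        if v < best_v:
            best_x, best_v = list(x), v
    return best_x, best_v

# Toy dual: g(x) = max(x1, 1 - x1, x2, 1 - x2), minimized at (0.5, 0.5) with value 0.5
pieces = [([1.0, 0.0], 0.0), ([-1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([0.0, -1.0], 1.0)]
x, v = projected_subgradient(pieces, dim=2)
```

In the paper's setting, the dual solution from this phase would then be used to set bid prices before the phase-two linear program; here it only illustrates the subgradient step itself.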
16407," Modern organizations (e.g., hospitals, social networks, government agencies)
rely heavily on audit to detect and punish insiders who inappropriately access
and disclose confidential information. Recent work on audit games models the
strategic interaction between an auditor with a single audit resource and
auditees as a Stackelberg game, augmenting associated well-studied security
games with a configurable punishment parameter. We significantly generalize
this audit game model to account for multiple audit resources where each
resource is restricted to audit a subset of all potential violations, thus
enabling application to practical auditing scenarios. We provide an FPTAS that
computes an approximately optimal solution to the resulting non-convex
optimization problem. The main technical novelty is in the design and
correctness proof of an optimization transformation that enables the
construction of this FPTAS. In addition, we experimentally demonstrate that
this transformation significantly speeds up computation of solutions for a
class of audit games and security games.
",anupam datta,,2014.0,,arXiv,Blocki2014,True,,arXiv,Not available,Audit Games with Multiple Defender Resources,08a641620dd57ba419821d6e8f9ce5bf,http://arxiv.org/abs/1409.4503v3
16408," We develop an optimization model and corresponding algorithm for the
management of a demand-side platform (DSP), whereby the DSP aims to maximize
its own profit while acquiring valuable impressions for its advertiser clients.
We formulate the problem of profit maximization for a DSP interacting with ad
exchanges in a real-time bidding environment in a
cost-per-click/cost-per-action pricing model. Our proposed formulation leads to
a nonconvex optimization problem due to the joint optimization over both
impression allocation and bid price decisions. We use Lagrangian relaxation to
develop a tractable convex dual problem, which, due to the properties of
second-price auctions, may be solved efficiently with subgradient methods. We
propose a two-phase solution procedure, whereby in the first phase we solve the
convex dual problem using a subgradient algorithm, and in the second phase we
use the previously computed dual solution to set bid prices and then solve a
linear optimization problem to obtain the allocation probability variables. On
several synthetic examples, we demonstrate that our proposed solution approach
leads to superior performance over a baseline method that is used in practice.
",alfonso lobos,,2017.0,,arXiv,Grigas2017,True,,arXiv,Not available,Profit Maximization for Online Advertising Demand-Side Platforms,1c416796828edfc80b2fb22f945c82a5,http://arxiv.org/abs/1706.01614v1
16409," We develop an optimization model and corresponding algorithm for the
management of a demand-side platform (DSP), whereby the DSP aims to maximize
its own profit while acquiring valuable impressions for its advertiser clients.
We formulate the problem of profit maximization for a DSP interacting with ad
exchanges in a real-time bidding environment in a
cost-per-click/cost-per-action pricing model. Our proposed formulation leads to
a nonconvex optimization problem due to the joint optimization over both
impression allocation and bid price decisions. We use Lagrangian relaxation to
develop a tractable convex dual problem, which, due to the properties of
second-price auctions, may be solved efficiently with subgradient methods. We
propose a two-phase solution procedure, whereby in the first phase we solve the
convex dual problem using a subgradient algorithm, and in the second phase we
use the previously computed dual solution to set bid prices and then solve a
linear optimization problem to obtain the allocation probability variables. On
several synthetic examples, we demonstrate that our proposed solution approach
leads to superior performance over a baseline method that is used in practice.
",zheng wen,,2017.0,,arXiv,Grigas2017,True,,arXiv,Not available,Profit Maximization for Online Advertising Demand-Side Platforms,1c416796828edfc80b2fb22f945c82a5,http://arxiv.org/abs/1706.01614v1
16410," We develop an optimization model and corresponding algorithm for the
management of a demand-side platform (DSP), whereby the DSP aims to maximize
its own profit while acquiring valuable impressions for its advertiser clients.
We formulate the problem of profit maximization for a DSP interacting with ad
exchanges in a real-time bidding environment in a
cost-per-click/cost-per-action pricing model. Our proposed formulation leads to
a nonconvex optimization problem due to the joint optimization over both
impression allocation and bid price decisions. We use Lagrangian relaxation to
develop a tractable convex dual problem, which, due to the properties of
second-price auctions, may be solved efficiently with subgradient methods. We
propose a two-phase solution procedure, whereby in the first phase we solve the
convex dual problem using a subgradient algorithm, and in the second phase we
use the previously computed dual solution to set bid prices and then solve a
linear optimization problem to obtain the allocation probability variables. On
several synthetic examples, we demonstrate that our proposed solution approach
leads to superior performance over a baseline method that is used in practice.
",kuang-chih lee,,2017.0,,arXiv,Grigas2017,True,,arXiv,Not available,Profit Maximization for Online Advertising Demand-Side Platforms,1c416796828edfc80b2fb22f945c82a5,http://arxiv.org/abs/1706.01614v1
16411," The question of the minimum menu-size for approximate (i.e.,
up-to-$\varepsilon$) Bayesian revenue maximization when selling two goods to an
additive risk-neutral quasilinear buyer was introduced by Hart and Nisan
(2013), who give an upper bound of $O(\frac{1}{\varepsilon^4})$ for this
problem. Using the optimal-transport duality framework of Daskalakis et al.
(2013, 2015), we derive the first lower bound for this problem - of
$\Omega(\frac{1}{\sqrt[4]{\varepsilon}})$, even when the values for the two
goods are drawn i.i.d. from ""nice"" distributions, establishing how to reason
about approximately optimal mechanisms via this duality framework. This bound
implies, for any fixed number of goods, a tight bound of
$\Theta(\log\frac{1}{\varepsilon})$ on the minimum deterministic communication
complexity guaranteed to suffice for running some approximately
revenue-maximizing mechanism, thereby completely resolving this problem. As a
secondary result, we show that under standard economic assumptions on
distributions, the above upper bound of Hart and Nisan (2013) can be
strengthened to $O(\frac{1}{\varepsilon^2})$.
",yannai gonczarowski,,2017.0,,arXiv,Gonczarowski2017,True,,arXiv,Not available,"Bounding the Menu-Size of Approximately Optimal Auctions via
Optimal-Transport Duality",e3f58022422ecfc2a296e38decdc183f,http://arxiv.org/abs/1708.08907v4
16412," This paper studies the revenue of simple mechanisms in settings where a
third-party data provider is present. When no data provider is present, it is
known that simple mechanisms achieve a constant fraction of the revenue of
optimal mechanisms. The results in this paper demonstrate that this is no
longer true in the presence of a third-party data provider who can provide the
bidder with a signal that is correlated with the item type. Specifically, we
show that even with a single seller, a single bidder, and a single item of
uncertain type for sale, pricing each item-type separately (the analog of item
pricing for multi-item auctions) and bundling all item-types under a single
price (the analog of grand bundling) can both simultaneously be a logarithmic
factor worse than the optimal revenue. Further, in the presence of a data
provider, item-type partitioning mechanisms---a more general class of
mechanisms which divide item-types into disjoint groups and offer prices for
each group---still cannot achieve within a $\log \log$ factor of the optimal
revenue.
",yang cai,,2018.0,,arXiv,Cai2018,True,,arXiv,Not available,Third-Party Data Providers Ruin Simple Mechanisms,1c192bb1573a1c2272ea1c9471439800,http://arxiv.org/abs/1802.07407v1
16413," This paper studies the revenue of simple mechanisms in settings where a
third-party data provider is present. When no data provider is present, it is
known that simple mechanisms achieve a constant fraction of the revenue of
optimal mechanisms. The results in this paper demonstrate that this is no
longer true in the presence of a third-party data provider who can provide the
bidder with a signal that is correlated with the item type. Specifically, we
show that even with a single seller, a single bidder, and a single item of
uncertain type for sale, pricing each item-type separately (the analog of item
pricing for multi-item auctions) and bundling all item-types under a single
price (the analog of grand bundling) can both simultaneously be a logarithmic
factor worse than the optimal revenue. Further, in the presence of a data
provider, item-type partitioning mechanisms---a more general class of
mechanisms which divide item-types into disjoint groups and offer prices for
each group---still cannot achieve within a $\log \log$ factor of the optimal
revenue.
",federico echenique,,2018.0,,arXiv,Cai2018,True,,arXiv,Not available,Third-Party Data Providers Ruin Simple Mechanisms,1c192bb1573a1c2272ea1c9471439800,http://arxiv.org/abs/1802.07407v1
16414," This paper studies the revenue of simple mechanisms in settings where a
third-party data provider is present. When no data provider is present, it is
known that simple mechanisms achieve a constant fraction of the revenue of
optimal mechanisms. The results in this paper demonstrate that this is no
longer true in the presence of a third-party data provider who can provide the
bidder with a signal that is correlated with the item type. Specifically, we
show that even with a single seller, a single bidder, and a single item of
uncertain type for sale, pricing each item-type separately (the analog of item
pricing for multi-item auctions) and bundling all item-types under a single
price (the analog of grand bundling) can both simultaneously be a logarithmic
factor worse than the optimal revenue. Further, in the presence of a data
provider, item-type partitioning mechanisms---a more general class of
mechanisms which divide item-types into disjoint groups and offer prices for
each group---still cannot achieve within a $\log \log$ factor of the optimal
revenue.
",hu fu,,2018.0,,arXiv,Cai2018,True,,arXiv,Not available,Third-Party Data Providers Ruin Simple Mechanisms,1c192bb1573a1c2272ea1c9471439800,http://arxiv.org/abs/1802.07407v1
16415," This paper studies the revenue of simple mechanisms in settings where a
third-party data provider is present. When no data provider is present, it is
known that simple mechanisms achieve a constant fraction of the revenue of
optimal mechanisms. The results in this paper demonstrate that this is no
longer true in the presence of a third-party data provider who can provide the
bidder with a signal that is correlated with the item type. Specifically, we
show that even with a single seller, a single bidder, and a single item of
uncertain type for sale, pricing each item-type separately (the analog of item
pricing for multi-item auctions) and bundling all item-types under a single
price (the analog of grand bundling) can both simultaneously be a logarithmic
factor worse than the optimal revenue. Further, in the presence of a data
provider, item-type partitioning mechanisms---a more general class of
mechanisms which divide item-types into disjoint groups and offer prices for
each group---still cannot achieve within a $\log \log$ factor of the optimal
revenue.
",katrina ligett,,2018.0,,arXiv,Cai2018,True,,arXiv,Not available,Third-Party Data Providers Ruin Simple Mechanisms,1c192bb1573a1c2272ea1c9471439800,http://arxiv.org/abs/1802.07407v1
16416," This paper studies the revenue of simple mechanisms in settings where a
third-party data provider is present. When no data provider is present, it is
known that simple mechanisms achieve a constant fraction of the revenue of
optimal mechanisms. The results in this paper demonstrate that this is no
longer true in the presence of a third-party data provider who can provide the
bidder with a signal that is correlated with the item type. Specifically, we
show that even with a single seller, a single bidder, and a single item of
uncertain type for sale, pricing each item-type separately (the analog of item
pricing for multi-item auctions) and bundling all item-types under a single
price (the analog of grand bundling) can both simultaneously be a logarithmic
factor worse than the optimal revenue. Further, in the presence of a data
provider, item-type partitioning mechanisms---a more general class of
mechanisms which divide item-types into disjoint groups and offer prices for
each group---still cannot achieve within a $\log \log$ factor of the optimal
revenue.
",adam wierman,,2018.0,,arXiv,Cai2018,True,,arXiv,Not available,Third-Party Data Providers Ruin Simple Mechanisms,1c192bb1573a1c2272ea1c9471439800,http://arxiv.org/abs/1802.07407v1
16417," This paper studies the revenue of simple mechanisms in settings where a
third-party data provider is present. When no data provider is present, it is
known that simple mechanisms achieve a constant fraction of the revenue of
optimal mechanisms. The results in this paper demonstrate that this is no
longer true in the presence of a third-party data provider who can provide the
bidder with a signal that is correlated with the item type. Specifically, we
show that even with a single seller, a single bidder, and a single item of
uncertain type for sale, pricing each item-type separately (the analog of item
pricing for multi-item auctions) and bundling all item-types under a single
price (the analog of grand bundling) can both simultaneously be a logarithmic
factor worse than the optimal revenue. Further, in the presence of a data
provider, item-type partitioning mechanisms---a more general class of
mechanisms which divide item-types into disjoint groups and offer prices for
each group---still cannot achieve within a $\log \log$ factor of the optimal
revenue.
",juba ziani,,2018.0,,arXiv,Cai2018,True,,arXiv,Not available,Third-Party Data Providers Ruin Simple Mechanisms,1c192bb1573a1c2272ea1c9471439800,http://arxiv.org/abs/1802.07407v1
16418," Modern organizations (e.g., hospitals, social networks, government agencies)
rely heavily on audit to detect and punish insiders who inappropriately access
and disclose confidential information. Recent work on audit games models the
strategic interaction between an auditor with a single audit resource and
auditees as a Stackelberg game, augmenting associated well-studied security
games with a configurable punishment parameter. We significantly generalize
this audit game model to account for multiple audit resources where each
resource is restricted to audit a subset of all potential violations, thus
enabling application to practical auditing scenarios. We provide an FPTAS that
computes an approximately optimal solution to the resulting non-convex
optimization problem. The main technical novelty is in the design and
correctness proof of an optimization transformation that enables the
construction of this FPTAS. In addition, we experimentally demonstrate that
this transformation significantly speeds up computation of solutions for a
class of audit games and security games.
",ariel procaccia,,2014.0,,arXiv,Blocki2014,True,,arXiv,Not available,Audit Games with Multiple Defender Resources,08a641620dd57ba419821d6e8f9ce5bf,http://arxiv.org/abs/1409.4503v3
16419," We study the problem of exchange when 1) agents are endowed with
heterogeneous indivisible objects, and 2) there is no money. In general, no
rule satisfies the three central properties Pareto-efficiency, individual
rationality, and strategy-proofness \cite{Sonmez1999}. Recently, it was shown
that Top Trading Cycles is $\NP$-hard to manipulate \cite{FujitaEA2015}, a
relaxation of strategy-proofness. However, parameterized complexity is a more
appropriate framework for this and other economic settings. Certain aspects of
the problem - number of objects each agent brings to the table, goods up for
auction, candidates in an election \cite{consandlang2007}, legislative figures
to influence \cite{christian2007complexity} - may face natural bounds or be
fixed as the problem grows. We take a parameterized complexity approach to
indivisible goods exchange for the first time. Our results represent good and
bad news for TTC. When the size of the endowments $k$ is a fixed constant, we
show that the computational task of manipulating TTC can be performed in
polynomial time. On the other hand, we show that this parameterized problem is
$\W[1]$-hard, and therefore unlikely to be \emph{fixed parameter tractable}.
",william phan,,2018.0,,arXiv,Phan2018,True,,arXiv,Not available,On the parameterized complexity of manipulating Top Trading Cycles,142e831e5816639fd159dd0eaf8123ea,http://arxiv.org/abs/1803.02409v1
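For context on the rule being manipulated: the classic single-endowment Top Trading Cycles algorithm (the Shapley-Scarf housing market, where agent i owns object i) can be sketched as below. Note this is the textbook one-object-per-agent case, not the paper's size-$k$ endowment setting.

```python
def top_trading_cycles(prefs):
    """Single-endowment Top Trading Cycles (Shapley-Scarf housing market).

    prefs[i] is agent i's preference list over objects (most preferred first);
    agent i initially owns object i. Returns a dict agent -> assigned object.
    """
    remaining = set(prefs.keys())
    assignment = {}
    while remaining:
        # each remaining agent points at the owner of their best remaining object
        points_to = {i: next(o for o in prefs[i] if o in remaining) for i in remaining}
        # walk the pointing graph from any agent until a cycle closes
        seen, i = {}, next(iter(remaining))
        while i not in seen:
            seen[i] = len(seen)
            i = points_to[i]
        # agents visited at or after the revisited node form the cycle; trade along it
        cycle = [a for a in seen if seen[a] >= seen[i]]
        for a in cycle:
            assignment[a] = points_to[a]
            remaining.discard(a)
    return assignment
```

Each round removes at least one cycle, so the rule runs in polynomial time; the paper's hardness results concern an agent misreporting `prefs` to its advantage when endowments have size $k$.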
16420," We study the problem of exchange when 1) agents are endowed with
heterogeneous indivisible objects, and 2) there is no money. In general, no
rule satisfies the three central properties Pareto-efficiency, individual
rationality, and strategy-proofness \cite{Sonmez1999}. Recently, it was shown
that Top Trading Cycles is $\NP$-hard to manipulate \cite{FujitaEA2015}, a
relaxation of strategy-proofness. However, parameterized complexity is a more
appropriate framework for this and other economic settings. Certain aspects of
the problem - number of objects each agent brings to the table, goods up for
auction, candidates in an election \cite{consandlang2007}, legislative figures
to influence \cite{christian2007complexity} - may face natural bounds or be
fixed as the problem grows. We take a parameterized complexity approach to
indivisible goods exchange for the first time. Our results represent good and
bad news for TTC. When the size of the endowments $k$ is a fixed constant, we
show that the computational task of manipulating TTC can be performed in
polynomial time. On the other hand, we show that this parameterized problem is
$\W[1]$-hard, and therefore unlikely to be \emph{fixed parameter tractable}.
",christopher purcell,,2018.0,,arXiv,Phan2018,True,,arXiv,Not available,On the parameterized complexity of manipulating Top Trading Cycles,142e831e5816639fd159dd0eaf8123ea,http://arxiv.org/abs/1803.02409v1
16421," We study the performance of anonymous posted-price selling mechanisms for a
standard Bayesian auction setting, where $n$ bidders have i.i.d. valuations for
a single item. We show that for the natural class of Monotone Hazard Rate (MHR)
distributions, offering the same, take-it-or-leave-it price to all bidders can
achieve an (asymptotically) optimal revenue. In particular, the approximation
ratio is shown to be $1+O(\ln \ln n / \ln n)$, matched by a tight lower bound
for the case of exponential distributions. This improves upon the previously
best-known upper bound of $e/(e-1)\approx 1.58$ for the slightly more general
class of regular distributions. In the worst case (over $n$), we still show a
global upper bound of $1.35$. We give a simple, closed-form description of our
prices which, interestingly enough, relies only on minimal knowledge of the
prior distribution, namely just the expectation of its second-highest order
statistic.
",yiannis giannakopoulos,,2018.0,,arXiv,Giannakopoulos2018,True,,arXiv,Not available,Optimal Pricing For MHR Distributions,08abd963a1ea7211818b28a56c712626,http://arxiv.org/abs/1810.00800v1
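The abstract above notes that the posted price depends only on the expectation of the prior's second-highest order statistic. As an illustrative sketch (the estimator and names are hypothetical, not the paper's closed-form price), that single scalar can be estimated by simulation; for Exp(1) values it is known to equal $H_n - 1$, a useful sanity check:

```python
import random

def second_highest_expectation(draw, n, trials=20000, seed=0):
    """Monte Carlo estimate of E[second-highest of n i.i.d. draws from `draw`]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        vals = sorted(draw(rng) for _ in range(n))
        total += vals[-2]  # second-highest order statistic
    return total / trials

# For n i.i.d. Exp(1) values, E[second-highest] = H_n - 1 (harmonic number);
# an anonymous posted price would be derived from this one scalar.
n = 10
est = second_highest_expectation(lambda rng: rng.expovariate(1.0), n)
exact = sum(1.0 / k for k in range(2, n + 1))  # H_10 - 1, about 1.929
```

The point of the paper's result is that such minimal knowledge of the prior already suffices for asymptotically optimal revenue under MHR distributions.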
16422," We study the performance of anonymous posted-price selling mechanisms for a
standard Bayesian auction setting, where $n$ bidders have i.i.d. valuations for
a single item. We show that for the natural class of Monotone Hazard Rate (MHR)
distributions, offering the same, take-it-or-leave-it price to all bidders can
achieve an (asymptotically) optimal revenue. In particular, the approximation
ratio is shown to be $1+O(\ln \ln n / \ln n)$, matched by a tight lower bound
for the case of exponential distributions. This improves upon the previously
best-known upper bound of $e/(e-1)\approx 1.58$ for the slightly more general
class of regular distributions. In the worst case (over $n$), we still show a
global upper bound of $1.35$. We give a simple, closed-form description of our
prices which, interestingly enough, relies only on minimal knowledge of the
prior distribution, namely just the expectation of its second-highest order
statistic.
",keyu zhu,,2018.0,,arXiv,Giannakopoulos2018,True,,arXiv,Not available,Optimal Pricing For MHR Distributions,08abd963a1ea7211818b28a56c712626,http://arxiv.org/abs/1810.00800v1
16423," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them fail to consider the two techniques jointly, and the
selfishness of cloudlets and access points (APs) is ignored. Inspired by the
group-buying mechanism, this paper proposes three-stage auction schemes that
combine cloudlet placement and resource assignment to improve social welfare
subject to the economic properties. We first divide all MUs into small groups
according to their associated APs. Then the MUs in the same group can trade
with cloudlets in a group-buying manner through the APs. Finally, the MUs pay
for the cloudlets if they are the winners in the auction scheme. We prove that
our auction schemes run in polynomial time, and we prove their economic
properties in theory. For performance comparison, we compare the proposed
schemes with HAF, a centralized cloudlet placement scheme without auction.
Numerical results confirm the correctness and efficiency of the proposed
schemes.
",gangqiang zhou,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16424," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them fail to consider the two techniques jointly, and the
selfishness of cloudlets and access points (APs) is ignored. Inspired by the
group-buying mechanism, this paper proposes three-stage auction schemes that
combine cloudlet placement and resource assignment to improve social welfare
subject to the economic properties. We first divide all MUs into small groups
according to their associated APs. Then the MUs in the same group can trade
with cloudlets in a group-buying manner through the APs. Finally, the MUs pay
for the cloudlets if they are the winners in the auction scheme. We prove that
our auction schemes run in polynomial time, and we prove their economic
properties in theory. For performance comparison, we compare the proposed
schemes with HAF, a centralized cloudlet placement scheme without auction.
Numerical results confirm the correctness and efficiency of the proposed
schemes.
",jigang wu,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16425," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them fail to consider the two techniques jointly, and the
selfishness of cloudlets and access points (APs) is ignored. Inspired by the
group-buying mechanism, this paper proposes three-stage auction schemes that
combine cloudlet placement and resource assignment to improve social welfare
subject to the economic properties. We first divide all MUs into small groups
according to their associated APs. Then the MUs in the same group can trade
with cloudlets in a group-buying manner through the APs. Finally, the MUs pay
for the cloudlets if they are the winners in the auction scheme. We prove that
our auction schemes run in polynomial time, and we prove their economic
properties in theory. For performance comparison, we compare the proposed
schemes with HAF, a centralized cloudlet placement scheme without auction.
Numerical results confirm the correctness and efficiency of the proposed
schemes.
",long chen,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16426," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them fail to consider the two techniques jointly, and the
selfishness of cloudlets and access points (APs) is ignored. Inspired by the
group-buying mechanism, this paper proposes three-stage auction schemes that
combine cloudlet placement and resource assignment to improve social welfare
subject to the economic properties. We first divide all MUs into small groups
according to their associated APs. Then the MUs in the same group can trade
with cloudlets in a group-buying manner through the APs. Finally, the MUs pay
for the cloudlets if they are the winners in the auction scheme. We prove that
our auction schemes run in polynomial time, and we prove their economic
properties in theory. For performance comparison, we compare the proposed
schemes with HAF, a centralized cloudlet placement scheme without auction.
Numerical results confirm the correctness and efficiency of the proposed
schemes.
",guiyuan jiang,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16427," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them fail to consider the two techniques jointly, and the
selfishness of cloudlets and access points (APs) is ignored. Inspired by the
group-buying mechanism, this paper proposes three-stage auction schemes that
combine cloudlet placement and resource assignment to improve social welfare
subject to the economic properties. We first divide all MUs into small groups
according to their associated APs. Then the MUs in the same group can trade
with cloudlets in a group-buying manner through the APs. Finally, the MUs pay
for the cloudlets if they are the winners in the auction scheme. We prove that
our auction schemes run in polynomial time, and we prove their economic
properties in theory. For performance comparison, we compare the proposed
schemes with HAF, a centralized cloudlet placement scheme without auction.
Numerical results confirm the correctness and efficiency of the proposed
schemes.
",siew-kei lam,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16428," Game-theoretic analyses of distributed and peer-to-peer systems typically use
the Nash equilibrium solution concept, but this explicitly excludes the
possibility of strategic behavior involving more than one agent. We examine the
effects of two types of strategic behavior involving more than one agent,
sybils and collusion, in the context of scrip systems where agents provide each
other with service in exchange for scrip. Sybils make an agent more likely to
be chosen to provide service, which generally makes it harder for agents
without sybils to earn money and decreases social welfare. Surprisingly, in
certain circumstances it is possible for sybils to make all agents better off.
While collusion is generally bad, in the context of scrip systems it actually
tends to make all agents better off, not merely those who collude. These
results also provide insight into the effects of allowing agents to advertise
and loan money. While many extensions of Nash equilibrium have been proposed
that address collusion and other issues relevant to distributed and
peer-to-peer systems, our results show that none of them adequately address the
issues raised by sybils and collusion in scrip systems.
",ian kash,,2009.0,10.1007/978-3-642-03821-1_4,arXiv,Kash2009,True,,arXiv,Not available,Manipulating Scrip Systems: Sybils and Collusion,366b56814a12333122c5c8e31f9ba344,http://arxiv.org/abs/0903.2278v1
16429," Modern organizations (e.g., hospitals, social networks, government agencies)
rely heavily on audit to detect and punish insiders who inappropriately access
and disclose confidential information. Recent work on audit games models the
strategic interaction between an auditor with a single audit resource and
auditees as a Stackelberg game, augmenting associated well-studied security
games with a configurable punishment parameter. We significantly generalize
this audit game model to account for multiple audit resources where each
resource is restricted to audit a subset of all potential violations, thus
enabling application to practical auditing scenarios. We provide an FPTAS that
computes an approximately optimal solution to the resulting non-convex
optimization problem. The main technical novelty is in the design and
correctness proof of an optimization transformation that enables the
construction of this FPTAS. In addition, we experimentally demonstrate that
this transformation significantly speeds up computation of solutions for a
class of audit games and security games.
",arunesh sinha,,2014.0,,arXiv,Blocki2014,True,,arXiv,Not available,Audit Games with Multiple Defender Resources,08a641620dd57ba419821d6e8f9ce5bf,http://arxiv.org/abs/1409.4503v3
16430," Game-theoretic analyses of distributed and peer-to-peer systems typically use
the Nash equilibrium solution concept, but this explicitly excludes the
possibility of strategic behavior involving more than one agent. We examine the
effects of two types of strategic behavior involving more than one agent,
sybils and collusion, in the context of scrip systems where agents provide each
other with service in exchange for scrip. Sybils make an agent more likely to
be chosen to provide service, which generally makes it harder for agents
without sybils to earn money and decreases social welfare. Surprisingly, in
certain circumstances it is possible for sybils to make all agents better off.
While collusion is generally bad, in the context of scrip systems it actually
tends to make all agents better off, not merely those who collude. These
results also provide insight into the effects of allowing agents to advertise
and loan money. While many extensions of Nash equilibrium have been proposed
that address collusion and other issues relevant to distributed and
peer-to-peer systems, our results show that none of them adequately address the
issues raised by sybils and collusion in scrip systems.
",eric friedman,,2009.0,10.1007/978-3-642-03821-1_4,arXiv,Kash2009,True,,arXiv,Not available,Manipulating Scrip Systems: Sybils and Collusion,366b56814a12333122c5c8e31f9ba344,http://arxiv.org/abs/0903.2278v1
16431," Game-theoretic analyses of distributed and peer-to-peer systems typically use
the Nash equilibrium solution concept, but this explicitly excludes the
possibility of strategic behavior involving more than one agent. We examine the
effects of two types of strategic behavior involving more than one agent,
sybils and collusion, in the context of scrip systems where agents provide each
other with service in exchange for scrip. Sybils make an agent more likely to
be chosen to provide service, which generally makes it harder for agents
without sybils to earn money and decreases social welfare. Surprisingly, in
certain circumstances it is possible for sybils to make all agents better off.
While collusion is generally bad, in the context of scrip systems it actually
tends to make all agents better off, not merely those who collude. These
results also provide insight into the effects of allowing agents to advertise
and loan money. While many extensions of Nash equilibrium have been proposed
that address collusion and other issues relevant to distributed and
peer-to-peer systems, our results show that none of them adequately address the
issues raised by sybils and collusion in scrip systems.
",joseph halpern,,2009.0,10.1007/978-3-642-03821-1_4,arXiv,Kash2009,True,,arXiv,Not available,Manipulating Scrip Systems: Sybils and Collusion,366b56814a12333122c5c8e31f9ba344,http://arxiv.org/abs/0903.2278v1
16432," In a multi-battle contest, a player repeatedly competes by investing some of
her budget or resources in a component battle, collecting a value if she wins
the battle. There are multiple battles to fight, and the budgets are consumed
over time. The final winner of the overall contest is the one who first reaches
some amount of total value. Examples include R&D races, sports competitions,
elections, and many more. A player must make sound sequential decisions to win
the contest against the dynamic competition from the others over time. We are
interested in how much budget the players would need and what actions they
should take in order to perform well.
We model and study such budget-constrained multi-battle contests in which each
component battle is a first-price or all-pay auction. We focus on analyzing the
2-player budget ratio that guarantees a player's winning (or falling behind by
only a bounded amount of collected value) against the other, omnipotent player.
In the settings considered, we give efficient dynamic programs to find the
optimal budget ratios and the corresponding bidding strategies. Our definition
of the game, the budget constraints, and the emphasis on budget analysis
provide a new perspective and analysis in this context.
",chu-han cheng,,2016.0,,arXiv,Cheng2016,True,,arXiv,Not available,Budget-Constrained Multi-Battle Contests: A New Perspective and Analysis,b29c2704af5a1acdcc1c1920311aba36,http://arxiv.org/abs/1602.04000v1
16433," In a multi-battle contest, a player repeatedly competes by investing some of
her budget or resources in a component battle, collecting a value if she wins
the battle. There are multiple battles to fight, and the budgets are consumed
over time. The final winner of the overall contest is the one who first reaches
some amount of total value. Examples include R&D races, sports competitions,
elections, and many more. A player must make sound sequential decisions to win
the contest against the dynamic competition from the others over time. We are
interested in how much budget the players would need and what actions they
should take in order to perform well.
We model and study such budget-constrained multi-battle contests in which each
component battle is a first-price or all-pay auction. We focus on analyzing the
2-player budget ratio that guarantees a player's winning (or falling behind by
only a bounded amount of collected value) against the other, omnipotent player.
In the settings considered, we give efficient dynamic programs to find the
optimal budget ratios and the corresponding bidding strategies. Our definition
of the game, the budget constraints, and the emphasis on budget analysis
provide a new perspective and analysis in this context.
",po-an chen,,2016.0,,arXiv,Cheng2016,True,,arXiv,Not available,Budget-Constrained Multi-Battle Contests: A New Perspective and Analysis,b29c2704af5a1acdcc1c1920311aba36,http://arxiv.org/abs/1602.04000v1
16434," In a multi-battle contest, a player repeatedly competes by investing some of
her budget or resources in a component battle, collecting a value if she wins
the battle. There are multiple battles to fight, and the budgets are consumed
over time. The final winner of the overall contest is the one who first reaches
some amount of total value. Examples include R&D races, sports competitions,
elections, and many more. A player must make sound sequential decisions to win
the contest against the dynamic competition from the others over time. We are
interested in how much budget the players would need and what actions they
should take in order to perform well.
We model and study such budget-constrained multi-battle contests in which each
component battle is a first-price or all-pay auction. We focus on analyzing the
2-player budget ratio that guarantees a player's winning (or falling behind by
only a bounded amount of collected value) against the other, omnipotent player.
In the settings considered, we give efficient dynamic programs to find the
optimal budget ratios and the corresponding bidding strategies. Our definition
of the game, the budget constraints, and the emphasis on budget analysis
provide a new perspective and analysis in this context.
",wing-kai hon,,2016.0,,arXiv,Cheng2016,True,,arXiv,Not available,Budget-Constrained Multi-Battle Contests: A New Perspective and Analysis,b29c2704af5a1acdcc1c1920311aba36,http://arxiv.org/abs/1602.04000v1
16435," This survey outlines a general and modular theory for proving approximation
guarantees for equilibria of auctions in complex settings. This theory
complements traditional economic techniques, which generally focus on exact and
optimal solutions and are accordingly limited to relatively stylized settings.
We highlight three user-friendly analytical tools: smoothness-type
inequalities, which immediately yield approximation guarantees for many auction
formats of interest in the special case of complete information and
deterministic strategies; extension theorems, which extend such guarantees to
randomized strategies, no-regret learning outcomes, and incomplete-information
settings; and composition theorems, which extend such guarantees from simpler
to more complex auctions. Combining these tools yields tight worst-case
approximation guarantees for the equilibria of many widely-used auction
formats.
",tim roughgarden,,2016.0,,arXiv,Roughgarden2016,True,,arXiv,Not available,The Price of Anarchy in Auctions,b50c6b44abd49fbc9c5119dfbcf531c4,http://arxiv.org/abs/1607.07684v1
16436," This survey outlines a general and modular theory for proving approximation
guarantees for equilibria of auctions in complex settings. This theory
complements traditional economic techniques, which generally focus on exact and
optimal solutions and are accordingly limited to relatively stylized settings.
We highlight three user-friendly analytical tools: smoothness-type
inequalities, which immediately yield approximation guarantees for many auction
formats of interest in the special case of complete information and
deterministic strategies; extension theorems, which extend such guarantees to
randomized strategies, no-regret learning outcomes, and incomplete-information
settings; and composition theorems, which extend such guarantees from simpler
to more complex auctions. Combining these tools yields tight worst-case
approximation guarantees for the equilibria of many widely-used auction
formats.
",vasilis syrgkanis,,2016.0,,arXiv,Roughgarden2016,True,,arXiv,Not available,The Price of Anarchy in Auctions,b50c6b44abd49fbc9c5119dfbcf531c4,http://arxiv.org/abs/1607.07684v1
16437," This survey outlines a general and modular theory for proving approximation
guarantees for equilibria of auctions in complex settings. This theory
complements traditional economic techniques, which generally focus on exact and
optimal solutions and are accordingly limited to relatively stylized settings.
We highlight three user-friendly analytical tools: smoothness-type
inequalities, which immediately yield approximation guarantees for many auction
formats of interest in the special case of complete information and
deterministic strategies; extension theorems, which extend such guarantees to
randomized strategies, no-regret learning outcomes, and incomplete-information
settings; and composition theorems, which extend such guarantees from simpler
to more complex auctions. Combining these tools yields tight worst-case
approximation guarantees for the equilibria of many widely-used auction
formats.
",eva tardos,,2016.0,,arXiv,Roughgarden2016,True,,arXiv,Not available,The Price of Anarchy in Auctions,b50c6b44abd49fbc9c5119dfbcf531c4,http://arxiv.org/abs/1607.07684v1
16438," Auctions are markets with strict regulations governing the information
available to traders in the market and the possible actions they can take.
Since well designed auctions achieve desirable economic outcomes, they have
been widely used in solving real-world optimization problems, and in
structuring stock or futures exchanges. Auctions also provide a very valuable
testing-ground for economic theory, and they play an important role in
computer-based control systems.
Auction mechanism design aims to manipulate the rules of an auction in order
to achieve specific goals. Economists traditionally use mathematical methods,
mainly game theory, to analyze auctions and design new auction forms. However,
due to the high complexity of auctions, the mathematical models are typically
simplified to obtain results, and this makes it difficult to apply results
derived from such models to market environments in the real world. As a result,
researchers are turning to empirical approaches.
This report aims to survey the theoretical and empirical approaches to
designing auction mechanisms and trading strategies, with more weight on
empirical ones, and to build a foundation for further research in the field.
",jinzhong niu,,2009.0,,arXiv,Niu2009,True,,arXiv,Not available,An Investigation Report on Auction Mechanism Design,a361692f9e4b8aaa72b2e9d6c12fe02f,http://arxiv.org/abs/0904.1258v2
16439," Auctions are markets with strict regulations governing the information
available to traders in the market and the possible actions they can take.
Since well designed auctions achieve desirable economic outcomes, they have
been widely used in solving real-world optimization problems, and in
structuring stock or futures exchanges. Auctions also provide a very valuable
testing-ground for economic theory, and they play an important role in
computer-based control systems.
Auction mechanism design aims to manipulate the rules of an auction in order
to achieve specific goals. Economists traditionally use mathematical methods,
mainly game theory, to analyze auctions and design new auction forms. However,
due to the high complexity of auctions, the mathematical models are typically
simplified to obtain results, and this makes it difficult to apply results
derived from such models to market environments in the real world. As a result,
researchers are turning to empirical approaches.
This report aims to survey the theoretical and empirical approaches to
designing auction mechanisms and trading strategies, with more weight on
empirical ones, and to build a foundation for further research in the field.
",simon parsons,,2009.0,,arXiv,Niu2009,True,,arXiv,Not available,An Investigation Report on Auction Mechanism Design,a361692f9e4b8aaa72b2e9d6c12fe02f,http://arxiv.org/abs/0904.1258v2
16440," Cooperative interval games are a generalized model of cooperative games in
which the worth of every coalition corresponds to a closed interval
representing the possible outcomes of its cooperation. Selections are all
possible outcomes of the interval game with no additional uncertainty.
We introduce new selection-based classes of interval games and prove their
characterization theorems and relations to existing classes based on the
interval weakly better operator. We show new results regarding the core and
imputations and examine a problem of equivalence for two different versions of
the core, the main stability solution of cooperative games. Finally, we
introduce the definition of strong imputation and strong core as universal
solution concepts of interval games.
",jan bok,,2014.0,,arXiv,Bok2014,True,,arXiv,Not available,Selection-based Approach to Cooperative Interval Games,e392766459baf204092a2984f4707a4f,http://arxiv.org/abs/1410.3877v6
16441," This paper develops a general approach, rooted in statistical learning
theory, to learning an approximately revenue-maximizing auction from data. We
introduce $t$-level auctions to interpolate between simple auctions, such as
welfare maximization with reserve prices, and optimal auctions, thereby
balancing the competing demands of expressivity and simplicity. We prove that
such auctions have small representation error, in the sense that for every
product distribution $F$ over bidders' valuations, there exists a $t$-level
auction with small $t$ and expected revenue close to optimal. We show that the
set of $t$-level auctions has modest pseudo-dimension (for polynomial $t$) and
therefore leads to small learning error. One consequence of our results is
that, in arbitrary single-parameter settings, one can learn a mechanism with
expected revenue arbitrarily close to optimal from a polynomial number of
samples.
",jamie morgenstern,,2015.0,,arXiv,Morgenstern2015,True,,arXiv,Not available,The Pseudo-Dimension of Near-Optimal Auctions,358ae158c68af120be262f1f2ba3535b,http://arxiv.org/abs/1506.03684v1
16442," This paper develops a general approach, rooted in statistical learning
theory, to learning an approximately revenue-maximizing auction from data. We
introduce $t$-level auctions to interpolate between simple auctions, such as
welfare maximization with reserve prices, and optimal auctions, thereby
balancing the competing demands of expressivity and simplicity. We prove that
such auctions have small representation error, in the sense that for every
product distribution $F$ over bidders' valuations, there exists a $t$-level
auction with small $t$ and expected revenue close to optimal. We show that the
set of $t$-level auctions has modest pseudo-dimension (for polynomial $t$) and
therefore leads to small learning error. One consequence of our results is
that, in arbitrary single-parameter settings, one can learn a mechanism with
expected revenue arbitrarily close to optimal from a polynomial number of
samples.
",tim roughgarden,,2015.0,,arXiv,Morgenstern2015,True,,arXiv,Not available,The Pseudo-Dimension of Near-Optimal Auctions,358ae158c68af120be262f1f2ba3535b,http://arxiv.org/abs/1506.03684v1
16449," Finding the optimal assignment in budget-constrained auctions is a
combinatorial optimization problem with many important applications, a notable
example being the sale of advertisement space by search engines (in this
context the problem is often referred to as the off-line AdWords problem).
Based on the cavity method of statistical mechanics, we introduce a message
passing algorithm that is capable of solving efficiently random instances of
the problem extracted from a natural distribution, and we derive from its
properties the phase diagram of the problem. As the control parameter (average
value of the budgets) is varied, we find two phase transitions delimiting a
region in which long-range correlations arise.
",f. altarelli,,2009.0,10.1088/1742-5468/2009/07/P07002,JSTAT 2009;2009:P07002 (27pp),Altarelli2009,True,,arXiv,Not available,Statistical mechanics of budget-constrained auctions,e36e3a8bdc0d960fa43fa9a96483d17c,http://arxiv.org/abs/0903.2429v2
16450," Finding the optimal assignment in budget-constrained auctions is a
combinatorial optimization problem with many important applications, a notable
example being the sale of advertisement space by search engines (in this
context the problem is often referred to as the off-line AdWords problem).
Based on the cavity method of statistical mechanics, we introduce a message
passing algorithm that is capable of solving efficiently random instances of
the problem extracted from a natural distribution, and we derive from its
properties the phase diagram of the problem. As the control parameter (average
value of the budgets) is varied, we find two phase transitions delimiting a
region in which long-range correlations arise.
",a. braunstein,,2009.0,10.1088/1742-5468/2009/07/P07002,JSTAT 2009;2009:P07002 (27pp),Altarelli2009,True,,arXiv,Not available,Statistical mechanics of budget-constrained auctions,e36e3a8bdc0d960fa43fa9a96483d17c,http://arxiv.org/abs/0903.2429v2
16451," We study a game for recognising formal languages, in which two players with
imperfect information need to coordinate on a common decision, given private
input words correlated by a finite graph. The players have a joint objective to
avoid an inadmissible decision, in spite of the uncertainty induced by the
input.
We show that the acceptor model based on consensus games characterises
context-sensitive languages. Further, we describe the expressiveness of these
games in terms of iterated synchronous transductions and identify a subclass
that characterises context-free languages.
",marie bogaard,,2015.0,,arXiv,Berwanger2015,True,,arXiv,Not available,Consensus Game Acceptors and Iterated Transductions,fb66475077eaf68af0c9cd96dbe7d038,http://arxiv.org/abs/1501.07131v3
16452," Cooperative interval games are a generalized model of cooperative games in
which the worth of every coalition corresponds to a closed interval
representing the possible outcomes of its cooperation. Selections are all
possible outcomes of the interval game with no additional uncertainty.
We introduce new selection-based classes of interval games and prove their
characterization theorems and relations to existing classes based on the
interval weakly better operator. We show new results regarding the core and
imputations and examine a problem of equivalence for two different versions of
the core, the main stability solution of cooperative games. Finally, we
introduce the definition of strong imputation and strong core as universal
solution concepts of interval games.
",milan hladik,,2014.0,,arXiv,Bok2014,True,,arXiv,Not available,Selection-based Approach to Cooperative Interval Games,e392766459baf204092a2984f4707a4f,http://arxiv.org/abs/1410.3877v6
16453," Finding the optimal assignment in budget-constrained auctions is a
combinatorial optimization problem with many important applications, a notable
example being the sale of advertisement space by search engines (in this
context the problem is often referred to as the off-line AdWords problem).
Based on the cavity method of statistical mechanics, we introduce a message
passing algorithm that is capable of solving efficiently random instances of
the problem extracted from a natural distribution, and we derive from its
properties the phase diagram of the problem. As the control parameter (average
value of the budgets) is varied, we find two phase transitions delimiting a
region in which long-range correlations arise.
",j. realpe-gomez,,2009.0,10.1088/1742-5468/2009/07/P07002,JSTAT 2009;2009:P07002 (27pp),Altarelli2009,True,,arXiv,Not available,Statistical mechanics of budget-constrained auctions,e36e3a8bdc0d960fa43fa9a96483d17c,http://arxiv.org/abs/0903.2429v2
16454," Finding the optimal assignment in budget-constrained auctions is a
combinatorial optimization problem with many important applications, a notable
example being the sale of advertisement space by search engines (in this
context the problem is often referred to as the off-line AdWords problem).
Based on the cavity method of statistical mechanics, we introduce a message
passing algorithm that is capable of solving efficiently random instances of
the problem extracted from a natural distribution, and we derive from its
properties the phase diagram of the problem. As the control parameter (average
value of the budgets) is varied, we find two phase transitions delimiting a
region in which long-range correlations arise.
",r. zecchina,,2009.0,10.1088/1742-5468/2009/07/P07002,JSTAT 2009;2009:P07002 (27pp),Altarelli2009,True,,arXiv,Not available,Statistical mechanics of budget-constrained auctions,e36e3a8bdc0d960fa43fa9a96483d17c,http://arxiv.org/abs/0903.2429v2
16455," We consider the problem of designing a revenue-maximizing auction for a
single item, when the values of the bidders are drawn from a correlated
distribution. We observe that there exists an algorithm that finds the optimal
randomized mechanism that runs in time polynomial in the size of the support.
We leverage this result to show that in the oracle model introduced by Ronen
and Saberi [FOCS'02], there exists a polynomial time truthful in expectation
mechanism that provides a $(\frac 3 2+\epsilon)$-approximation to the revenue
achievable by an optimal truthful-in-expectation mechanism, and a polynomial
time deterministic truthful mechanism that guarantees $\frac 5 3$ approximation
to the revenue achievable by an optimal deterministic truthful mechanism.
We show that the $\frac 5 3$-approximation mechanism provides the same
approximation ratio also with respect to the optimal truthful-in-expectation
mechanism. This shows that the performance gap between truthful-in-expectation
and deterministic mechanisms is relatively small. En route, we solve an open
question of Mehta and Vazirani [EC'04].
Finally, we extend some of our results to the multi-item case, and show how
to compute the optimal truthful-in-expectation mechanisms for bidders with more
complex valuations.
",shahar dobzinski,,2010.0,,arXiv,Dobzinski2010,True,,arXiv,Not available,Optimal Auctions with Correlated Bidders are Easy,e74f1707fccc591302763ca27e1ad4eb,http://arxiv.org/abs/1011.2413v1
16456," We consider the problem of designing a revenue-maximizing auction for a
single item, when the values of the bidders are drawn from a correlated
distribution. We observe that there exists an algorithm that finds the optimal
randomized mechanism that runs in time polynomial in the size of the support.
We leverage this result to show that in the oracle model introduced by Ronen
and Saberi [FOCS'02], there exists a polynomial time truthful in expectation
mechanism that provides a $(\frac 3 2+\epsilon)$-approximation to the revenue
achievable by an optimal truthful-in-expectation mechanism, and a polynomial
time deterministic truthful mechanism that guarantees $\frac 5 3$ approximation
to the revenue achievable by an optimal deterministic truthful mechanism.
We show that the $\frac 5 3$-approximation mechanism provides the same
approximation ratio also with respect to the optimal truthful-in-expectation
mechanism. This shows that the performance gap between truthful-in-expectation
and deterministic mechanisms is relatively small. En route, we solve an open
question of Mehta and Vazirani [EC'04].
Finally, we extend some of our results to the multi-item case, and show how
to compute the optimal truthful-in-expectation mechanisms for bidders with more
complex valuations.
",hu fu,,2010.0,,arXiv,Dobzinski2010,True,,arXiv,Not available,Optimal Auctions with Correlated Bidders are Easy,e74f1707fccc591302763ca27e1ad4eb,http://arxiv.org/abs/1011.2413v1
16457," We consider the problem of designing a revenue-maximizing auction for a
single item, when the values of the bidders are drawn from a correlated
distribution. We observe that there exists an algorithm that finds the optimal
randomized mechanism that runs in time polynomial in the size of the support.
We leverage this result to show that in the oracle model introduced by Ronen
and Saberi [FOCS'02], there exists a polynomial time truthful in expectation
mechanism that provides a $(\frac 3 2+\epsilon)$-approximation to the revenue
achievable by an optimal truthful-in-expectation mechanism, and a polynomial
time deterministic truthful mechanism that guarantees $\frac 5 3$ approximation
to the revenue achievable by an optimal deterministic truthful mechanism.
We show that the $\frac 5 3$-approximation mechanism provides the same
approximation ratio also with respect to the optimal truthful-in-expectation
mechanism. This shows that the performance gap between truthful-in-expectation
and deterministic mechanisms is relatively small. En route, we solve an open
question of Mehta and Vazirani [EC'04].
Finally, we extend some of our results to the multi-item case, and show how
to compute the optimal truthful-in-expectation mechanisms for bidders with more
complex valuations.
",robert kleinberg,,2010.0,,arXiv,Dobzinski2010,True,,arXiv,Not available,Optimal Auctions with Correlated Bidders are Easy,e74f1707fccc591302763ca27e1ad4eb,http://arxiv.org/abs/1011.2413v1
16462," When agents with independent priors bid for a single item, Myerson's optimal
auction maximizes expected revenue, whereas Vickrey's second-price auction
optimizes social welfare. We address the natural question of trade-offs between
the two criteria, that is, auctions that optimize, say, revenue under the
constraint that the welfare is above a given level. If one allows for
randomized mechanisms, it is easy to see that there are polynomial-time
mechanisms that achieve any point in the trade-off (the Pareto curve) between
revenue and welfare. We investigate whether one can achieve the same guarantees
using deterministic mechanisms. We provide a negative answer to this question
by showing that this is a (weakly) NP-hard problem. On the positive side, we
provide polynomial-time deterministic mechanisms that approximate with
arbitrary precision any point of the trade-off between these two fundamental
objectives for the case of two bidders, even when the valuations are correlated
arbitrarily. The major problem left open by our work is whether there is such
an algorithm for three or more bidders with independent valuation
distributions.
",ilias diakonikolas,,2012.0,,arXiv,Diakonikolas2012,True,,arXiv,Not available,Efficiency-Revenue Trade-offs in Auctions,f070a4a2ac7be85aa856f850e6935527,http://arxiv.org/abs/1205.3077v1
16463," We put forward a new model of congestion games where agents have uncertainty
over the routes used by other agents. We take a non-probabilistic approach,
assuming that each agent knows that the number of agents using an edge is
within a certain range. Given this uncertainty, we model agents who either
minimize their worst-case cost (WCC) or their worst-case regret (WCR), and
study implications on equilibrium existence, convergence through adaptive play,
and efficiency. Under the WCC behavior the game reduces to a modified
congestion game, and welfare improves when agents have moderate uncertainty.
Under WCR behavior the game is not, in general, a congestion game, but we show
convergence and efficiency bounds for a simple class of games.
",reshef meir,,2014.0,,arXiv,Meir2014,True,,arXiv,Not available,Congestion Games with Distance-Based Strict Uncertainty,c674bb600a590efa17de65ffa1ac0fc8,http://arxiv.org/abs/1411.4943v2
16464," When agents with independent priors bid for a single item, Myerson's optimal
auction maximizes expected revenue, whereas Vickrey's second-price auction
optimizes social welfare. We address the natural question of trade-offs between
the two criteria, that is, auctions that optimize, say, revenue under the
constraint that the welfare is above a given level. If one allows for
randomized mechanisms, it is easy to see that there are polynomial-time
mechanisms that achieve any point in the trade-off (the Pareto curve) between
revenue and welfare. We investigate whether one can achieve the same guarantees
using deterministic mechanisms. We provide a negative answer to this question
by showing that this is a (weakly) NP-hard problem. On the positive side, we
provide polynomial-time deterministic mechanisms that approximate with
arbitrary precision any point of the trade-off between these two fundamental
objectives for the case of two bidders, even when the valuations are correlated
arbitrarily. The major problem left open by our work is whether there is such
an algorithm for three or more bidders with independent valuation
distributions.
",christos papadimitriou,,2012.0,,arXiv,Diakonikolas2012,True,,arXiv,Not available,Efficiency-Revenue Trade-offs in Auctions,f070a4a2ac7be85aa856f850e6935527,http://arxiv.org/abs/1205.3077v1
16465," When agents with independent priors bid for a single item, Myerson's optimal
auction maximizes expected revenue, whereas Vickrey's second-price auction
optimizes social welfare. We address the natural question of trade-offs between
the two criteria, that is, auctions that optimize, say, revenue under the
constraint that the welfare is above a given level. If one allows for
randomized mechanisms, it is easy to see that there are polynomial-time
mechanisms that achieve any point in the trade-off (the Pareto curve) between
revenue and welfare. We investigate whether one can achieve the same guarantees
using deterministic mechanisms. We provide a negative answer to this question
by showing that this is a (weakly) NP-hard problem. On the positive side, we
provide polynomial-time deterministic mechanisms that approximate with
arbitrary precision any point of the trade-off between these two fundamental
objectives for the case of two bidders, even when the valuations are correlated
arbitrarily. The major problem left open by our work is whether there is such
an algorithm for three or more bidders with independent valuation
distributions.
",george pierrakos,,2012.0,,arXiv,Diakonikolas2012,True,,arXiv,Not available,Efficiency-Revenue Trade-offs in Auctions,f070a4a2ac7be85aa856f850e6935527,http://arxiv.org/abs/1205.3077v1
16466," When agents with independent priors bid for a single item, Myerson's optimal
auction maximizes expected revenue, whereas Vickrey's second-price auction
optimizes social welfare. We address the natural question of trade-offs between
the two criteria, that is, auctions that optimize, say, revenue under the
constraint that the welfare is above a given level. If one allows for
randomized mechanisms, it is easy to see that there are polynomial-time
mechanisms that achieve any point in the trade-off (the Pareto curve) between
revenue and welfare. We investigate whether one can achieve the same guarantees
using deterministic mechanisms. We provide a negative answer to this question
by showing that this is a (weakly) NP-hard problem. On the positive side, we
provide polynomial-time deterministic mechanisms that approximate with
arbitrary precision any point of the trade-off between these two fundamental
objectives for the case of two bidders, even when the valuations are correlated
arbitrarily. The major problem left open by our work is whether there is such
an algorithm for three or more bidders with independent valuation
distributions.
",yaron singer,,2012.0,,arXiv,Diakonikolas2012,True,,arXiv,Not available,Efficiency-Revenue Trade-offs in Auctions,f070a4a2ac7be85aa856f850e6935527,http://arxiv.org/abs/1205.3077v1
16467," We provide a Polynomial Time Approximation Scheme (PTAS) for the Bayesian
optimal multi-item multi-bidder auction problem under two conditions. First,
bidders are independent, have additive valuations and are from the same
population. Second, every bidder's value distributions of items are independent
but not necessarily identical monotone hazard rate (MHR) distributions. For
non-i.i.d. bidders, we also provide a PTAS when the number of bidders is small.
Prior to our work, even for a single bidder, only constant factor
approximations are known.
Another appealing feature of our mechanism is the simple allocation rule.
Indeed, the mechanism we use is either the second-price auction with reserve
price on every item individually, or VCG allocation with a few outlying items
that require additional treatment. It is surprising that such simple
allocation rules suffice to obtain nearly optimal revenue.
",yang cai,,2012.0,,arXiv,Cai2012,True,,arXiv,Not available,Simple and Nearly Optimal Multi-Item Auctions,23a06439d9a36630ce3595f638453e8a,http://arxiv.org/abs/1210.3560v2
16468," We provide a Polynomial Time Approximation Scheme (PTAS) for the Bayesian
optimal multi-item multi-bidder auction problem under two conditions. First,
bidders are independent, have additive valuations and are from the same
population. Second, every bidder's value distributions of items are independent
but not necessarily identical monotone hazard rate (MHR) distributions. For
non-i.i.d. bidders, we also provide a PTAS when the number of bidders is small.
Prior to our work, even for a single bidder, only constant-factor
approximations were known.
Another appealing feature of our mechanism is the simple allocation rule.
Indeed, the mechanism we use is either the second-price auction with reserve
price on every item individually, or VCG allocation with a few outlying items
that require additional treatment. It is surprising that such simple
allocation rules suffice to obtain nearly optimal revenue.
",zhiyi huang,,2012.0,,arXiv,Cai2012,True,,arXiv,Not available,Simple and Nearly Optimal Multi-Item Auctions,23a06439d9a36630ce3595f638453e8a,http://arxiv.org/abs/1210.3560v2
16469," Auctions have a long history, having been recorded as early as 500 B.C.
Nowadays, electronic auctions have been a great success and are increasingly
used. Many cryptographic protocols have been proposed to address the various
security requirements of these electronic transactions, in particular to ensure
privacy. Brandt developed a protocol that computes the winner using homomorphic
operations on a distributed ElGamal encryption of the bids. He claimed that it
ensures full privacy of the bidders, i.e. no information apart from the winner
and the winning price is leaked. We first show that this protocol -- when using
malleable interactive zero-knowledge proofs -- is vulnerable to attacks by
dishonest bidders. Such bidders can manipulate the publicly available data in a
way that allows the seller to deduce all participants' bids. Additionally we
discuss some issues with verifiability as well as attacks on non-repudiation,
fairness and the privacy of individual bidders exploiting authentication
problems.
",jannik dreier,,2012.0,,arXiv,Dreier2012,True,,arXiv,Not available,Brandt's Fully Private Auction Protocol Revisited,abaeaaf0117ab90b6c3758da84138210,http://arxiv.org/abs/1210.6780v3
16470," Auctions have a long history, having been recorded as early as 500 B.C.
Nowadays, electronic auctions have been a great success and are increasingly
used. Many cryptographic protocols have been proposed to address the various
security requirements of these electronic transactions, in particular to ensure
privacy. Brandt developed a protocol that computes the winner using homomorphic
operations on a distributed ElGamal encryption of the bids. He claimed that it
ensures full privacy of the bidders, i.e. no information apart from the winner
and the winning price is leaked. We first show that this protocol -- when using
malleable interactive zero-knowledge proofs -- is vulnerable to attacks by
dishonest bidders. Such bidders can manipulate the publicly available data in a
way that allows the seller to deduce all participants' bids. Additionally we
discuss some issues with verifiability as well as attacks on non-repudiation,
fairness and the privacy of individual bidders exploiting authentication
problems.
",jean-guillaume dumas,,2012.0,,arXiv,Dreier2012,True,,arXiv,Not available,Brandt's Fully Private Auction Protocol Revisited,abaeaaf0117ab90b6c3758da84138210,http://arxiv.org/abs/1210.6780v3
16471," Auctions have a long history, having been recorded as early as 500 B.C.
Nowadays, electronic auctions have been a great success and are increasingly
used. Many cryptographic protocols have been proposed to address the various
security requirements of these electronic transactions, in particular to ensure
privacy. Brandt developed a protocol that computes the winner using homomorphic
operations on a distributed ElGamal encryption of the bids. He claimed that it
ensures full privacy of the bidders, i.e. no information apart from the winner
and the winning price is leaked. We first show that this protocol -- when using
malleable interactive zero-knowledge proofs -- is vulnerable to attacks by
dishonest bidders. Such bidders can manipulate the publicly available data in a
way that allows the seller to deduce all participants' bids. Additionally we
discuss some issues with verifiability as well as attacks on non-repudiation,
fairness and the privacy of individual bidders exploiting authentication
problems.
",pascal lafourcade,,2012.0,,arXiv,Dreier2012,True,,arXiv,Not available,Brandt's Fully Private Auction Protocol Revisited,abaeaaf0117ab90b6c3758da84138210,http://arxiv.org/abs/1210.6780v3
16472," We show that the multiplicative weight update method provides a simple recipe
for designing and analyzing optimal Bayesian Incentive Compatible (BIC)
auctions, and reduces the time complexity of the problem to pseudo-polynomial
in parameters that depend on a single agent instead of depending on the size of
the joint type space. We use this framework to design computationally efficient
optimal auctions that satisfy ex-post Individual Rationality in the presence of
constraints such as (hard, private) budgets and envy-freeness. We also design
optimal auctions when buyers and a seller's utility functions are non-linear.
Scenarios with such functions include (a) auctions with ""quitting rights"", (b)
cost to borrow money beyond budget, (c) a seller's and buyers' risk aversion.
Finally, we show how our framework also yields optimal auctions for a variety
of auction settings considered in Cai et al. and Alaei et al., albeit with
pseudo-polynomial running times.
",anand bhalgat,,2012.0,,arXiv,Bhalgat2012,True,,arXiv,Not available,Optimal Auctions via the Multiplicative Weight Method,e16b45ae03f0524fe1c821c46f6dcc2b,http://arxiv.org/abs/1211.1699v3
16473," We show that the multiplicative weight update method provides a simple recipe
for designing and analyzing optimal Bayesian Incentive Compatible (BIC)
auctions, and reduces the time complexity of the problem to pseudo-polynomial
in parameters that depend on single agent instead of depending on the size of
the joint type space. We use this framework to design computationally efficient
optimal auctions that satisfy ex-post Individual Rationality in the presence of
constraints such as (hard, private) budgets and envy-freeness. We also design
optimal auctions when buyers and a seller's utility functions are non-linear.
Scenarios with such functions include (a) auctions with ""quitting rights"", (b)
cost to borrow money beyond budget, (c) a seller's and buyers' risk aversion.
Finally, we show how our framework also yields optimal auctions for a variety
of auction settings considered in Cai et al. and Alaei et al., albeit with
pseudo-polynomial running times.
",sreenivas gollapudi,,2012.0,,arXiv,Bhalgat2012,True,,arXiv,Not available,Optimal Auctions via the Multiplicative Weight Method,e16b45ae03f0524fe1c821c46f6dcc2b,http://arxiv.org/abs/1211.1699v3
16474," We put forward a new model of congestion games where agents have uncertainty
over the routes used by other agents. We take a non-probabilistic approach,
assuming that each agent knows that the number of agents using an edge is
within a certain range. Given this uncertainty, we model agents who either
minimize their worst-case cost (WCC) or their worst-case regret (WCR), and
study implications on equilibrium existence, convergence through adaptive play,
and efficiency. Under the WCC behavior the game reduces to a modified
congestion game, and welfare improves when agents have moderate uncertainty.
Under WCR behavior the game is not, in general, a congestion game, but we show
convergence and efficiency bounds for a simple class of games.
",david parkes,,2014.0,,arXiv,Meir2014,True,,arXiv,Not available,Congestion Games with Distance-Based Strict Uncertainty,c674bb600a590efa17de65ffa1ac0fc8,http://arxiv.org/abs/1411.4943v2
16475," We show that the multiplicative weight update method provides a simple recipe
for designing and analyzing optimal Bayesian Incentive Compatible (BIC)
auctions, and reduces the time complexity of the problem to pseudo-polynomial
in parameters that depend on a single agent instead of depending on the size of
the joint type space. We use this framework to design computationally efficient
optimal auctions that satisfy ex-post Individual Rationality in the presence of
constraints such as (hard, private) budgets and envy-freeness. We also design
optimal auctions when buyers and a seller's utility functions are non-linear.
Scenarios with such functions include (a) auctions with ""quitting rights"", (b)
cost to borrow money beyond budget, (c) a seller's and buyers' risk aversion.
Finally, we show how our framework also yields optimal auctions for a variety
of auction settings considered in Cai et al. and Alaei et al., albeit with
pseudo-polynomial running times.
",kamesh munagala,,2012.0,,arXiv,Bhalgat2012,True,,arXiv,Not available,Optimal Auctions via the Multiplicative Weight Method,e16b45ae03f0524fe1c821c46f6dcc2b,http://arxiv.org/abs/1211.1699v3
16476," Calibration is a basic property for prediction systems, and algorithms for
achieving it are well-studied in both statistics and machine learning. In many
applications, however, the predictions are used to make decisions that select
which observations are made. This makes calibration difficult, as adjusting
predictions to achieve calibration changes future data. We focus on
click-through-rate (CTR) prediction for search ad auctions. Here, CTR
predictions are used by an auction that determines which ads are shown, and we
want to maximize the value generated by the auction.
We show that certain natural notions of calibration can be impossible to
achieve, depending on the details of the auction. We also show that it can be
impossible to maximize auction efficiency while using calibrated predictions.
Finally, we give conditions under which calibration is achievable and
simultaneously maximizes auction efficiency: roughly speaking, bids and queries
must not contain information about CTRs that is not already captured by the
predictions.
",h. mcmahan,,2012.0,,arXiv,McMahan2012,True,,arXiv,Not available,On Calibrated Predictions for Auction Selection Mechanisms,ec434173b4875a963a3f635f26281c3e,http://arxiv.org/abs/1211.3955v1
16477," Calibration is a basic property for prediction systems, and algorithms for
achieving it are well-studied in both statistics and machine learning. In many
applications, however, the predictions are used to make decisions that select
which observations are made. This makes calibration difficult, as adjusting
predictions to achieve calibration changes future data. We focus on
click-through-rate (CTR) prediction for search ad auctions. Here, CTR
predictions are used by an auction that determines which ads are shown, and we
want to maximize the value generated by the auction.
We show that certain natural notions of calibration can be impossible to
achieve, depending on the details of the auction. We also show that it can be
impossible to maximize auction efficiency while using calibrated predictions.
Finally, we give conditions under which calibration is achievable and
simultaneously maximizes auction efficiency: roughly speaking, bids and queries
must not contain information about CTRs that is not already captured by the
predictions.
",omkar muralidharan,,2012.0,,arXiv,McMahan2012,True,,arXiv,Not available,On Calibrated Predictions for Auction Selection Mechanisms,ec434173b4875a963a3f635f26281c3e,http://arxiv.org/abs/1211.3955v1
16478," In the context of auctions for digital goods, an interesting random sampling
auction has been proposed by Goldberg, Hartline, and Wright [2001]. This
auction has been analyzed by Feige, Flaxman, Hartline, and Kleinberg [2005],
who have shown that it is 15-competitive in the worst case (which is
substantially better than the previously proven constant bounds but still far
from the conjectured competitive ratio of 4). In this paper, we prove that the
aforementioned random sampling auction is indeed 4-competitive for a large
class of instances where the number of bids above (or equal to) the optimal
sale price is at least 6. We also show that it is 4.68-competitive for the
small class of remaining instances, thus leaving a negligible gap between the
lower and upper bound. We employ a mix of probabilistic techniques and dynamic
programming to compute these bounds.
",saeed alaei,,2013.0,,arXiv,Alaei2013,True,,arXiv,Not available,On Random Sampling Auctions for Digital Goods,53a187d0046ee86735ccf8b3c0ea050b,http://arxiv.org/abs/1303.4438v1
16479," In the context of auctions for digital goods, an interesting random sampling
auction has been proposed by Goldberg, Hartline, and Wright [2001]. This
auction has been analyzed by Feige, Flaxman, Hartline, and Kleinberg [2005],
who have shown that it is 15-competitive in the worst case (which is
substantially better than the previously proven constant bounds but still far
from the conjectured competitive ratio of 4). In this paper, we prove that the
aforementioned random sampling auction is indeed 4-competitive for a large
class of instances where the number of bids above (or equal to) the optimal
sale price is at least 6. We also show that it is 4.68-competitive for the
small class of remaining instances, thus leaving a negligible gap between the
lower and upper bound. We employ a mix of probabilistic techniques and dynamic
programming to compute these bounds.
",azarakhsh malekian,,2013.0,,arXiv,Alaei2013,True,,arXiv,Not available,On Random Sampling Auctions for Digital Goods,53a187d0046ee86735ccf8b3c0ea050b,http://arxiv.org/abs/1303.4438v1
16480," In the context of auctions for digital goods, an interesting random sampling
auction has been proposed by Goldberg, Hartline, and Wright [2001]. This
auction has been analyzed by Feige, Flaxman, Hartline, and Kleinberg [2005],
who have shown that it is 15-competitive in the worst case (which is
substantially better than the previously proven constant bounds but still far
from the conjectured competitive ratio of 4). In this paper, we prove that the
aforementioned random sampling auction is indeed 4-competitive for a large
class of instances where the number of bids above (or equal to) the optimal
sale price is at least 6. We also show that it is 4.68-competitive for the
small class of remaining instances, thus leaving a negligible gap between the
lower and upper bound. We employ a mix of probabilistic techniques and dynamic
programming to compute these bounds.
",aravind srinivasan,,2013.0,,arXiv,Alaei2013,True,,arXiv,Not available,On Random Sampling Auctions for Digital Goods,53a187d0046ee86735ccf8b3c0ea050b,http://arxiv.org/abs/1303.4438v1
16481," In this paper, we introduce a novel, non-recursive, maximal matching
algorithm for double auctions, which aims to maximize the amount of commodities
to be traded. It differs from the usual equilibrium matching, which clears a
market at the equilibrium price. We compare the two algorithms through
experimental analyses, showing that the maximal matching algorithm is favored
in scenarios where trading volume is a priority and that it may possibly
improve allocative efficiency over equilibrium matching as well. A
parameterized algorithm that incorporates both maximal matching and equilibrium
matching as special cases is also presented to allow flexible control on how
much to trade in a double auction.
",jinzhong niu,,2013.0,,arXiv,Niu2013,True,,arXiv,Not available,Maximizing Matching in Double-sided Auctions,51e83871b18a0c591f28fb9a712fc3d2,http://arxiv.org/abs/1304.3135v1
16482," In this paper, we introduce a novel, non-recursive, maximal matching
algorithm for double auctions, which aims to maximize the amount of commodities
to be traded. It differs from the usual equilibrium matching, which clears a
market at the equilibrium price. We compare the two algorithms through
experimental analyses, showing that the maximal matching algorithm is favored
in scenarios where trading volume is a priority and that it may possibly
improve allocative efficiency over equilibrium matching as well. A
parameterized algorithm that incorporates both maximal matching and equilibrium
matching as special cases is also presented to allow flexible control on how
much to trade in a double auction.
",simon parsons,,2013.0,,arXiv,Niu2013,True,,arXiv,Not available,Maximizing Matching in Double-sided Auctions,51e83871b18a0c591f28fb9a712fc3d2,http://arxiv.org/abs/1304.3135v1
16483," In a sponsored search auction, decisions about how to rank ads impose
tradeoffs between objectives such as revenue and welfare. In this paper, we
examine how these tradeoffs should be made. We begin by arguing that the most
natural solution concept to evaluate these tradeoffs is the lowest symmetric
Nash equilibrium (SNE). As part of this argument, we generalise the well known
connection between the lowest SNE and the VCG outcome. We then propose a new
ranking algorithm, loosely based on the revenue-optimal auction, that uses a
reserve price to order the ads (not just to filter them) and give conditions
under which it raises more revenue than simply applying that reserve price.
Finally, we conduct extensive simulations examining the tradeoffs enabled by
different ranking algorithms and show that our proposed algorithm enables
superior operating points by a variety of metrics.
",ben roberts,,2013.0,,arXiv,Roberts2013,True,,arXiv,Not available,Ranking and Tradeoffs in Sponsored Search Auctions,033a3029b9193dd8d04f4dc6e7707b0c,http://arxiv.org/abs/1304.7642v1
16484," In a sponsored search auction, decisions about how to rank ads impose
tradeoffs between objectives such as revenue and welfare. In this paper, we
examine how these tradeoffs should be made. We begin by arguing that the most
natural solution concept to evaluate these tradeoffs is the lowest symmetric
Nash equilibrium (SNE). As part of this argument, we generalise the well-known
connection between the lowest SNE and the VCG outcome. We then propose a new
ranking algorithm, loosely based on the revenue-optimal auction, that uses a
reserve price to order the ads (not just to filter them) and give conditions
under which it raises more revenue than simply applying that reserve price.
Finally, we conduct extensive simulations examining the tradeoffs enabled by
different ranking algorithms and show that our proposed algorithm enables
superior operating points by a variety of metrics.
",dinan gunawardena,,2013.0,,arXiv,Roberts2013,True,,arXiv,Not available,Ranking and Tradeoffs in Sponsored Search Auctions,033a3029b9193dd8d04f4dc6e7707b0c,http://arxiv.org/abs/1304.7642v1
16485," A repeated game is an effective tool to model interactions and conflicts for
players aiming to achieve their objectives on a long-term basis. Contrary to
static noncooperative games that model an interaction among players in only one
period, in repeated games, interactions of players repeat for multiple periods;
and thus the players become aware of other players' past behaviors and their
future benefits, and will adapt their behavior accordingly. In wireless
networks, conflicts among wireless nodes can lead to selfish behaviors,
resulting in poor network performance and detrimental individual payoffs. In
this paper, we survey the applications of repeated games in different wireless
networks. The main goal is to demonstrate the use of repeated games to
encourage wireless nodes to cooperate, thereby improving network performance
and avoiding network disruption due to selfish behaviors. Furthermore, various
problems in wireless networks and variations of repeated game models together
with the corresponding solutions are discussed in this survey. Finally, we
outline some open issues and future research directions.
",dinh hoang,,2015.0,,arXiv,Hoang2015,True,,arXiv,Not available,Applications of Repeated Games in Wireless Networks: A Survey,f2963180e0e32bd515fb584b5dd3386c,http://arxiv.org/abs/1501.02886v1
16486," In a sponsored search auction, decisions about how to rank ads impose
tradeoffs between objectives such as revenue and welfare. In this paper, we
examine how these tradeoffs should be made. We begin by arguing that the most
natural solution concept to evaluate these tradeoffs is the lowest symmetric
Nash equilibrium (SNE). As part of this argument, we generalise the well-known
connection between the lowest SNE and the VCG outcome. We then propose a new
ranking algorithm, loosely based on the revenue-optimal auction, that uses a
reserve price to order the ads (not just to filter them) and give conditions
under which it raises more revenue than simply applying that reserve price.
Finally, we conduct extensive simulations examining the tradeoffs enabled by
different ranking algorithms and show that our proposed algorithm enables
superior operating points by a variety of metrics.
",ian kash,,2013.0,,arXiv,Roberts2013,True,,arXiv,Not available,Ranking and Tradeoffs in Sponsored Search Auctions,033a3029b9193dd8d04f4dc6e7707b0c,http://arxiv.org/abs/1304.7642v1
16487," In a sponsored search auction, decisions about how to rank ads impose
tradeoffs between objectives such as revenue and welfare. In this paper, we
examine how these tradeoffs should be made. We begin by arguing that the most
natural solution concept to evaluate these tradeoffs is the lowest symmetric
Nash equilibrium (SNE). As part of this argument, we generalise the well-known
connection between the lowest SNE and the VCG outcome. We then propose a new
ranking algorithm, loosely based on the revenue-optimal auction, that uses a
reserve price to order the ads (not just to filter them) and give conditions
under which it raises more revenue than simply applying that reserve price.
Finally, we conduct extensive simulations examining the tradeoffs enabled by
different ranking algorithms and show that our proposed algorithm enables
superior operating points by a variety of metrics.
",peter key,,2013.0,,arXiv,Roberts2013,True,,arXiv,Not available,Ranking and Tradeoffs in Sponsored Search Auctions,033a3029b9193dd8d04f4dc6e7707b0c,http://arxiv.org/abs/1304.7642v1
16491," We consider Vickrey-Clarke-Groves (VCG) auctions for a very general
combinatorial structure, in an average-case setting where item costs are
independent, identically distributed uniform random variables. We prove that
the expected VCG cost is at least double the expected nominal cost, and exactly
double when the desired structure is a basis of a bridgeless matroid. In the
matroid case we further show that, conditioned upon the VCG cost, the
expectation of the nominal cost is exactly half the VCG cost, and we show
several results on variances and covariances among the nominal cost, the VCG
cost, and related quantities. As an application, we find the asymptotic
variance of the VCG cost of the minimum spanning tree in a complete graph with
random edge costs.
",svante janson,,2013.0,,arXiv,Janson2013,True,,arXiv,Not available,VCG Auction Mechanism Cost Expectations and Variances,01b86454da7398df7472fb794a641836,http://arxiv.org/abs/1310.1777v1
16492," We consider Vickrey-Clarke-Groves (VCG) auctions for a very general
combinatorial structure, in an average-case setting where item costs are
independent, identically distributed uniform random variables. We prove that
the expected VCG cost is at least double the expected nominal cost, and exactly
double when the desired structure is a basis of a bridgeless matroid. In the
matroid case we further show that, conditioned upon the VCG cost, the
expectation of the nominal cost is exactly half the VCG cost, and we show
several results on variances and covariances among the nominal cost, the VCG
cost, and related quantities. As an application, we find the asymptotic
variance of the VCG cost of the minimum spanning tree in a complete graph with
random edge costs.
",gregory sorkin,,2013.0,,arXiv,Janson2013,True,,arXiv,Not available,VCG Auction Mechanism Cost Expectations and Variances,01b86454da7398df7472fb794a641836,http://arxiv.org/abs/1310.1777v1
16496," A repeated game is an effective tool to model interactions and conflicts for
players aiming to achieve their objectives on a long-term basis. Contrary to
static noncooperative games that model an interaction among players in only one
period, in repeated games, interactions of players repeat for multiple periods;
and thus the players become aware of other players' past behaviors and their
future benefits, and will adapt their behavior accordingly. In wireless
networks, conflicts among wireless nodes can lead to selfish behaviors,
resulting in poor network performance and detrimental individual payoffs. In
this paper, we survey the applications of repeated games in different wireless
networks. The main goal is to demonstrate the use of repeated games to
encourage wireless nodes to cooperate, thereby improving network performance
and avoiding network disruption due to selfish behaviors. Furthermore, various
problems in wireless networks and variations of repeated game models together
with the corresponding solutions are discussed in this survey. Finally, we
outline some open issues and future research directions.
",xiao lu,,2015.0,,arXiv,Hoang2015,True,,arXiv,Not available,Applications of Repeated Games in Wireless Networks: A Survey,f2963180e0e32bd515fb584b5dd3386c,http://arxiv.org/abs/1501.02886v1
16497," Bidding in simultaneous auctions is challenging because an agent's value for
a good in one auction may depend on the uncertain outcome of other auctions:
the so-called exposure problem. Given the gap in understanding of general
simultaneous auction games, previous works have tackled this problem with
heuristic strategies that employ probabilistic price predictions. We define a
concept of self-confirming prices, and show that within an independent private
value model, Bayes-Nash equilibrium can be fully characterized as a profile of
optimal price prediction strategies with self-confirming predictions. We
exhibit practical procedures to compute approximately optimal bids given a
probabilistic price prediction, and near self-confirming price predictions
given a price-prediction strategy. An extensive empirical game-theoretic
analysis demonstrates that self-confirming price prediction strategies are
effective in simultaneous auction games with both complementary and
substitutable preference structures.
",michael wellman,,2012.0,,arXiv,Wellman2012,True,,arXiv,Not available,"Self-Confirming Price Prediction Strategies for Simultaneous One-Shot
Auctions",5f4547b16928cafa41df0fa5992e92e0,http://arxiv.org/abs/1210.4915v1
16498," Bidding in simultaneous auctions is challenging because an agent's value for
a good in one auction may depend on the uncertain outcome of other auctions:
the so-called exposure problem. Given the gap in understanding of general
simultaneous auction games, previous works have tackled this problem with
heuristic strategies that employ probabilistic price predictions. We define a
concept of self-confirming prices, and show that within an independent private
value model, Bayes-Nash equilibrium can be fully characterized as a profile of
optimal price prediction strategies with self-confirming predictions. We
exhibit practical procedures to compute approximately optimal bids given a
probabilistic price prediction, and near self-confirming price predictions
given a price-prediction strategy. An extensive empirical game-theoretic
analysis demonstrates that self-confirming price prediction strategies are
effective in simultaneous auction games with both complementary and
substitutable preference structures.
",eric sodomka,,2012.0,,arXiv,Wellman2012,True,,arXiv,Not available,"Self-Confirming Price Prediction Strategies for Simultaneous One-Shot
Auctions",5f4547b16928cafa41df0fa5992e92e0,http://arxiv.org/abs/1210.4915v1
16499," Bidding in simultaneous auctions is challenging because an agent's value for
a good in one auction may depend on the uncertain outcome of other auctions:
the so-called exposure problem. Given the gap in understanding of general
simultaneous auction games, previous works have tackled this problem with
heuristic strategies that employ probabilistic price predictions. We define a
concept of self-confirming prices, and show that within an independent private
value model, Bayes-Nash equilibrium can be fully characterized as a profile of
optimal price prediction strategies with self-confirming predictions. We
exhibit practical procedures to compute approximately optimal bids given a
probabilistic price prediction, and near self-confirming price predictions
given a price-prediction strategy. An extensive empirical game-theoretic
analysis demonstrates that self-confirming price prediction strategies are
effective in simultaneous auction games with both complementary and
substitutable preference structures.
",amy greenwald,,2012.0,,arXiv,Wellman2012,True,,arXiv,Not available,"Self-Confirming Price Prediction Strategies for Simultaneous One-Shot
Auctions",5f4547b16928cafa41df0fa5992e92e0,http://arxiv.org/abs/1210.4915v1
16500," In many natural settings agents participate in multiple different auctions
that are not simultaneous. In such auctions, future opportunities affect
strategic considerations of the players. The goal of this paper is to develop a
quantitative understanding of outcomes of such sequential auctions. In earlier
work (Paes Leme et al. 2012) we initiated the study of the price of anarchy in
sequential auctions. We considered sequential first price auctions in the full
information model, where players are aware of all future opportunities, as well
as the valuation of all players. In this paper, we study efficiency in
sequential auctions in the Bayesian environment, relaxing the informational
assumption on the players. We focus on two environments, both studied in the
full information model in Paes Leme et al. 2012, matching markets and matroid
auctions. In the full information environment, a sequential first price cut
auction for matroid settings is efficient. In Bayesian environments this is no
longer the case, as we show using a simple example with three players. Our main
result is a bound of $1+\frac{e}{e-1}\approx 2.58$ on the price of anarchy in
both matroid auctions and single-value matching markets (even with correlated
types) and a bound of $2\frac{e}{e-1}\approx 3.16$ for general matching markets
with independent types. To bound the price of anarchy we need to consider
possible deviations at an equilibrium. In a sequential Bayesian environment the
effect of deviations is more complex than in one-shot games; early bids allow
others to infer information about the player's value. We create effective
deviations despite the presence of this difficulty by introducing a bluffing
technique of independent interest.
",vasilis syrgkanis,,2012.0,,arXiv,Syrgkanis2012,True,,arXiv,Not available,Bayesian Sequential Auctions,f4d6d2838cdc78bdbdd4f3b9f4370193,http://arxiv.org/abs/1206.4771v1
16501," In many natural settings agents participate in multiple different auctions
that are not simultaneous. In such auctions, future opportunities affect
strategic considerations of the players. The goal of this paper is to develop a
quantitative understanding of outcomes of such sequential auctions. In earlier
work (Paes Leme et al. 2012) we initiated the study of the price of anarchy in
sequential auctions. We considered sequential first price auctions in the full
information model, where players are aware of all future opportunities, as well
as the valuation of all players. In this paper, we study efficiency in
sequential auctions in the Bayesian environment, relaxing the informational
assumption on the players. We focus on two environments, both studied in the
full information model in Paes Leme et al. 2012: matching markets and matroid
auctions. In the full information environment, a sequential first price cut
auction for matroid settings is efficient. In Bayesian environments this is no
longer the case, as we show using a simple example with three players. Our main
result is a bound of $1+\frac{e}{e-1}\approx 2.58$ on the price of anarchy in
both matroid auctions and single-value matching markets (even with correlated
types) and a bound of $2\frac{e}{e-1}\approx 3.16$ for general matching markets
with independent types. To bound the price of anarchy we need to consider
possible deviations at an equilibrium. In a sequential Bayesian environment the
effect of deviations is more complex than in one-shot games; early bids allow
others to infer information about the player's value. We create effective
deviations despite the presence of this difficulty by introducing a bluffing
technique of independent interest.
",eva tardos,,2012.0,,arXiv,Syrgkanis2012,True,,arXiv,Not available,Bayesian Sequential Auctions,f4d6d2838cdc78bdbdd4f3b9f4370193,http://arxiv.org/abs/1206.4771v1
16502," This paper studies an environment of simultaneous, separate, first-price
auctions for complementary goods. Agents observe private values of each good
before making bids, and the complementarity between goods is explicitly
incorporated in their utility. For simplicity, a model is presented with two
first-price auctions and two bidders. We show that a monotone pure-strategy
Bayesian Nash Equilibrium exists in the environment.
",wiroy shin,,2013.0,,arXiv,Shin2013,True,,arXiv,Not available,Simultaneous auctions for complementary goods,878b4680f06c7892ffefd2adfbc511d7,http://arxiv.org/abs/1312.2641v1
16503," A central issue in applying auction theory in practice is the problem of
dealing with budget-constrained agents. A desirable goal in practice is to
design incentive compatible, individually rational, and Pareto optimal auctions
while respecting the budget constraints. Achieving this goal is particularly
challenging in the presence of nontrivial combinatorial constraints over the
set of feasible allocations.
Toward this goal and motivated by AdWords auctions, we present an auction for
{\em polymatroidal} environments satisfying the above properties. Our auction
employs a novel clinching technique with a clean geometric description and only
needs oracle access to the submodular function defining the polymatroid. As
a result, this auction not only simplifies and generalizes all previous
results but also applies to several new applications, including AdWords Auctions,
bandwidth markets, and video on demand. In particular, our characterization of
the AdWords auction as polymatroidal constraints might be of independent
interest. This allows us to design the first mechanism for Ad Auctions taking
into account simultaneously budgets, multiple keywords and multiple slots.
We show that it is impossible to extend this result to generic polyhedral
constraints. This also implies an impossibility result for multi-unit auctions
with decreasing marginal utilities in the presence of budget constraints.
",gagan goel,,2012.0,,arXiv,Goel2012,True,,arXiv,Not available,Polyhedral Clinching Auctions and the Adwords Polytope,226262e8a94ae769c0097e00eae7770f,http://arxiv.org/abs/1201.0404v3
16504," A central issue in applying auction theory in practice is the problem of
dealing with budget-constrained agents. A desirable goal in practice is to
design incentive compatible, individually rational, and Pareto optimal auctions
while respecting the budget constraints. Achieving this goal is particularly
challenging in the presence of nontrivial combinatorial constraints over the
set of feasible allocations.
Toward this goal and motivated by AdWords auctions, we present an auction for
{\em polymatroidal} environments satisfying the above properties. Our auction
employs a novel clinching technique with a clean geometric description and only
needs oracle access to the submodular function defining the polymatroid. As
a result, this auction not only simplifies and generalizes all previous
results but also applies to several new applications, including AdWords Auctions,
bandwidth markets, and video on demand. In particular, our characterization of
the AdWords auction as polymatroidal constraints might be of independent
interest. This allows us to design the first mechanism for Ad Auctions taking
into account simultaneously budgets, multiple keywords and multiple slots.
We show that it is impossible to extend this result to generic polyhedral
constraints. This also implies an impossibility result for multi-unit auctions
with decreasing marginal utilities in the presence of budget constraints.
",vahab mirrokni,,2012.0,,arXiv,Goel2012,True,,arXiv,Not available,Polyhedral Clinching Auctions and the Adwords Polytope,226262e8a94ae769c0097e00eae7770f,http://arxiv.org/abs/1201.0404v3
16505," A central issue in applying auction theory in practice is the problem of
dealing with budget-constrained agents. A desirable goal in practice is to
design incentive compatible, individually rational, and Pareto optimal auctions
while respecting the budget constraints. Achieving this goal is particularly
challenging in the presence of nontrivial combinatorial constraints over the
set of feasible allocations.
Toward this goal and motivated by AdWords auctions, we present an auction for
{\em polymatroidal} environments satisfying the above properties. Our auction
employs a novel clinching technique with a clean geometric description and only
needs oracle access to the submodular function defining the polymatroid. As
a result, this auction not only simplifies and generalizes all previous
results but also applies to several new applications, including AdWords Auctions,
bandwidth markets, and video on demand. In particular, our characterization of
the AdWords auction as polymatroidal constraints might be of independent
interest. This allows us to design the first mechanism for Ad Auctions taking
into account simultaneously budgets, multiple keywords and multiple slots.
We show that it is impossible to extend this result to generic polyhedral
constraints. This also implies an impossibility result for multi-unit auctions
with decreasing marginal utilities in the presence of budget constraints.
",renato leme,,2012.0,,arXiv,Goel2012,True,,arXiv,Not available,Polyhedral Clinching Auctions and the Adwords Polytope,226262e8a94ae769c0097e00eae7770f,http://arxiv.org/abs/1201.0404v3
16506," This paper develops a general approach, rooted in statistical learning
theory, to learning an approximately revenue-maximizing auction from data. We
introduce $t$-level auctions to interpolate between simple auctions, such as
welfare maximization with reserve prices, and optimal auctions, thereby
balancing the competing demands of expressivity and simplicity. We prove that
such auctions have small representation error, in the sense that for every
product distribution $F$ over bidders' valuations, there exists a $t$-level
auction with small $t$ and expected revenue close to optimal. We show that the
set of $t$-level auctions has modest pseudo-dimension (for polynomial $t$) and
therefore leads to small learning error. One consequence of our results is
that, in arbitrary single-parameter settings, one can learn a mechanism with
expected revenue arbitrarily close to optimal from a polynomial number of
samples.
",jamie morgenstern,,2015.0,,arXiv,Morgenstern2015,True,,arXiv,Not available,The Pseudo-Dimension of Near-Optimal Auctions,358ae158c68af120be262f1f2ba3535b,http://arxiv.org/abs/1506.03684v1
16507," A repeated game is an effective tool to model interactions and conflicts for
players aiming to achieve their objectives on a long-term basis. Contrary to
static noncooperative games that model an interaction among players in only one
period, in repeated games, interactions of players repeat for multiple periods;
and thus the players become aware of other players' past behaviors and their
future benefits, and will adapt their behavior accordingly. In wireless
networks, conflicts among wireless nodes can lead to selfish behaviors,
resulting in poor network performance and detrimental individual payoffs. In
this paper, we survey the applications of repeated games in different wireless
networks. The main goal is to demonstrate the use of repeated games to
encourage wireless nodes to cooperate, thereby improving network performance
and avoiding network disruption due to selfish behaviors. Furthermore, various
problems in wireless networks and variations of repeated game models together
with the corresponding solutions are discussed in this survey. Finally, we
outline some open issues and future research directions.
",dusit niyato,,2015.0,,arXiv,Hoang2015,True,,arXiv,Not available,Applications of Repeated Games in Wireless Networks: A Survey,f2963180e0e32bd515fb584b5dd3386c,http://arxiv.org/abs/1501.02886v1
16508," This paper develops a general approach, rooted in statistical learning
theory, to learning an approximately revenue-maximizing auction from data. We
introduce $t$-level auctions to interpolate between simple auctions, such as
welfare maximization with reserve prices, and optimal auctions, thereby
balancing the competing demands of expressivity and simplicity. We prove that
such auctions have small representation error, in the sense that for every
product distribution $F$ over bidders' valuations, there exists a $t$-level
auction with small $t$ and expected revenue close to optimal. We show that the
set of $t$-level auctions has modest pseudo-dimension (for polynomial $t$) and
therefore leads to small learning error. One consequence of our results is
that, in arbitrary single-parameter settings, one can learn a mechanism with
expected revenue arbitrarily close to optimal from a polynomial number of
samples.
",tim roughgarden,,2015.0,,arXiv,Morgenstern2015,True,,arXiv,Not available,The Pseudo-Dimension of Near-Optimal Auctions,358ae158c68af120be262f1f2ba3535b,http://arxiv.org/abs/1506.03684v1
16509," The increasing number of cloud-based Internet applications demands
efficient resource and cost management. This paper proposes a real-time group
auction system for the cloud instance market. The system is designed based on a
combinatorial double auction, and its applicability and effectiveness are
evaluated in terms of resource efficiency and monetary benefits to auction
participants (e.g., cloud users and providers). The proposed auction system
assists them to decide when and how providers allocate their resources to which
users. Furthermore, we propose a distributed algorithm using a group formation
game that determines which users and providers will trade resources by their
cooperative decisions. To find how to allocate the resources, the utility
optimization problem is formulated as a binary integer programming problem, and
the nearly optimal solution is obtained by a heuristic algorithm with quadratic
time complexity. In comparison studies, the proposed real-time group auction
system with cooperation outperforms an individual auction in terms of the
resource efficiency (e.g., the request acceptance rate for users and resource
utilization for providers) and monetary benefits (e.g., average payments for
users and total profits for providers).
",chonho lee,,2013.0,,arXiv,Lee2013,True,,arXiv,Not available,"A Real-time Group Auction System for Efficient Allocation of Cloud
Internet Applications",007c6809dc4ca00e600550b9b8e62478,http://arxiv.org/abs/1304.0539v1
16510," The increasing number of cloud-based Internet applications demands
efficient resource and cost management. This paper proposes a real-time group
auction system for the cloud instance market. The system is designed based on a
combinatorial double auction, and its applicability and effectiveness are
evaluated in terms of resource efficiency and monetary benefits to auction
participants (e.g., cloud users and providers). The proposed auction system
assists them to decide when and how providers allocate their resources to which
users. Furthermore, we propose a distributed algorithm using a group formation
game that determines which users and providers will trade resources by their
cooperative decisions. To find how to allocate the resources, the utility
optimization problem is formulated as a binary integer programming problem, and
the nearly optimal solution is obtained by a heuristic algorithm with quadratic
time complexity. In comparison studies, the proposed real-time group auction
system with cooperation outperforms an individual auction in terms of the
resource efficiency (e.g., the request acceptance rate for users and resource
utilization for providers) and monetary benefits (e.g., average payments for
users and total profits for providers).
",ping wang,,2013.0,,arXiv,Lee2013,True,,arXiv,Not available,"A Real-time Group Auction System for Efficient Allocation of Cloud
Internet Applications",007c6809dc4ca00e600550b9b8e62478,http://arxiv.org/abs/1304.0539v1
16511," The increasing number of cloud-based Internet applications demands
efficient resource and cost management. This paper proposes a real-time group
auction system for the cloud instance market. The system is designed based on a
combinatorial double auction, and its applicability and effectiveness are
evaluated in terms of resource efficiency and monetary benefits to auction
participants (e.g., cloud users and providers). The proposed auction system
assists them to decide when and how providers allocate their resources to which
users. Furthermore, we propose a distributed algorithm using a group formation
game that determines which users and providers will trade resources by their
cooperative decisions. To find how to allocate the resources, the utility
optimization problem is formulated as a binary integer programming problem, and
the nearly optimal solution is obtained by a heuristic algorithm with quadratic
time complexity. In comparison studies, the proposed real-time group auction
system with cooperation outperforms an individual auction in terms of the
resource efficiency (e.g., the request acceptance rate for users and resource
utilization for providers) and monetary benefits (e.g., average payments for
users and total profits for providers).
",dusit niyato,,2013.0,,arXiv,Lee2013,True,,arXiv,Not available,"A Real-time Group Auction System for Efficient Allocation of Cloud
Internet Applications",007c6809dc4ca00e600550b9b8e62478,http://arxiv.org/abs/1304.0539v1
16515," We study truthful mechanisms for hiring a team of agents in three classes of
set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For
Vertex Cover auctions, the vertices are owned by selfish and rational agents,
and the auctioneer wants to purchase a vertex cover from them. For k-flow
auctions, the edges are owned by the agents, and the auctioneer wants to
purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for
cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know
their costs, and the auctioneer needs to select a feasible set and payments
based on bids made by the agents.
We present constant-competitive truthful mechanisms for all three set
systems. That is, the maximum overpayment of the mechanism is within a constant
factor of the maximum overpayment of any truthful mechanism, for every set
system in the class. The mechanism for Vertex Cover is based on scaling each
bid by a multiplier derived from the dominant eigenvector of a certain matrix.
The mechanism for k-flows prunes the graph to be minimally (k+1)-connected, and
then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts
contracts the graph until all s-t paths have length exactly 2, and then applies
the Vertex Cover mechanism.
",david kempe,,2009.0,,arXiv,Kempe2009,True,,arXiv,Not available,"Frugal and Truthful Auctions for Vertex Covers, Flows, and Cuts",c03b2cc6986525592437afb81b43c5e6,http://arxiv.org/abs/0912.3310v2
16516," We study truthful mechanisms for hiring a team of agents in three classes of
set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For
Vertex Cover auctions, the vertices are owned by selfish and rational agents,
and the auctioneer wants to purchase a vertex cover from them. For k-flow
auctions, the edges are owned by the agents, and the auctioneer wants to
purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for
cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know
their costs, and the auctioneer needs to select a feasible set and payments
based on bids made by the agents.
We present constant-competitive truthful mechanisms for all three set
systems. That is, the maximum overpayment of the mechanism is within a constant
factor of the maximum overpayment of any truthful mechanism, for every set
system in the class. The mechanism for Vertex Cover is based on scaling each
bid by a multiplier derived from the dominant eigenvector of a certain matrix.
The mechanism for k-flows prunes the graph to be minimally (k+1)-connected, and
then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts
contracts the graph until all s-t paths have length exactly 2, and then applies
the Vertex Cover mechanism.
",mahyar salek,,2009.0,,arXiv,Kempe2009,True,,arXiv,Not available,"Frugal and Truthful Auctions for Vertex Covers, Flows, and Cuts",c03b2cc6986525592437afb81b43c5e6,http://arxiv.org/abs/0912.3310v2
16517," We study truthful mechanisms for hiring a team of agents in three classes of
set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For
Vertex Cover auctions, the vertices are owned by selfish and rational agents,
and the auctioneer wants to purchase a vertex cover from them. For k-flow
auctions, the edges are owned by the agents, and the auctioneer wants to
purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for
cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know
their costs, and the auctioneer needs to select a feasible set and payments
based on bids made by the agents.
We present constant-competitive truthful mechanisms for all three set
systems. That is, the maximum overpayment of the mechanism is within a constant
factor of the maximum overpayment of any truthful mechanism, for every set
system in the class. The mechanism for Vertex Cover is based on scaling each
bid by a multiplier derived from the dominant eigenvector of a certain matrix.
The mechanism for k-flows prunes the graph to be minimally (k+1)-connected, and
then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts
contracts the graph until all s-t paths have length exactly 2, and then applies
the Vertex Cover mechanism.
",cristopher moore,,2009.0,,arXiv,Kempe2009,True,,arXiv,Not available,"Frugal and Truthful Auctions for Vertex Covers, Flows, and Cuts",c03b2cc6986525592437afb81b43c5e6,http://arxiv.org/abs/0912.3310v2
16518," A repeated game is an effective tool to model interactions and conflicts for
players aiming to achieve their objectives on a long-term basis. Contrary to
static noncooperative games that model an interaction among players in only one
period, in repeated games, interactions of players repeat for multiple periods;
and thus the players become aware of other players' past behaviors and their
future benefits, and will adapt their behavior accordingly. In wireless
networks, conflicts among wireless nodes can lead to selfish behaviors,
resulting in poor network performance and detrimental individual payoffs. In
this paper, we survey the applications of repeated games in different wireless
networks. The main goal is to demonstrate the use of repeated games to
encourage wireless nodes to cooperate, thereby improving network performance
and avoiding network disruption due to selfish behaviors. Furthermore, various
problems in wireless networks and variations of repeated game models together
with the corresponding solutions are discussed in this survey. Finally, we
outline some open issues and future research directions.
",ping wang,,2015.0,,arXiv,Hoang2015,True,,arXiv,Not available,Applications of Repeated Games in Wireless Networks: A Survey,f2963180e0e32bd515fb584b5dd3386c,http://arxiv.org/abs/1501.02886v1
16519," With the increasing use of auctions in online advertising, there has been a
large effort to study seller revenue maximization, following Myerson's seminal
work, both theoretically and practically. We take the point of view of the
buyer in classical auctions and ask the question of whether she has an
incentive to shade her bid even in auctions that are reputed to be truthful,
when aware of the revenue optimization mechanism.
We show that in auctions such as the Myerson auction or a VCG with reserve
price set as the monopoly price, the buyer who is aware of this information has
indeed an incentive to shade. Intuitively, by selecting the revenue maximizing
auction, the seller introduces a dependency on the buyers' distributions in the
choice of the auction. We study in depth the case of the Myerson auction and
show that a symmetric equilibrium exists in which buyers shade non-linearly
what would be their first price bid. They then end up with an expected payoff
that is equal to what they would get in a first price auction with no reserve
price.
We conclude that a return to simple first price auctions with no reserve
price, or at least to non-dynamic anonymous ones, is desirable from the point of
view of both buyers and sellers, and would increase transparency.
",marc abeille,,2018.0,,arXiv,Abeille2018,True,,arXiv,Not available,Explicit shading strategies for repeated truthful auctions,79ead3760524629b5553ee965407af92,http://arxiv.org/abs/1805.00256v2
16520," With the increasing use of auctions in online advertising, there has been a
large effort to study seller revenue maximization, following Myerson's seminal
work, both theoretically and practically. We take the point of view of the
buyer in classical auctions and ask the question of whether she has an
incentive to shade her bid even in auctions that are reputed to be truthful,
when aware of the revenue optimization mechanism.
We show that in auctions such as the Myerson auction or a VCG with reserve
price set as the monopoly price, the buyer who is aware of this information has
indeed an incentive to shade. Intuitively, by selecting the revenue maximizing
auction, the seller introduces a dependency on the buyers' distributions in the
choice of the auction. We study in depth the case of the Myerson auction and
show that a symmetric equilibrium exists in which buyers shade non-linearly
what would be their first price bid. They then end up with an expected payoff
that is equal to what they would get in a first price auction with no reserve
price.
We conclude that a return to simple first price auctions with no reserve
price, or at least to non-dynamic anonymous ones, is desirable from the point of
view of both buyers and sellers, and would increase transparency.
",clement calauzenes,,2018.0,,arXiv,Abeille2018,True,,arXiv,Not available,Explicit shading strategies for repeated truthful auctions,79ead3760524629b5553ee965407af92,http://arxiv.org/abs/1805.00256v2
16521," With the increasing use of auctions in online advertising, there has been a
large effort to study seller revenue maximization, following Myerson's seminal
work, both theoretically and practically. We take the point of view of the
buyer in classical auctions and ask the question of whether she has an
incentive to shade her bid even in auctions that are reputed to be truthful,
when aware of the revenue optimization mechanism.
We show that in auctions such as the Myerson auction or a VCG with reserve
price set as the monopoly price, the buyer who is aware of this information has
indeed an incentive to shade. Intuitively, by selecting the revenue maximizing
auction, the seller introduces a dependency on the buyers' distributions in the
choice of the auction. We study in depth the case of the Myerson auction and
show that a symmetric equilibrium exists in which buyers shade non-linearly
what would be their first price bid. They then end up with an expected payoff
that is equal to what they would get in a first price auction with no reserve
price.
We conclude that a return to simple first price auctions with no reserve
price, or at least to non-dynamic anonymous ones, is desirable from the point of
view of both buyers and sellers, and would increase transparency.
",noureddine karoui,,2018.0,,arXiv,Abeille2018,True,,arXiv,Not available,Explicit shading strategies for repeated truthful auctions,79ead3760524629b5553ee965407af92,http://arxiv.org/abs/1805.00256v2
16522," With the increasing use of auctions in online advertising, there has been a
large effort to study seller revenue maximization, following Myerson's seminal
work, both theoretically and practically. We take the point of view of the
buyer in classical auctions and ask the question of whether she has an
incentive to shade her bid even in auctions that are reputed to be truthful,
when aware of the revenue optimization mechanism.
We show that in auctions such as the Myerson auction or a VCG with reserve
price set as the monopoly price, the buyer who is aware of this information has
indeed an incentive to shade. Intuitively, by selecting the revenue maximizing
auction, the seller introduces a dependency on the buyers' distributions in the
choice of the auction. We study in depth the case of the Myerson auction and
show that a symmetric equilibrium exists in which buyers shade non-linearly
what would be their first price bid. They then end up with an expected payoff
that is equal to what they would get in a first price auction with no reserve
price.
We conclude that a return to simple first price auctions with no reserve
price, or at least to non-dynamic anonymous ones, is desirable from the point of
view of both buyers and sellers, and would increase transparency.
",thomas nedelec,,2018.0,,arXiv,Abeille2018,True,,arXiv,Not available,Explicit shading strategies for repeated truthful auctions,79ead3760524629b5553ee965407af92,http://arxiv.org/abs/1805.00256v2
16523," With the increasing use of auctions in online advertising, there has been a
large effort to study seller revenue maximization, following Myerson's seminal
work, both theoretically and practically. We take the point of view of the
buyer in classical auctions and ask the question of whether she has an
incentive to shade her bid even in auctions that are reputed to be truthful,
when aware of the revenue optimization mechanism.
We show that in auctions such as the Myerson auction or a VCG with reserve
price set as the monopoly price, the buyer who is aware of this information has
indeed an incentive to shade. Intuitively, by selecting the revenue maximizing
auction, the seller introduces a dependency on the buyers' distributions in the
choice of the auction. We study in depth the case of the Myerson auction and
show that a symmetric equilibrium exists in which buyers shade non-linearly
what would be their first price bid. They then end up with an expected payoff
that is equal to what they would get in a first price auction with no reserve
price.
We conclude that a return to simple first price auctions with no reserve
price, or at least to non-dynamic anonymous ones, is desirable from the point of
view of both buyers and sellers, and would increase transparency.
",vianney perchet,,2018.0,,arXiv,Abeille2018,True,,arXiv,Not available,Explicit shading strategies for repeated truthful auctions,79ead3760524629b5553ee965407af92,http://arxiv.org/abs/1805.00256v2
16524," We consider budget constrained combinatorial auctions where bidder $i$ has a
private value $v_i$, a budget $b_i$, and is interested in all the items in
$S_i$. The value to agent $i$ of a set of items $R$ is $|R \cap S_i| \cdot
v_i$. Such auctions capture adword auctions, where advertisers offer a bid for
ads in response to an advertiser-dependent set of adwords, and advertisers have
budgets. It is known that even if all items are identical and all budgets are
public, it is not possible to be truthful and efficient. Our main result is a
novel auction that runs in polynomial time, is incentive compatible, and
ensures Pareto-optimality for such auctions when the valuations are private and
the budgets are public knowledge. This extends the result of Dobzinski et al.
(FOCS 2008) for auctions of multiple {\sl identical} items and public budgets
to single-valued {\sl combinatorial} auctions with public budgets.
",amos fiat,,2010.0,,arXiv,Fiat2010,True,,arXiv,Not available,Combinatorial Auctions with Budgets,8f33ad3f5eb49d4d1f7de52b394f8ff7,http://arxiv.org/abs/1001.1686v2
16525," We consider budget constrained combinatorial auctions where bidder $i$ has a
private value $v_i$, a budget $b_i$, and is interested in all the items in
$S_i$. The value to agent $i$ of a set of items $R$ is $|R \cap S_i| \cdot
v_i$. Such auctions capture adword auctions, where advertisers offer a bid for
ads in response to an advertiser-dependent set of adwords, and advertisers have
budgets. It is known that even if all items are identical and all budgets are
public, it is not possible to be truthful and efficient. Our main result is a
novel auction that runs in polynomial time, is incentive compatible, and
ensures Pareto-optimality for such auctions when the valuations are private and
the budgets are public knowledge. This extends the result of Dobzinski et al.
(FOCS 2008) for auctions of multiple {\sl identical} items and public budgets
to single-valued {\sl combinatorial} auctions with public budgets.
",stefano leonardi,,2010.0,,arXiv,Fiat2010,True,,arXiv,Not available,Combinatorial Auctions with Budgets,8f33ad3f5eb49d4d1f7de52b394f8ff7,http://arxiv.org/abs/1001.1686v2
16526," We consider budget constrained combinatorial auctions where bidder $i$ has a
private value $v_i$, a budget $b_i$, and is interested in all the items in
$S_i$. The value to agent $i$ of a set of items $R$ is $|R \cap S_i| \cdot
v_i$. Such auctions capture adword auctions, where advertisers offer a bid for
ads in response to an advertiser-dependent set of adwords, and advertisers have
budgets. It is known that even if all items are identical and all budgets are
public, it is not possible to be truthful and efficient. Our main result is a
novel auction that runs in polynomial time, is incentive compatible, and
ensures Pareto-optimality for such auctions when the valuations are private and
the budgets are public knowledge. This extends the result of Dobzinski et al.
(FOCS 2008) for auctions of multiple {\sl identical} items and public budgets
to single-valued {\sl combinatorial} auctions with public budgets.
",jared saia,,2010.0,,arXiv,Fiat2010,True,,arXiv,Not available,Combinatorial Auctions with Budgets,8f33ad3f5eb49d4d1f7de52b394f8ff7,http://arxiv.org/abs/1001.1686v2
16527," We consider budget constrained combinatorial auctions where bidder $i$ has a
private value $v_i$, a budget $b_i$, and is interested in all the items in
$S_i$. The value to agent $i$ of a set of items $R$ is $|R \cap S_i| \cdot
v_i$. Such auctions capture adword auctions, where advertisers offer a bid for
ads in response to an advertiser-dependent set of adwords, and advertisers have
budgets. It is known that even if all items are identical and all budgets are
public, it is not possible to be truthful and efficient. Our main result is a
novel auction that runs in polynomial time, is incentive compatible, and
ensures Pareto-optimality for such auctions when the valuations are private and
the budgets are public knowledge. This extends the result of Dobzinski et al.
(FOCS 2008) for auctions of multiple {\sl identical} items and public budgets
to single-valued {\sl combinatorial} auctions with public budgets.
",piotr sankowski,,2010.0,,arXiv,Fiat2010,True,,arXiv,Not available,Combinatorial Auctions with Budgets,8f33ad3f5eb49d4d1f7de52b394f8ff7,http://arxiv.org/abs/1001.1686v2
16528," IaaS clouds invest substantial capital in operating their data centers.
Reducing the cost of resource provisioning is their perpetual goal.
Computing resource trading among multiple IaaS clouds provides an opportunity for
IaaS clouds to utilize cheaper resources to fulfill their jobs, by exploiting
the diversities of different clouds' workloads and operational costs. In this
paper, we focus on studying the IaaS clouds' cost reduction through computing
resource trading among multiple IaaS clouds. We formulate the global cost
minimization problem among multiple IaaS clouds under a cooperative scenario
where each individual cloud's workload and cost information is known. Taking
into consideration jobs with disparate lengths, a non-preemptive approximation
algorithm for leftover job migration and new job scheduling is designed. Given
the selfishness of individual clouds, we further design a randomized double
auction mechanism to elicit clouds' truthful bidding for buying or selling
virtual machines. We evaluate our algorithms using trace-driven simulations.
",jian zhao,,2013.0,,arXiv,Zhao2013,True,,arXiv,Not available,Cost Minimization in Multiple IaaS Clouds: A Double Auction Approach,a0c1d83c091bda707ac392755b2ac3d3,http://arxiv.org/abs/1308.0841v3
16529," A repeated game is an effective tool to model interactions and conflicts for
players aiming to achieve their objectives in a long-term basis. Contrary to
static noncooperative games that model an interaction among players in only one
period, in repeated games, interactions of players repeat for multiple periods;
and thus the players become aware of other players' past behaviors and their
future benefits, and will adapt their behavior accordingly. In wireless
networks, conflicts among wireless nodes can lead to selfish behaviors,
resulting in poor network performances and detrimental individual payoffs. In
this paper, we survey the applications of repeated games in different wireless
networks. The main goal is to demonstrate the use of repeated games to
encourage wireless nodes to cooperate, thereby improving network performances
and avoiding network disruption due to selfish behaviors. Furthermore, various
problems in wireless networks and variations of repeated game models together
with the corresponding solutions are discussed in this survey. Finally, we
outline some open issues and future research directions.
",zhu han,,2015.0,,arXiv,Hoang2015,True,,arXiv,Not available,Applications of Repeated Games in Wireless Networks: A Survey,f2963180e0e32bd515fb584b5dd3386c,http://arxiv.org/abs/1501.02886v1
16530," IaaS clouds invest substantial capital in operating their data centers.
Reducing the cost of resource provisioning is their perpetual goal.
Computing resource trading among multiple IaaS clouds provides an opportunity for
IaaS clouds to utilize cheaper resources to fulfill their jobs, by exploiting
the diversities of different clouds' workloads and operational costs. In this
paper, we focus on studying the IaaS clouds' cost reduction through computing
resource trading among multiple IaaS clouds. We formulate the global cost
minimization problem among multiple IaaS clouds under a cooperative scenario
where each individual cloud's workload and cost information is known. Taking
into consideration jobs with disparate lengths, a non-preemptive approximation
algorithm for leftover job migration and new job scheduling is designed. Given
the selfishness of individual clouds, we further design a randomized double
auction mechanism to elicit clouds' truthful bidding for buying or selling
virtual machines. We evaluate our algorithms using trace-driven simulations.
",chuan wu,,2013.0,,arXiv,Zhao2013,True,,arXiv,Not available,Cost Minimization in Multiple IaaS Clouds: A Double Auction Approach,a0c1d83c091bda707ac392755b2ac3d3,http://arxiv.org/abs/1308.0841v3
16531," IaaS clouds invest substantial capital in operating their data centers.
Reducing the cost of resource provisioning is their perpetual goal.
Computing resource trading among multiple IaaS clouds provides an opportunity for
IaaS clouds to utilize cheaper resources to fulfill their jobs, by exploiting
the diversities of different clouds' workloads and operational costs. In this
paper, we focus on studying the IaaS clouds' cost reduction through computing
resource trading among multiple IaaS clouds. We formulate the global cost
minimization problem among multiple IaaS clouds under a cooperative scenario
where each individual cloud's workload and cost information is known. Taking
into consideration jobs with disparate lengths, a non-preemptive approximation
algorithm for leftover job migration and new job scheduling is designed. Given
the selfishness of individual clouds, we further design a randomized double
auction mechanism to elicit clouds' truthful bidding for buying or selling
virtual machines. We evaluate our algorithms using trace-driven simulations.
",zongpeng li,,2013.0,,arXiv,Zhao2013,True,,arXiv,Not available,Cost Minimization in Multiple IaaS Clouds: A Double Auction Approach,a0c1d83c091bda707ac392755b2ac3d3,http://arxiv.org/abs/1308.0841v3
16532," Recent work has suggested reducing electricity generation cost by cutting the
peak to average ratio (PAR) without reducing the total amount of the loads.
However, most of these proposals rely on consumers' willingness to act. In this
paper, we propose an approach to cut PAR explicitly from the supply side. The
resulting cut loads are then distributed among consumers by means of a
multiunit auction conducted by an intelligent agent on behalf of the
consumer. This approach is also in line with the future vision of the smart
grid to have the demand side matched with the supply side. Experiments suggest
that our approach reduces overall system cost and gives benefit to both
consumers and the energy provider.
",tri wijaya,,2013.0,10.1109/COMSNETS.2013.6465595,"2013 Fifth International Conference on Communication Systems and
Networks (COMSNETS), vol., no., pp.1,6, 7-10 Jan. 2013",Wijaya2013,True,,arXiv,Not available,"Matching Demand with Supply in the Smart Grid using Agent-Based
Multiunit Auction",12c2f1f285175fae21698b094ba4a231,http://arxiv.org/abs/1308.4761v1
16533," Recent work has suggested reducing electricity generation cost by cutting the
peak to average ratio (PAR) without reducing the total amount of the loads.
However, most of these proposals rely on consumers' willingness to act. In this
paper, we propose an approach to cut PAR explicitly from the supply side. The
resulting cut loads are then distributed among consumers by means of a
multiunit auction conducted by an intelligent agent on behalf of the
consumer. This approach is also in line with the future vision of the smart
grid to have the demand side matched with the supply side. Experiments suggest
that our approach reduces overall system cost and gives benefit to both
consumers and the energy provider.
",kate larson,,2013.0,10.1109/COMSNETS.2013.6465595,"2013 Fifth International Conference on Communication Systems and
Networks (COMSNETS), vol., no., pp.1,6, 7-10 Jan. 2013",Wijaya2013,True,,arXiv,Not available,"Matching Demand with Supply in the Smart Grid using Agent-Based
Multiunit Auction",12c2f1f285175fae21698b094ba4a231,http://arxiv.org/abs/1308.4761v1
16534," Recent work has suggested reducing electricity generation cost by cutting the
peak to average ratio (PAR) without reducing the total amount of the loads.
However, most of these proposals rely on consumers' willingness to act. In this
paper, we propose an approach to cut PAR explicitly from the supply side. The
resulting cut loads are then distributed among consumers by means of a
multiunit auction conducted by an intelligent agent on behalf of the
consumer. This approach is also in line with the future vision of the smart
grid to have the demand side matched with the supply side. Experiments suggest
that our approach reduces overall system cost and gives benefit to both
consumers and the energy provider.
",karl aberer,,2013.0,10.1109/COMSNETS.2013.6465595,"2013 Fifth International Conference on Communication Systems and
Networks (COMSNETS), vol., no., pp.1,6, 7-10 Jan. 2013",Wijaya2013,True,,arXiv,Not available,"Matching Demand with Supply in the Smart Grid using Agent-Based
Multiunit Auction",12c2f1f285175fae21698b094ba4a231,http://arxiv.org/abs/1308.4761v1
16535," We consider Vickrey-Clarke-Groves (VCG) auctions for a very general
combinatorial structure, in an average-case setting where item costs are
independent, identically distributed uniform random variables. We prove that
the expected VCG cost is at least double the expected nominal cost, and exactly
double when the desired structure is a basis of a bridgeless matroid. In the
matroid case we further show that, conditioned upon the VCG cost, the
expectation of the nominal cost is exactly half the VCG cost, and we show
several results on variances and covariances among the nominal cost, the VCG
cost, and related quantities. As an application, we find the asymptotic
variance of the VCG cost of the minimum spanning tree in a complete graph with
random edge costs.
",svante janson,,2013.0,,arXiv,Janson2013,True,,arXiv,Not available,VCG Auction Mechanism Cost Expectations and Variances,01b86454da7398df7472fb794a641836,http://arxiv.org/abs/1310.1777v1
16536," We consider Vickrey-Clarke-Groves (VCG) auctions for a very general
combinatorial structure, in an average-case setting where item costs are
independent, identically distributed uniform random variables. We prove that
the expected VCG cost is at least double the expected nominal cost, and exactly
double when the desired structure is a basis of a bridgeless matroid. In the
matroid case we further show that, conditioned upon the VCG cost, the
expectation of the nominal cost is exactly half the VCG cost, and we show
several results on variances and covariances among the nominal cost, the VCG
cost, and related quantities. As an application, we find the asymptotic
variance of the VCG cost of the minimum spanning tree in a complete graph with
random edge costs.
",gregory sorkin,,2013.0,,arXiv,Janson2013,True,,arXiv,Not available,VCG Auction Mechanism Cost Expectations and Variances,01b86454da7398df7472fb794a641836,http://arxiv.org/abs/1310.1777v1
16540," We model evolution according to an asymmetric game as occurring in multiple
finite populations, one for each role in the game, and study the effect of
subjecting individuals to stochastic strategy mutations. We show that, when
these mutations occur sufficiently infrequently, the dynamics over all
population states simplify to an ergodic Markov chain over just the pure
population states (where each population is monomorphic). This makes
calculation of the stationary distribution computationally feasible. The
transition probabilities of this embedded Markov chain involve fixation
probabilities of mutants in single populations. The asymmetry of the underlying
game leads to fixation probabilities that are derived from
frequency-independent selection, in contrast to the analogous single-population
symmetric-game case (Fudenberg and Imhof 2006). This frequency independence is
useful in that it allows us to employ results from the population genetics
literature to calculate the stationary distribution of the evolutionary
process, giving sharper, and sometimes even analytic, results. We demonstrate
the utility of this approach by applying it to a battle-of-the-sexes game, a
Crawford-Sobel signalling game, and the beer-quiche game of Cho and Kreps
(1987).
",carl veller,,2015.0,10.1016/j.jet.2015.12.005,arXiv,Veller2015,True,,arXiv,Not available,Finite-population evolution with rare mutations in asymmetric games,151ab23a28e85eb97a90a25de2a74f7d,http://arxiv.org/abs/1503.06245v2
16541," We consider dynamic pricing schemes in online settings where selfish agents
generate online events. Previous work on online mechanisms has dealt almost
entirely with the goal of maximizing social welfare or revenue in auction
settings. This paper deals with quite general settings and minimizing social
costs. We show that appropriately computed posted prices allow one to achieve
essentially the same performance as the best online algorithm. This holds in a
wide variety of settings. Unlike online algorithms that learn about the event,
and then make enforceable decisions, prices are posted without knowing the
future events or even the current event, and are thus inherently dominant
strategy incentive compatible.
In particular we show that one can give efficient posted price mechanisms for
metrical task systems, some instances of the $k$-server problem, and metrical
matching problems. We give both deterministic and randomized algorithms. Such
posted price mechanisms decrease the social cost dramatically over selfish
behavior where no decision incurs a charge. One alluring application of this is
reducing the social cost of free parking exponentially.
",ilan cohen,,2015.0,,arXiv,Cohen2015,True,,arXiv,Not available,Pricing Online Decisions: Beyond Auctions,aaf8e816d7ef98a1b1425fc9a3d403d0,http://arxiv.org/abs/1504.01093v1
16542," We consider dynamic pricing schemes in online settings where selfish agents
generate online events. Previous work on online mechanisms has dealt almost
entirely with the goal of maximizing social welfare or revenue in auction
settings. This paper deals with quite general settings and minimizing social
costs. We show that appropriately computed posted prices allow one to achieve
essentially the same performance as the best online algorithm. This holds in a
wide variety of settings. Unlike online algorithms that learn about the event,
and then make enforceable decisions, prices are posted without knowing the
future events or even the current event, and are thus inherently dominant
strategy incentive compatible.
In particular we show that one can give efficient posted price mechanisms for
metrical task systems, some instances of the $k$-server problem, and metrical
matching problems. We give both deterministic and randomized algorithms. Such
posted price mechanisms decrease the social cost dramatically over selfish
behavior where no decision incurs a charge. One alluring application of this is
reducing the social cost of free parking exponentially.
",alon eden,,2015.0,,arXiv,Cohen2015,True,,arXiv,Not available,Pricing Online Decisions: Beyond Auctions,aaf8e816d7ef98a1b1425fc9a3d403d0,http://arxiv.org/abs/1504.01093v1
16543," We consider dynamic pricing schemes in online settings where selfish agents
generate online events. Previous work on online mechanisms has dealt almost
entirely with the goal of maximizing social welfare or revenue in auction
settings. This paper deals with quite general settings and minimizing social
costs. We show that appropriately computed posted prices allow one to achieve
essentially the same performance as the best online algorithm. This holds in a
wide variety of settings. Unlike online algorithms that learn about the event,
and then make enforceable decisions, prices are posted without knowing the
future events or even the current event, and are thus inherently dominant
strategy incentive compatible.
In particular we show that one can give efficient posted price mechanisms for
metrical task systems, some instances of the $k$-server problem, and metrical
matching problems. We give both deterministic and randomized algorithms. Such
posted price mechanisms decrease the social cost dramatically over selfish
behavior where no decision incurs a charge. One alluring application of this is
reducing the social cost of free parking exponentially.
",amos fiat,,2015.0,,arXiv,Cohen2015,True,,arXiv,Not available,Pricing Online Decisions: Beyond Auctions,aaf8e816d7ef98a1b1425fc9a3d403d0,http://arxiv.org/abs/1504.01093v1
16544," We consider dynamic pricing schemes in online settings where selfish agents
generate online events. Previous work on online mechanisms has dealt almost
entirely with the goal of maximizing social welfare or revenue in auction
settings. This paper deals with quite general settings and minimizing social
costs. We show that appropriately computed posted prices allow one to achieve
essentially the same performance as the best online algorithm. This holds in a
wide variety of settings. Unlike online algorithms that learn about the event,
and then make enforceable decisions, prices are posted without knowing the
future events or even the current event, and are thus inherently dominant
strategy incentive compatible.
In particular we show that one can give efficient posted price mechanisms for
metrical task systems, some instances of the $k$-server problem, and metrical
matching problems. We give both deterministic and randomized algorithms. Such
posted price mechanisms decrease the social cost dramatically over selfish
behavior where no decision incurs a charge. One alluring application of this is
reducing the social cost of free parking exponentially.
",lukasz jez,,2015.0,,arXiv,Cohen2015,True,,arXiv,Not available,Pricing Online Decisions: Beyond Auctions,aaf8e816d7ef98a1b1425fc9a3d403d0,http://arxiv.org/abs/1504.01093v1
16545," We introduce robust learning equilibrium. The idea of learning equilibrium is
that learning algorithms in multi-agent systems should themselves be in
equilibrium rather than only lead to equilibrium. That is, learning equilibrium
is immune to strategic deviations: Every agent is better off using its
prescribed learning algorithm, if all other agents follow their algorithms,
regardless of the unknown state of the environment. However, a learning
equilibrium may not be immune to non-strategic mistakes. For example, if for a
certain period of time there is a failure in the monitoring devices (e.g., the
correct input does not reach the agents), then it may not be in equilibrium to
follow the algorithm after the devices are corrected. A robust learning
equilibrium is also immune to such non-strategic mistakes. The existence of
(robust) learning equilibrium is especially challenging when the monitoring
devices are 'weak'. That is, the information available to each agent at each
stage is limited. We initiate a study of robust learning equilibrium with
general monitoring structure and apply it to the context of auctions. We prove
the existence of robust learning equilibrium in repeated first-price auctions,
and discuss its properties.
",itai ashlagi,,2012.0,,arXiv,Ashlagi2012,True,,arXiv,Not available,Robust Learning Equilibrium,46850abcaeae252fc678dc2c21786a53,http://arxiv.org/abs/1206.6826v1
16546," We introduce robust learning equilibrium. The idea of learning equilibrium is
that learning algorithms in multi-agent systems should themselves be in
equilibrium rather than only lead to equilibrium. That is, learning equilibrium
is immune to strategic deviations: Every agent is better off using its
prescribed learning algorithm, if all other agents follow their algorithms,
regardless of the unknown state of the environment. However, a learning
equilibrium may not be immune to non-strategic mistakes. For example, if for a
certain period of time there is a failure in the monitoring devices (e.g., the
correct input does not reach the agents), then it may not be in equilibrium to
follow the algorithm after the devices are corrected. A robust learning
equilibrium is also immune to such non-strategic mistakes. The existence of
(robust) learning equilibrium is especially challenging when the monitoring
devices are 'weak'. That is, the information available to each agent at each
stage is limited. We initiate a study of robust learning equilibrium with
general monitoring structure and apply it to the context of auctions. We prove
the existence of robust learning equilibrium in repeated first-price auctions,
and discuss its properties.
",dov monderer,,2012.0,,arXiv,Ashlagi2012,True,,arXiv,Not available,Robust Learning Equilibrium,46850abcaeae252fc678dc2c21786a53,http://arxiv.org/abs/1206.6826v1
16547," We introduce robust learning equilibrium. The idea of learning equilibrium is
that learning algorithms in multi-agent systems should themselves be in
equilibrium rather than only lead to equilibrium. That is, learning equilibrium
is immune to strategic deviations: Every agent is better off using its
prescribed learning algorithm, if all other agents follow their algorithms,
regardless of the unknown state of the environment. However, a learning
equilibrium may not be immune to non-strategic mistakes. For example, if for a
certain period of time there is a failure in the monitoring devices (e.g., the
correct input does not reach the agents), then it may not be in equilibrium to
follow the algorithm after the devices are corrected. A robust learning
equilibrium is also immune to such non-strategic mistakes. The existence of
(robust) learning equilibrium is especially challenging when the monitoring
devices are 'weak'. That is, the information available to each agent at each
stage is limited. We initiate a study of robust learning equilibrium with
general monitoring structure and apply it to the context of auctions. We prove
the existence of robust learning equilibrium in repeated first-price auctions,
and discuss its properties.
",moshe tennenholtz,,2012.0,,arXiv,Ashlagi2012,True,,arXiv,Not available,Robust Learning Equilibrium,46850abcaeae252fc678dc2c21786a53,http://arxiv.org/abs/1206.6826v1
16548," Using data obtained in a controlled ad-auction experiment that we ran, we
evaluate the regret-based approach to econometrics that was recently suggested
by Nekipelov, Syrgkanis, and Tardos (EC 2015). We found that despite the weak
regret-based assumptions, the results were (at least) as accurate as those
obtained using classic equilibrium-based assumptions. En route we studied to
what extent humans actually minimize regret in our ad auction, and found a
significant difference between the ""high types"" (players with a high valuation)
who indeed rationally minimized regret and the ""low types"" who significantly
overbid. We suggest that correcting for these biases and adjusting the
regret-based econometric method may improve the accuracy of estimated values.
",noam nisan,,2016.0,10.1145/3038912.3052621,arXiv,Nisan2016,True,,arXiv,Not available,An Experimental Evaluation of Regret-Based Econometrics,442641c200499fbcbcc239815cfc291d,http://arxiv.org/abs/1605.03838v2
16549," Using data obtained in a controlled ad-auction experiment that we ran, we
evaluate the regret-based approach to econometrics that was recently suggested
by Nekipelov, Syrgkanis, and Tardos (EC 2015). We found that despite the weak
regret-based assumptions, the results were (at least) as accurate as those
obtained using classic equilibrium-based assumptions. En route we studied to
what extent humans actually minimize regret in our ad auction, and found a
significant difference between the ""high types"" (players with a high valuation)
who indeed rationally minimized regret and the ""low types"" who significantly
overbid. We suggest that correcting for these biases and adjusting the
regret-based econometric method may improve the accuracy of estimated values.
",gali noti,,2016.0,10.1145/3038912.3052621,arXiv,Nisan2016,True,,arXiv,Not available,An Experimental Evaluation of Regret-Based Econometrics,442641c200499fbcbcc239815cfc291d,http://arxiv.org/abs/1605.03838v2
16550," Online double auctions (DAs) model a dynamic two-sided matching problem with
private information and self-interest, and are relevant for dynamic resource
and task allocation problems. We present a general method to design truthful
DAs, such that no agent can benefit from misreporting its arrival time,
duration, or value. The family of DAs is parameterized by a pricing rule, and
includes a generalization of McAfee's truthful DA to this dynamic setting. We
present an empirical study of the allocative surplus and agent
surplus for a number of different DAs. Our results illustrate that dynamic
pricing rules are important to provide good market efficiency for markets with
high volatility or low volume.
",jonathan bredin,,2012.0,,arXiv,Bredin2012,True,,arXiv,Not available,Models for Truthful Online Double Auctions,4317e3466310ada4542c96752d79ff62,http://arxiv.org/abs/1207.1360v1
16551," We model evolution according to an asymmetric game as occurring in multiple
finite populations, one for each role in the game, and study the effect of
subjecting individuals to stochastic strategy mutations. We show that, when
these mutations occur sufficiently infrequently, the dynamics over all
population states simplify to an ergodic Markov chain over just the pure
population states (where each population is monomorphic). This makes
calculation of the stationary distribution computationally feasible. The
transition probabilities of this embedded Markov chain involve fixation
probabilities of mutants in single populations. The asymmetry of the underlying
game leads to fixation probabilities that are derived from
frequency-independent selection, in contrast to the analogous single-population
symmetric-game case (Fudenberg and Imhof 2006). This frequency independence is
useful in that it allows us to employ results from the population genetics
literature to calculate the stationary distribution of the evolutionary
process, giving sharper, and sometimes even analytic, results. We demonstrate
the utility of this approach by applying it to a battle-of-the-sexes game, a
Crawford-Sobel signalling game, and the beer-quiche game of Cho and Kreps
(1987).
",laura hayward,,2015.0,10.1016/j.jet.2015.12.005,arXiv,Veller2015,True,,arXiv,Not available,Finite-population evolution with rare mutations in asymmetric games,151ab23a28e85eb97a90a25de2a74f7d,http://arxiv.org/abs/1503.06245v2
16552," Online double auctions (DAs) model a dynamic two-sided matching problem with
private information and self-interest, and are relevant for dynamic resource
and task allocation problems. We present a general method to design truthful
DAs, such that no agent can benefit from misreporting its arrival time,
duration, or value. The family of DAs is parameterized by a pricing rule, and
includes a generalization of McAfee's truthful DA to this dynamic setting. We
present an empirical study of the allocative surplus and agent
surplus for a number of different DAs. Our results illustrate that dynamic
pricing rules are important to provide good market efficiency for markets with
high volatility or low volume.
",david parkes,,2012.0,,arXiv,Bredin2012,True,,arXiv,Not available,Models for Truthful Online Double Auctions,4317e3466310ada4542c96752d79ff62,http://arxiv.org/abs/1207.1360v1
16553," We study individual rational, Pareto optimal, and incentive compatible
mechanisms for auctions with heterogeneous items and budget limits. For
multi-dimensional valuations we show that there can be no deterministic
mechanism with these properties for divisible items. We use this to show that
there can also be no randomized mechanism that achieves this for either
divisible or indivisible items. For single-dimensional valuations we show that
there can be no deterministic mechanism with these properties for indivisible
items, but that there is a randomized mechanism that achieves this for either
divisible or indivisible items. The impossibility results hold for public
budgets, while the mechanism allows private budgets, which is in both cases the
harder variant to show. While all positive results are polynomial-time
algorithms, all negative results hold independent of complexity considerations.
",paul duetting,,2012.0,,arXiv,Duetting2012,True,,arXiv,Not available,Auctions with Heterogeneous Items and Budget Limits,49f5df7704e237fe6b24c88de3effc21,http://arxiv.org/abs/1209.6448v1
16554," We study individual rational, Pareto optimal, and incentive compatible
mechanisms for auctions with heterogeneous items and budget limits. For
multi-dimensional valuations we show that there can be no deterministic
mechanism with these properties for divisible items. We use this to show that
there can also be no randomized mechanism that achieves this for either
divisible or indivisible items. For single-dimensional valuations we show that
there can be no deterministic mechanism with these properties for indivisible
items, but that there is a randomized mechanism that achieves this for either
divisible or indivisible items. The impossibility results hold for public
budgets, while the mechanism allows private budgets, which is in both cases the
harder variant to show. While all positive results are polynomial-time
algorithms, all negative results hold independent of complexity considerations.
",monika henzinger,,2012.0,,arXiv,Duetting2012,True,,arXiv,Not available,Auctions with Heterogeneous Items and Budget Limits,49f5df7704e237fe6b24c88de3effc21,http://arxiv.org/abs/1209.6448v1
16555," We study individual rational, Pareto optimal, and incentive compatible
mechanisms for auctions with heterogeneous items and budget limits. For
multi-dimensional valuations we show that there can be no deterministic
mechanism with these properties for divisible items. We use this to show that
there can also be no randomized mechanism that achieves this for either
divisible or indivisible items. For single-dimensional valuations we show that
there can be no deterministic mechanism with these properties for indivisible
items, but that there is a randomized mechanism that achieves this for either
divisible or indivisible items. The impossibility results hold for public
budgets, while the mechanism allows private budgets, which is in both cases the
harder variant to show. While all positive results are polynomial-time
algorithms, all negative results hold independent of complexity considerations.
",martin starnberger,,2012.0,,arXiv,Duetting2012,True,,arXiv,Not available,Auctions with Heterogeneous Items and Budget Limits,49f5df7704e237fe6b24c88de3effc21,http://arxiv.org/abs/1209.6448v1
16556," It was recently shown in [http://arxiv.org/abs/1207.5518] that revenue
optimization can be computationally efficiently reduced to welfare optimization
in all multi-dimensional Bayesian auction problems with arbitrary (possibly
combinatorial) feasibility constraints and independent additive bidders with
arbitrary (possibly combinatorial) demand constraints. This reduction provides
a poly-time solution to the optimal mechanism design problem in all auction
settings where welfare optimization can be solved efficiently, but it is
fragile to approximation and cannot provide solutions to settings where welfare
maximization can only be tractably approximated. In this paper, we extend the
reduction to accommodate approximation algorithms, providing an approximation
preserving reduction from (truthful) revenue maximization to (not necessarily
truthful) welfare maximization. The mechanisms output by our reduction choose
allocations via black-box calls to welfare approximation on randomly selected
inputs, thereby generalizing also our earlier structural results on optimal
multi-dimensional mechanisms to approximately optimal mechanisms. Unlike
[http://arxiv.org/abs/1207.5518], our results here are obtained through novel
uses of the Ellipsoid algorithm and other optimization techniques over {\em
non-convex regions}.
",yang cai,,2013.0,,arXiv,Cai2013,True,,arXiv,Not available,"Reducing Revenue to Welfare Maximization: Approximation Algorithms and
other Generalizations",47ac4f128d25c93f75d2cedc8278c9af,http://arxiv.org/abs/1305.4000v1
16557," It was recently shown in [http://arxiv.org/abs/1207.5518] that revenue
optimization can be computationally efficiently reduced to welfare optimization
in all multi-dimensional Bayesian auction problems with arbitrary (possibly
combinatorial) feasibility constraints and independent additive bidders with
arbitrary (possibly combinatorial) demand constraints. This reduction provides
a poly-time solution to the optimal mechanism design problem in all auction
settings where welfare optimization can be solved efficiently, but it is
fragile to approximation and cannot provide solutions to settings where welfare
maximization can only be tractably approximated. In this paper, we extend the
reduction to accommodate approximation algorithms, providing an approximation
preserving reduction from (truthful) revenue maximization to (not necessarily
truthful) welfare maximization. The mechanisms output by our reduction choose
allocations via black-box calls to welfare approximation on randomly selected
inputs, thereby generalizing also our earlier structural results on optimal
multi-dimensional mechanisms to approximately optimal mechanisms. Unlike
[http://arxiv.org/abs/1207.5518], our results here are obtained through novel
uses of the Ellipsoid algorithm and other optimization techniques over {\em
non-convex regions}.
",constantinos daskalakis,,2013.0,,arXiv,Cai2013,True,,arXiv,Not available,"Reducing Revenue to Welfare Maximization: Approximation Algorithms and
other Generalizations",47ac4f128d25c93f75d2cedc8278c9af,http://arxiv.org/abs/1305.4000v1
16558," It was recently shown in [http://arxiv.org/abs/1207.5518] that revenue
optimization can be computationally efficiently reduced to welfare optimization
in all multi-dimensional Bayesian auction problems with arbitrary (possibly
combinatorial) feasibility constraints and independent additive bidders with
arbitrary (possibly combinatorial) demand constraints. This reduction provides
a poly-time solution to the optimal mechanism design problem in all auction
settings where welfare optimization can be solved efficiently, but it is
fragile to approximation and cannot provide solutions to settings where welfare
maximization can only be tractably approximated. In this paper, we extend the
reduction to accommodate approximation algorithms, providing an approximation
preserving reduction from (truthful) revenue maximization to (not necessarily
truthful) welfare maximization. The mechanisms output by our reduction choose
allocations via black-box calls to welfare approximation on randomly selected
inputs, thereby generalizing also our earlier structural results on optimal
multi-dimensional mechanisms to approximately optimal mechanisms. Unlike
[http://arxiv.org/abs/1207.5518], our results here are obtained through novel
uses of the Ellipsoid algorithm and other optimization techniques over {\em
non-convex regions}.
",s. weinberg,,2013.0,,arXiv,Cai2013,True,,arXiv,Not available,"Reducing Revenue to Welfare Maximization: Approximation Algorithms and
other Generalizations",47ac4f128d25c93f75d2cedc8278c9af,http://arxiv.org/abs/1305.4000v1
16559," This brief paper describes the single-player card game called ""Perpetual
Motion"" and reports on a computational analysis of the game's outcome. The
analysis follows a Monte Carlo methodology based on a sample of 10,000 randomly
generated games. The key result is that 54.55% +/- 0.89% of games can be
completed (by a patient player!) but that the remaining 45.45% result in
non-terminating cycles. The lengths of these non-terminating cycles leave some
outstanding questions.
",matthew clarke,,2009.0,,arXiv,Clarke2009,True,,arXiv,Not available,"On the Chances of Completing the Game of ""Perpetual Motion""",1689fabd98a18b8e36c6dc5f3a6bb2a2,http://arxiv.org/abs/0907.1955v1
16560," We present a direct reduction from k-player games to 2-player games that
preserves approximate Nash equilibrium. Previously, the computational
equivalence of computing approximate Nash equilibrium in k-player and 2-player
games was established via an indirect reduction. This included a sequence of
works defining the complexity class PPAD, identifying complete problems for
this class, showing that computing approximate Nash equilibrium for k-player
games is in PPAD, and reducing a PPAD-complete problem to computing approximate
Nash equilibrium for 2-player games. Our direct reduction makes no use of the
concept of PPAD, thus eliminating some of the difficulties involved in
following the known indirect reduction.
",uriel feige,,2010.0,10.1007/978-3-642-16170-4_13,arXiv,Feige2010,True,,arXiv,Not available,"A Direct Reduction from k-Player to 2-Player Approximate Nash
Equilibrium",414beac6d20cba0920116a9305028f32,http://arxiv.org/abs/1007.3886v1
16561," We present a direct reduction from k-player games to 2-player games that
preserves approximate Nash equilibrium. Previously, the computational
equivalence of computing approximate Nash equilibrium in k-player and 2-player
games was established via an indirect reduction. This included a sequence of
works defining the complexity class PPAD, identifying complete problems for
this class, showing that computing approximate Nash equilibrium for k-player
games is in PPAD, and reducing a PPAD-complete problem to computing approximate
Nash equilibrium for 2-player games. Our direct reduction makes no use of the
concept of PPAD, thus eliminating some of the difficulties involved in
following the known indirect reduction.
",inbal talgam-cohen,,2010.0,10.1007/978-3-642-16170-4_13,arXiv,Feige2010,True,,arXiv,Not available,"A Direct Reduction from k-Player to 2-Player Approximate Nash
Equilibrium",414beac6d20cba0920116a9305028f32,http://arxiv.org/abs/1007.3886v1
16562," In this paper, we study the distribution and behaviour of internal equilibria
in a $d$-player $n$-strategy random evolutionary game where the game payoff
matrix is generated from normal distributions. The study of this paper reveals
and exploits interesting connections between evolutionary game theory and
random polynomial theory. The main novelties of the paper are some qualitative
and quantitative results on the expected density, $f_{n,d}$, and the expected
number, $E(n,d)$, of (stable) internal equilibria. Firstly, we show that in
multi-player two-strategy games, they behave asymptotically as $\sqrt{d-1}$ as
$d$ is sufficiently large. Secondly, we prove that they are monotone functions
of $d$. We also make a conjecture for games with more than two strategies.
Thirdly, we provide numerical simulations for our analytical results and to
support the conjecture. As consequences of our analysis, some qualitative and
quantitative results on the distribution of zeros of a random Bernstein
polynomial are also obtained.
",manh duong,,2015.0,,arXiv,Duong2015,True,,arXiv,Not available,"Analysis of the expected density of internal equilibria in random
evolutionary multi-player multi-strategy games",195738357aa8fe703aeba1f2b92817d9,http://arxiv.org/abs/1505.04676v3
16563," We study pure-strategy Nash equilibria in multi-player concurrent
deterministic games, for a variety of preference relations. We provide a novel
construction, called the suspect game, which transforms a multi-player
concurrent game into a two-player turn-based game which turns Nash equilibria
into winning strategies (for some objective that depends on the preference
relations of the players in the original game). We use that transformation to
design algorithms for computing Nash equilibria in finite games, which in most
cases have optimal worst-case complexity, for large classes of preference
relations. This includes the purely qualitative framework, where each player
has a single omega-regular objective that she wants to satisfy, but also the
larger class of semi-quantitative objectives, where each player has several
omega-regular objectives equipped with a preorder (for instance, a player may
want to satisfy all her objectives, or to maximise the number of objectives
that she achieves).
",patricia bouyer,,2015.0,10.2168/LMCS-11(2:9)2015,"Logical Methods in Computer Science, Volume 11, Issue 2 (June 19,
2015) lmcs:1569",Bouyer2015,True,,arXiv,Not available,Pure Nash Equilibria in Concurrent Deterministic Games,a6b94c317a0af8dc06d94788e7b239f9,http://arxiv.org/abs/1503.06826v2
16564," We describe an algorithm for computing best response strategies in a class of
two-player infinite games of incomplete information, defined by payoffs
piecewise linear in agents' types and actions, conditional on linear
comparisons of agents' actions. We show that this class includes many
well-known games including a variety of auctions and a novel allocation game.
In some cases, the best-response algorithm can be iterated to compute
Bayes-Nash equilibria. We demonstrate the efficiency of our approach on
existing and new games.
",daniel reeves,,2012.0,,arXiv,Reeves2012,True,,arXiv,Not available,"Computing Best-Response Strategies in Infinite Games of Incomplete
Information",067fb14ef316953fb2525cc774bd6388,http://arxiv.org/abs/1207.4171v1
16565," We describe an algorithm for computing best response strategies in a class of
two-player infinite games of incomplete information, defined by payoffs
piecewise linear in agents' types and actions, conditional on linear
comparisons of agents' actions. We show that this class includes many
well-known games including a variety of auctions and a novel allocation game.
In some cases, the best-response algorithm can be iterated to compute
Bayes-Nash equilibria. We demonstrate the efficiency of our approach on
existing and new games.
",michael wellman,,2012.0,,arXiv,Reeves2012,True,,arXiv,Not available,"Computing Best-Response Strategies in Infinite Games of Incomplete
Information",067fb14ef316953fb2525cc774bd6388,http://arxiv.org/abs/1207.4171v1
16566," Throughout the history of games, representing the abilities of the various
agents acting on behalf of the players has been a central concern. With
increasingly sophisticated games emerging, these simulations have become more
realistic, but the underlying mechanisms are still, to a large extent, of an ad
hoc nature. This paper proposes using a logistic model from psychometrics as a
unified mechanism for task resolution in simulation-oriented games.
",magnus hetland,,2013.0,10.1007/978-3-642-40790-1_22,"Serious Games Development and Applications. Lecture Notes in
Computer Science Volume 8101, 2013, pp 226-238",Hetland2013,True,,arXiv,Not available,Simulating Ability: Representing Skills in Games,0df4d4850fe3737623f4d085a1457f25,http://arxiv.org/abs/1307.0201v2
16567," In this paper we survey various notions of symmetry for finite strategic-form
games; show that game bijections and game isomorphisms form groupoids;
introduce matchings as a convenient characterisation of strategy triviality;
and outline how to construct and partially order parameterised symmetric games
with numerous examples that range all combinations of surveyed symmetry
notions.
",nicholas ham,,2013.0,,arXiv,Ham2013,True,,arXiv,Not available,Notions of Symmetry for Finite Strategic-Form Games,d5da56535bfabb98313576b9d3b4640c,http://arxiv.org/abs/1311.4766v4
16568," The optimal value computation for turn-based stochastic games with
reachability objectives, also known as simple stochastic games, is one of the
few problems in $NP \cap coNP$ which are not known to be in $P$. However, there
are some cases where these games can be easily solved, as for instance when the
underlying graph is acyclic. In this work, we try to extend this tractability
to several classes of games that can be thought as ""almost"" acyclic. We give
some fixed-parameter tractable or polynomial algorithms in terms of different
parameters such as the number of cycles or the size of the minimal feedback
vertex set.
",david auger,,2014.0,,arXiv,Auger2014,True,,arXiv,Not available,Finding Optimal Strategies of Almost Acyclic Simple Stochatic Games,98368c309ff9842d22b62fb014db5764,http://arxiv.org/abs/1402.0471v1
16569," The optimal value computation for turn-based stochastic games with
reachability objectives, also known as simple stochastic games, is one of the
few problems in $NP \cap coNP$ which are not known to be in $P$. However, there
are some cases where these games can be easily solved, as for instance when the
underlying graph is acyclic. In this work, we try to extend this tractability
to several classes of games that can be thought as ""almost"" acyclic. We give
some fixed-parameter tractable or polynomial algorithms in terms of different
parameters such as the number of cycles or the size of the minimal feedback
vertex set.
",pierre coucheney,,2014.0,,arXiv,Auger2014,True,,arXiv,Not available,Finding Optimal Strategies of Almost Acyclic Simple Stochatic Games,98368c309ff9842d22b62fb014db5764,http://arxiv.org/abs/1402.0471v1
16570," The optimal value computation for turn-based stochastic games with
reachability objectives, also known as simple stochastic games, is one of the
few problems in $NP \cap coNP$ which are not known to be in $P$. However, there
are some cases where these games can be easily solved, as for instance when the
underlying graph is acyclic. In this work, we try to extend this tractability
to several classes of games that can be thought as ""almost"" acyclic. We give
some fixed-parameter tractable or polynomial algorithms in terms of different
parameters such as the number of cycles or the size of the minimal feedback
vertex set.
",yann strozecki,,2014.0,,arXiv,Auger2014,True,,arXiv,Not available,Finding Optimal Strategies of Almost Acyclic Simple Stochatic Games,98368c309ff9842d22b62fb014db5764,http://arxiv.org/abs/1402.0471v1
16571," We propose a benchmark suite for parity games that includes all benchmarks
that have been used in the literature, and make it available online. We give an
overview of the parity games, including a description of how they have been
generated. We also describe structural properties of parity games, and using
these properties we show that our benchmarks are representative. With this work
we provide a starting point for further experimentation with parity games.
",jeroen keiren,,2014.0,,arXiv,Keiren2014,True,,arXiv,Not available,Benchmarks for Parity Games (extended version),7426be34a963fae7dc4314b074b913aa,http://arxiv.org/abs/1407.3121v2
16572," We introduce the concept of budget games. Players choose a set of tasks and
each task has a certain demand on every resource in the game. Each resource has
a budget. If the budget is not enough to satisfy the sum of all demands, it has
to be shared between the tasks. We study strategic budget games, where the
budget is shared proportionally. We also consider a variant in which the order
of the strategic decisions influences the distribution of the budgets. The
complexity of the optimal solution as well as existence, complexity and quality
of equilibria are analyzed. Finally, we show that the time an ordered budget
game needs to converge towards an equilibrium may be exponential.
",maximilian drees,,2014.0,,arXiv,Drees2014,True,,arXiv,Not available,Budget-restricted utility games with ordered strategic decisions,c4127866338b15e4bd0315ff6129e21e,http://arxiv.org/abs/1407.3123v1
16573," We introduce the concept of budget games. Players choose a set of tasks and
each task has a certain demand on every resource in the game. Each resource has
a budget. If the budget is not enough to satisfy the sum of all demands, it has
to be shared between the tasks. We study strategic budget games, where the
budget is shared proportionally. We also consider a variant in which the order
of the strategic decisions influences the distribution of the budgets. The
complexity of the optimal solution as well as existence, complexity and quality
of equilibria are analyzed. Finally, we show that the time an ordered budget
game needs to converge towards an equilibrium may be exponential.
",soren riechers,,2014.0,,arXiv,Drees2014,True,,arXiv,Not available,Budget-restricted utility games with ordered strategic decisions,c4127866338b15e4bd0315ff6129e21e,http://arxiv.org/abs/1407.3123v1
16574," We study pure-strategy Nash equilibria in multi-player concurrent
deterministic games, for a variety of preference relations. We provide a novel
construction, called the suspect game, which transforms a multi-player
concurrent game into a two-player turn-based game which turns Nash equilibria
into winning strategies (for some objective that depends on the preference
relations of the players in the original game). We use that transformation to
design algorithms for computing Nash equilibria in finite games, which in most
cases have optimal worst-case complexity, for large classes of preference
relations. This includes the purely qualitative framework, where each player
has a single omega-regular objective that she wants to satisfy, but also the
larger class of semi-quantitative objectives, where each player has several
omega-regular objectives equipped with a preorder (for instance, a player may
want to satisfy all her objectives, or to maximise the number of objectives
that she achieves).
",romain brenguier,,2015.0,10.2168/LMCS-11(2:9)2015,"Logical Methods in Computer Science, Volume 11, Issue 2 (June 19,
2015) lmcs:1569",Bouyer2015,True,,arXiv,Not available,Pure Nash Equilibria in Concurrent Deterministic Games,a6b94c317a0af8dc06d94788e7b239f9,http://arxiv.org/abs/1503.06826v2
16575," We introduce the concept of budget games. Players choose a set of tasks and
each task has a certain demand on every resource in the game. Each resource has
a budget. If the budget is not enough to satisfy the sum of all demands, it has
to be shared between the tasks. We study strategic budget games, where the
budget is shared proportionally. We also consider a variant in which the order
of the strategic decisions influences the distribution of the budgets. The
complexity of the optimal solution as well as existence, complexity and quality
of equilibria are analyzed. Finally, we show that the time an ordered budget
game needs to converge towards an equilibrium may be exponential.
",alexander skopalik,,2014.0,,arXiv,Drees2014,True,,arXiv,Not available,Budget-restricted utility games with ordered strategic decisions,c4127866338b15e4bd0315ff6129e21e,http://arxiv.org/abs/1407.3123v1
16576," An algorithm based on backward induction is devised in order to compute the
optimal sequence of games to be played in Parrondo games. The algorithm can be
used to find the optimal sequence for any finite number of turns or in the
steady state, showing that ABABB... is the sequence with the highest steady
state average gain. The algorithm can also be generalised to find the optimal
adaptive strategy in a multi-player version of the games, where a finite number
of players may choose, at every turn, the game the whole ensemble should play.
",l. dinis,,2014.0,10.1103/PhysRevE.77.021124,"Physical Review E 77, 021124 (2008)",Dinis2014,True,,arXiv,Not available,Optimal sequence for Parrondo games,2293b7aa06c4ae52426814a6b7f13cb1,http://arxiv.org/abs/1409.6497v1
16577," Poset games have been the object of mathematical study for over a century,
but little has been written on the computational complexity of determining
important properties of these games. In this introduction we develop the
fundamentals of combinatorial game theory and focus for the most part on poset
games, of which Nim is perhaps the best-known example. We present the
complexity results known to date, some discovered very recently.
",stephen fenner,,2015.0,,arXiv,Fenner2015,True,,arXiv,Not available,Combinatorial Game Complexity: An Introduction with Poset Games,e7129aa6aa2d7d384a481e8dd4e5e4b7,http://arxiv.org/abs/1505.07416v2
16578," Poset games have been the object of mathematical study for over a century,
but little has been written on the computational complexity of determining
important properties of these games. In this introduction we develop the
fundamentals of combinatorial game theory and focus for the most part on poset
games, of which Nim is perhaps the best-known example. We present the
complexity results known to date, some discovered very recently.
",john rogers,,2015.0,,arXiv,Fenner2015,True,,arXiv,Not available,Combinatorial Game Complexity: An Introduction with Poset Games,e7129aa6aa2d7d384a481e8dd4e5e4b7,http://arxiv.org/abs/1505.07416v2
16579," We provide an exact analytical solution of the Nash equilibrium for $k$-
price auctions. We also introduce a new type of auction and demonstrate that it
has fair solutions other than the second price auctions, therefore paving the
way for replacing second price auctions.
",martin mihelich,,2018.0,,arXiv,Mihelich2018,True,,arXiv,Not available,k-price auctions and Combination auctions,9e5690462e56356f76d38d42dcf1fcd0,http://arxiv.org/abs/1810.03494v2
16580," We provide an exact analytical solution of the Nash equilibrium for $k$-
price auctions. We also introduce a new type of auction and demonstrate that it
has fair solutions other than the second price auctions, therefore paving the
way for replacing second price auctions.
",yan shu,,2018.0,,arXiv,Mihelich2018,True,,arXiv,Not available,k-price auctions and Combination auctions,9e5690462e56356f76d38d42dcf1fcd0,http://arxiv.org/abs/1810.03494v2
16581," We study auctions with severe bounds on the communication allowed: each
bidder may only transmit t bits of information to the auctioneer. We consider
both welfare- and profit-maximizing auctions under this communication
restriction. For both measures, we determine the optimal auction and show that
the loss incurred relative to unconstrained auctions is mild. We prove
non-surprising properties of these kinds of auctions, e.g., that in optimal
mechanisms bidders simply report the interval in which their valuation lies,
as well as some surprising properties, e.g., that asymmetric auctions are
better than symmetric ones and that multi-round auctions reduce the
communication complexity only by a linear factor.
",l. blumrosen,,2011.0,10.1613/jair.2081,"Journal Of Artificial Intelligence Research, Volume 28, pages
233-266, 2007",Blumrosen2011,True,,arXiv,Not available,Auctions with Severely Bounded Communication,a8b1b18247078cc44d008633f4b04527,http://arxiv.org/abs/1110.2733v1
16582," We study auctions with severe bounds on the communication allowed: each
bidder may only transmit t bits of information to the auctioneer. We consider
both welfare- and profit-maximizing auctions under this communication
restriction. For both measures, we determine the optimal auction and show that
the loss incurred relative to unconstrained auctions is mild. We prove
non-surprising properties of these kinds of auctions, e.g., that in optimal
mechanisms bidders simply report the interval in which their valuation lies,
as well as some surprising properties, e.g., that asymmetric auctions are
better than symmetric ones and that multi-round auctions reduce the
communication complexity only by a linear factor.
",n. nisan,,2011.0,10.1613/jair.2081,"Journal Of Artificial Intelligence Research, Volume 28, pages
233-266, 2007",Blumrosen2011,True,,arXiv,Not available,Auctions with Severely Bounded Communication,a8b1b18247078cc44d008633f4b04527,http://arxiv.org/abs/1110.2733v1
16583," We study auctions with severe bounds on the communication allowed: each
bidder may only transmit t bits of information to the auctioneer. We consider
both welfare- and profit-maximizing auctions under this communication
restriction. For both measures, we determine the optimal auction and show that
the loss incurred relative to unconstrained auctions is mild. We prove
non-surprising properties of these kinds of auctions, e.g., that in optimal
mechanisms bidders simply report the interval in which their valuation lies,
as well as some surprising properties, e.g., that asymmetric auctions are
better than symmetric ones and that multi-round auctions reduce the
communication complexity only by a linear factor.
",i. segal,,2011.0,10.1613/jair.2081,"Journal Of Artificial Intelligence Research, Volume 28, pages
233-266, 2007",Blumrosen2011,True,,arXiv,Not available,Auctions with Severely Bounded Communication,a8b1b18247078cc44d008633f4b04527,http://arxiv.org/abs/1110.2733v1
16584," This letter considers the design of an auction mechanism to sell the object
of a seller when the buyers quantize their private value estimates regarding
the object prior to communicating them to the seller. The designed auction
mechanism maximizes the utility of the seller (i.e., the auction is optimal),
prevents buyers from communicating falsified quantized bids (i.e., the auction
is incentive-compatible), and ensures that buyers will participate in the
auction (i.e., the auction is individually-rational). The letter also
investigates the design of the optimal quantization thresholds using which
buyers quantize their private value estimates. Numerical results provide
insights regarding the influence of the quantization thresholds on the auction
mechanism.
",nianxia cao,,2015.0,10.1109/LSP.2016.2604280,arXiv,Cao2015,True,,arXiv,Not available,Optimal Auction Design with Quantized Bids,adde2e9f826851c238afe09601dbe3b2,http://arxiv.org/abs/1509.08496v1
16585," We study pure-strategy Nash equilibria in multi-player concurrent
deterministic games, for a variety of preference relations. We provide a novel
construction, called the suspect game, which transforms a multi-player
concurrent game into a two-player turn-based game which turns Nash equilibria
into winning strategies (for some objective that depends on the preference
relations of the players in the original game). We use that transformation to
design algorithms for computing Nash equilibria in finite games, which in most
cases have optimal worst-case complexity, for large classes of preference
relations. This includes the purely qualitative framework, where each player
has a single omega-regular objective that she wants to satisfy, but also the
larger class of semi-quantitative objectives, where each player has several
omega-regular objectives equipped with a preorder (for instance, a player may
want to satisfy all her objectives, or to maximise the number of objectives
that she achieves).
",nicolas markey,,2015.0,10.2168/LMCS-11(2:9)2015,"Logical Methods in Computer Science, Volume 11, Issue 2 (June 19,
2015) lmcs:1569",Bouyer2015,True,,arXiv,Not available,Pure Nash Equilibria in Concurrent Deterministic Games,a6b94c317a0af8dc06d94788e7b239f9,http://arxiv.org/abs/1503.06826v2
16586," This letter considers the design of an auction mechanism to sell the object
of a seller when the buyers quantize their private value estimates regarding
the object prior to communicating them to the seller. The designed auction
mechanism maximizes the utility of the seller (i.e., the auction is optimal),
prevents buyers from communicating falsified quantized bids (i.e., the auction
is incentive-compatible), and ensures that buyers will participate in the
auction (i.e., the auction is individually-rational). The letter also
investigates the design of the optimal quantization thresholds using which
buyers quantize their private value estimates. Numerical results provide
insights regarding the influence of the quantization thresholds on the auction
mechanism.
",swastik brahma,,2015.0,10.1109/LSP.2016.2604280,arXiv,Cao2015,True,,arXiv,Not available,Optimal Auction Design with Quantized Bids,adde2e9f826851c238afe09601dbe3b2,http://arxiv.org/abs/1509.08496v1
16587," This letter considers the design of an auction mechanism to sell the object
of a seller when the buyers quantize their private value estimates regarding
the object prior to communicating them to the seller. The designed auction
mechanism maximizes the utility of the seller (i.e., the auction is optimal),
prevents buyers from communicating falsified quantized bids (i.e., the auction
is incentive-compatible), and ensures that buyers will participate in the
auction (i.e., the auction is individually-rational). The letter also
investigates the design of the optimal quantization thresholds using which
buyers quantize their private value estimates. Numerical results provide
insights regarding the influence of the quantization thresholds on the auction
mechanism.
",pramod varshney,,2015.0,10.1109/LSP.2016.2604280,arXiv,Cao2015,True,,arXiv,Not available,Optimal Auction Design with Quantized Bids,adde2e9f826851c238afe09601dbe3b2,http://arxiv.org/abs/1509.08496v1
16588," We introduce a novel characterization of all Walrasian price vectors in terms
of forbidden over- and under demanded sets for monotone gross substitute
combinatorial auctions.
For ascending and descending auctions we suggest a universal framework for
finding the minimum or maximum Walrasian price vectors for monotone gross
substitute combinatorial auctions. An ascending (descending) auction is
guaranteed to find the minimum (maximum) Walrasian if and only if it follows
the suggested framework.
",oren ben-zwi,,2016.0,,arXiv,Ben-Zwi2016,True,,arXiv,Not available,Walrasian's Characterization and a Universal Ascending Auction,5240bf4f1cd8dffe3cb81174390cd88e,http://arxiv.org/abs/1605.03826v1
16589," In auction theory, cryptography has been used to achieve anonymity of the
participants, security and privacy of the bids, secure computation and to
simulate mediator (auctioneer). Auction theory focuses on revenue and
Cryptography focuses on security and privacy. Involving Cryptography at base
level, to enhance revenue gives entirely new perspective and insight to Auction
theory, thereby achieving the core goals of auction theory. In this report, we
try to investigate an interesting field of study in Auction Theory using
Cryptographic primitives.
",amjed shareef,,2012.0,,arXiv,Shareef2012,True,,arXiv,Not available,"Short Report on: Possible directions to Auctions with Cryptographic
pre-play",c04173a7604b7b653f8c49c8d20b09ba,http://arxiv.org/abs/1210.6450v1
16590," We propose a uniform approach for the design and analysis of prior-free
competitive auctions and online auctions. Our philosophy is to view the
benchmark function as a variable parameter of the model and study a broad class
of functions instead of an individual target benchmark. We consider a multitude
of well-studied auction settings, and improve upon a few previous results.
(1) Multi-unit auctions. Given a $\beta$-competitive unlimited supply
auction, the best previously known multi-unit auction is $2\beta$-competitive.
We design a $(1+\beta)$-competitive auction reducing the ratio from $4.84$ to
$3.24$. These results carry over to matroid and position auctions.
(2) General downward-closed environments. We design a $6.5$-competitive
auction improving upon the ratio of $7.5$. Our auction is noticeably simpler
than the previous best one.
(3) Unlimited supply online auctions. Our analysis yields an auction with a
competitive ratio of $4.12$, which significantly narrows the margin of
$[4,4.84]$ previously known for this problem.
A particularly important tool in our analysis is a simple decomposition
lemma, which allows us to bound the competitive ratio against a sum of
benchmark functions. We use this lemma in a ""divide and conquer"" fashion by
dividing the target benchmark into the sum of simpler functions.
",ning chen,,2014.0,,arXiv,Chen2014,True,,arXiv,Not available,Competitive analysis via benchmark decomposition,5861bdcebe37402d5f701d88bd5edf22,http://arxiv.org/abs/1411.2079v1
16591," We propose a uniform approach for the design and analysis of prior-free
competitive auctions and online auctions. Our philosophy is to view the
benchmark function as a variable parameter of the model and study a broad class
of functions instead of an individual target benchmark. We consider a multitude
of well-studied auction settings, and improve upon a few previous results.
(1) Multi-unit auctions. Given a $\beta$-competitive unlimited supply
auction, the best previously known multi-unit auction is $2\beta$-competitive.
We design a $(1+\beta)$-competitive auction reducing the ratio from $4.84$ to
$3.24$. These results carry over to matroid and position auctions.
(2) General downward-closed environments. We design a $6.5$-competitive
auction improving upon the ratio of $7.5$. Our auction is noticeably simpler
than the previous best one.
(3) Unlimited supply online auctions. Our analysis yields an auction with a
competitive ratio of $4.12$, which significantly narrows the margin of
$[4,4.84]$ previously known for this problem.
A particularly important tool in our analysis is a simple decomposition
lemma, which allows us to bound the competitive ratio against a sum of
benchmark functions. We use this lemma in a ""divide and conquer"" fashion by
dividing the target benchmark into the sum of simpler functions.
",nick gravin,,2014.0,,arXiv,Chen2014,True,,arXiv,Not available,Competitive analysis via benchmark decomposition,5861bdcebe37402d5f701d88bd5edf22,http://arxiv.org/abs/1411.2079v1
16592," We propose a uniform approach for the design and analysis of prior-free
competitive auctions and online auctions. Our philosophy is to view the
benchmark function as a variable parameter of the model and study a broad class
of functions instead of an individual target benchmark. We consider a multitude
of well-studied auction settings, and improve upon a few previous results.
(1) Multi-unit auctions. Given a $\beta$-competitive unlimited supply
auction, the best previously known multi-unit auction is $2\beta$-competitive.
We design a $(1+\beta)$-competitive auction reducing the ratio from $4.84$ to
$3.24$. These results carry over to matroid and position auctions.
(2) General downward-closed environments. We design a $6.5$-competitive
auction improving upon the ratio of $7.5$. Our auction is noticeably simpler
than the previous best one.
(3) Unlimited supply online auctions. Our analysis yields an auction with a
competitive ratio of $4.12$, which significantly narrows the margin of
$[4,4.84]$ previously known for this problem.
A particularly important tool in our analysis is a simple decomposition
lemma, which allows us to bound the competitive ratio against a sum of
benchmark functions. We use this lemma in a ""divide and conquer"" fashion by
dividing the target benchmark into the sum of simpler functions.
",pinyan lu,,2014.0,,arXiv,Chen2014,True,,arXiv,Not available,Competitive analysis via benchmark decomposition,5861bdcebe37402d5f701d88bd5edf22,http://arxiv.org/abs/1411.2079v1
16593," The first-price auction is popular in practice for its simplicity and
transparency. Moreover, its potential virtues grow in complex settings where
incentive compatible auctions may generate little or no revenue. Unfortunately,
the first-price auction is poorly understood in theory because equilibrium is
not {\em a priori} a credible predictor of bidder behavior.
We take a dynamic approach to studying first-price auctions: rather than
basing performance guarantees solely on static equilibria, we study the
repeated setting and show that robust performance guarantees may be derived
from simple axioms of bidder behavior. For example, as long as a loser raises
her bid quickly, a standard first-price auction will generate at least as much
revenue as a second-price auction. We generalize this dynamic technique to
complex pay-your-bid auction settings and show that progressively stronger
assumptions about bidder behavior imply progressively stronger guarantees about
the auction's performance.
Along the way, we find that the auctioneer's choice of bidding language is
critical when generalizing beyond the single-item setting, and we propose a
specific construction called the {\em utility-target auction} that performs
well. The utility-target auction includes a bidder's final utility as an
additional parameter, identifying the single dimension along which she wishes
to compete. This auction is closely related to profit-target bidding in
first-price and ascending proxy package auctions and gives strong revenue
guarantees for a variety of complex auction environments. Of particular
interest, the guaranteed existence of a pure-strategy equilibrium in the
utility-target auction shows how Overture might have eliminated the cyclic
behavior in their generalized first-price sponsored search auction if bidders
could have placed more sophisticated bids.
",darrell hoy,,2013.0,,arXiv,Hoy2013,True,,arXiv,Not available,A Dynamic Axiomatic Approach to First-Price Auctions,e3865d4807693c96e9d0ae19f29e5cd7,http://arxiv.org/abs/1304.7718v1
16594," The first-price auction is popular in practice for its simplicity and
transparency. Moreover, its potential virtues grow in complex settings where
incentive compatible auctions may generate little or no revenue. Unfortunately,
the first-price auction is poorly understood in theory because equilibrium is
not {\em a priori} a credible predictor of bidder behavior.
We take a dynamic approach to studying first-price auctions: rather than
basing performance guarantees solely on static equilibria, we study the
repeated setting and show that robust performance guarantees may be derived
from simple axioms of bidder behavior. For example, as long as a loser raises
her bid quickly, a standard first-price auction will generate at least as much
revenue as a second-price auction. We generalize this dynamic technique to
complex pay-your-bid auction settings and show that progressively stronger
assumptions about bidder behavior imply progressively stronger guarantees about
the auction's performance.
Along the way, we find that the auctioneer's choice of bidding language is
critical when generalizing beyond the single-item setting, and we propose a
specific construction called the {\em utility-target auction} that performs
well. The utility-target auction includes a bidder's final utility as an
additional parameter, identifying the single dimension along which she wishes
to compete. This auction is closely related to profit-target bidding in
first-price and ascending proxy package auctions and gives strong revenue
guarantees for a variety of complex auction environments. Of particular
interest, the guaranteed existence of a pure-strategy equilibrium in the
utility-target auction shows how Overture might have eliminated the cyclic
behavior in their generalized first-price sponsored search auction if bidders
could have placed more sophisticated bids.
",kamal jain,,2013.0,,arXiv,Hoy2013,True,,arXiv,Not available,A Dynamic Axiomatic Approach to First-Price Auctions,e3865d4807693c96e9d0ae19f29e5cd7,http://arxiv.org/abs/1304.7718v1
16595," The first-price auction is popular in practice for its simplicity and
transparency. Moreover, its potential virtues grow in complex settings where
incentive compatible auctions may generate little or no revenue. Unfortunately,
the first-price auction is poorly understood in theory because equilibrium is
not {\em a priori} a credible predictor of bidder behavior.
We take a dynamic approach to studying first-price auctions: rather than
basing performance guarantees solely on static equilibria, we study the
repeated setting and show that robust performance guarantees may be derived
from simple axioms of bidder behavior. For example, as long as a loser raises
her bid quickly, a standard first-price auction will generate at least as much
revenue as a second-price auction. We generalize this dynamic technique to
complex pay-your-bid auction settings and show that progressively stronger
assumptions about bidder behavior imply progressively stronger guarantees about
the auction's performance.
Along the way, we find that the auctioneer's choice of bidding language is
critical when generalizing beyond the single-item setting, and we propose a
specific construction called the {\em utility-target auction} that performs
well. The utility-target auction includes a bidder's final utility as an
additional parameter, identifying the single dimension along which she wishes
to compete. This auction is closely related to profit-target bidding in
first-price and ascending proxy package auctions and gives strong revenue
guarantees for a variety of complex auction environments. Of particular
interest, the guaranteed existence of a pure-strategy equilibrium in the
utility-target auction shows how Overture might have eliminated the cyclic
behavior in their generalized first-price sponsored search auction if bidders
could have placed more sophisticated bids.
",christopher wilkens,,2013.0,,arXiv,Hoy2013,True,,arXiv,Not available,A Dynamic Axiomatic Approach to First-Price Auctions,e3865d4807693c96e9d0ae19f29e5cd7,http://arxiv.org/abs/1304.7718v1
16596," We study pure-strategy Nash equilibria in multi-player concurrent
deterministic games, for a variety of preference relations. We provide a novel
construction, called the suspect game, which transforms a multi-player
concurrent game into a two-player turn-based game which turns Nash equilibria
into winning strategies (for some objective that depends on the preference
relations of the players in the original game). We use that transformation to
design algorithms for computing Nash equilibria in finite games, which in most
cases have optimal worst-case complexity, for large classes of preference
relations. This includes the purely qualitative framework, where each player
has a single omega-regular objective that she wants to satisfy, but also the
larger class of semi-quantitative objectives, where each player has several
omega-regular objectives equipped with a preorder (for instance, a player may
want to satisfy all her objectives, or to maximise the number of objectives
that she achieves).
",michael ummels,,2015.0,10.2168/LMCS-11(2:9)2015,"Logical Methods in Computer Science, Volume 11, Issue 2 (June 19,
2015) lmcs:1569",Bouyer2015,True,,arXiv,Not available,Pure Nash Equilibria in Concurrent Deterministic Games,a6b94c317a0af8dc06d94788e7b239f9,http://arxiv.org/abs/1503.06826v2
16597," Search auctions have become a dominant source of revenue generation on the
Internet. Such auctions have typically used per-click bidding and pricing. We
propose the use of hybrid auctions where an advertiser can make a
per-impression as well as a per-click bid, and the auctioneer then chooses one
of the two as the pricing mechanism. We assume that the advertiser and the
auctioneer both have separate beliefs (called priors) on the click-probability
of an advertisement. We first prove that the hybrid auction is truthful,
assuming that the advertisers are risk-neutral. We then show that this auction
is superior to the existing per-click auction in multiple ways: 1) It takes
into account the risk characteristics of the advertisers. 2) For obscure
keywords, the auctioneer is unlikely to have a very sharp prior on the
click-probabilities. In such situations, the hybrid auction can result in
significantly higher revenue. 3) An advertiser who believes that its
click-probability is much higher than the auctioneer's estimate can use
per-impression bids to correct the auctioneer's prior without incurring any
extra cost. 4) The hybrid auction can allow the advertiser and auctioneer to
implement complex dynamic programming strategies. As Internet commerce matures,
we need more sophisticated pricing models to exploit all the information held
by each of the participants. We believe that hybrid auctions could be an
important step in this direction.
",ashish goel,,2008.0,,arXiv,Goel2008,True,,arXiv,Not available,Hybrid Keyword Search Auctions,77fa375d8a0bf33692dab873d0957bd5,http://arxiv.org/abs/0807.2496v2
16598," Search auctions have become a dominant source of revenue generation on the
Internet. Such auctions have typically used per-click bidding and pricing. We
propose the use of hybrid auctions where an advertiser can make a
per-impression as well as a per-click bid, and the auctioneer then chooses one
of the two as the pricing mechanism. We assume that the advertiser and the
auctioneer both have separate beliefs (called priors) on the click-probability
of an advertisement. We first prove that the hybrid auction is truthful,
assuming that the advertisers are risk-neutral. We then show that this auction
is superior to the existing per-click auction in multiple ways: 1) It takes
into account the risk characteristics of the advertisers. 2) For obscure
keywords, the auctioneer is unlikely to have a very sharp prior on the
click-probabilities. In such situations, the hybrid auction can result in
significantly higher revenue. 3) An advertiser who believes that its
click-probability is much higher than the auctioneer's estimate can use
per-impression bids to correct the auctioneer's prior without incurring any
extra cost. 4) The hybrid auction can allow the advertiser and auctioneer to
implement complex dynamic programming strategies. As Internet commerce matures,
we need more sophisticated pricing models to exploit all the information held
by each of the participants. We believe that hybrid auctions could be an
important step in this direction.
",kamesh munagala,,2008.0,,arXiv,Goel2008,True,,arXiv,Not available,Hybrid Keyword Search Auctions,77fa375d8a0bf33692dab873d0957bd5,http://arxiv.org/abs/0807.2496v2
16599," We discuss bundle auctions within the framework of an integer allocation
problem. We show that for multi-unit auctions, of which bundle auctions are a
special case, market equilibrium and constrained market equilibrium are
equivalent concepts. This equivalence allows us to obtain a computable
necessary and sufficient condition for the existence of constrained market
equilibrium for bundle auctions. We use this result to obtain a necessary and
sufficient condition for the existence of market equilibrium for multi-unit
auctions. After obtaining the induced bundle auction of a nonnegative TU game,
we show that the existence of market equilibrium implies the existence of a
possibly different market equilibrium as well, which corresponds very naturally
to an outcome in the matching core of the TU game. Consequently we show that
the matching core of the nonnegative TU game is non-empty if and only if the
induced market game has a market equilibrium.
",somdeb lahiri,,2006.0,,arXiv,Lahiri2006,True,,arXiv,Not available,"Market Equilibrium for Bundle Auctions and the Matching Core of
Nonnegative TU Games",76077471140c9fabd94199f3a8422910,http://arxiv.org/abs/cs/0603032v3
16600," Auctions are markets with strict regulations governing the information
available to traders in the market and the possible actions they can take.
Since well designed auctions achieve desirable economic outcomes, they have
been widely used in solving real-world optimization problems, and in
structuring stock or futures exchanges. Auctions also provide a very valuable
testing-ground for economic theory, and they play an important role in
computer-based control systems.
Auction mechanism design aims to manipulate the rules of an auction in order
to achieve specific goals. Economists traditionally use mathematical methods,
mainly game theory, to analyze auctions and design new auction forms. However,
due to the high complexity of auctions, the mathematical models are typically
simplified to obtain results, and this makes it difficult to apply results
derived from such models to market environments in the real world. As a result,
researchers are turning to empirical approaches.
This report aims to survey the theoretical and empirical approaches to
designing auction mechanisms and trading strategies, with more weight on
empirical ones, and to build a foundation for further research in the field.
",jinzhong niu,,2009.0,,arXiv,Niu2009,True,,arXiv,Not available,An Investigation Report on Auction Mechanism Design,a361692f9e4b8aaa72b2e9d6c12fe02f,http://arxiv.org/abs/0904.1258v2
16601," Auctions are markets with strict regulations governing the information
available to traders in the market and the possible actions they can take.
Since well designed auctions achieve desirable economic outcomes, they have
been widely used in solving real-world optimization problems, and in
structuring stock or futures exchanges. Auctions also provide a very valuable
testing-ground for economic theory, and they play an important role in
computer-based control systems.
Auction mechanism design aims to manipulate the rules of an auction in order
to achieve specific goals. Economists traditionally use mathematical methods,
mainly game theory, to analyze auctions and design new auction forms. However,
due to the high complexity of auctions, the mathematical models are typically
simplified to obtain results, and this makes it difficult to apply results
derived from such models to market environments in the real world. As a result,
researchers are turning to empirical approaches.
This report aims to survey the theoretical and empirical approaches to
designing auction mechanisms and trading strategies, with more weight on
empirical ones, and to build a foundation for further research in the field.
",simon parsons,,2009.0,,arXiv,Niu2009,True,,arXiv,Not available,An Investigation Report on Auction Mechanism Design,a361692f9e4b8aaa72b2e9d6c12fe02f,http://arxiv.org/abs/0904.1258v2
16602," Most search engines sell slots to place advertisements on the search results
page through keyword auctions. Advertisers offer bids for how much they are
willing to pay when someone enters a search query, sees the search results, and
then clicks on one of their ads. Search engines typically order the
advertisements for a query by a combination of the bids and expected
clickthrough rates for each advertisement. In this paper, we extend a model of
Yahoo's and Google's advertising auctions to include an effect where repeatedly
showing less relevant ads has a persistent impact on all advertising on the
search engine, an impact we designate as the pollution effect. In Monte-Carlo
simulations using distributions fitted to Yahoo data, we show that a modest
pollution effect is sufficient to dramatically change the advertising rank
order that yields the optimal advertising revenue for a search engine. In
addition, if a pollution effect exists, it is possible to maximize revenue
while also increasing advertiser and publisher utility. Our results suggest
that search engines could benefit from making relevant advertisements less
expensive and irrelevant advertisements more costly for advertisers than is the
current practice.
",greg linden,,2011.0,,arXiv,Linden2011,True,,arXiv,Not available,"The Pollution Effect: Optimizing Keyword Auctions by Favoring Relevant
Advertising",6c19b4146facdccfc27a529889a4c02d,http://arxiv.org/abs/1109.6263v1
16603," Most search engines sell slots to place advertisements on the search results
page through keyword auctions. Advertisers offer bids for how much they are
willing to pay when someone enters a search query, sees the search results, and
then clicks on one of their ads. Search engines typically order the
advertisements for a query by a combination of the bids and expected
clickthrough rates for each advertisement. In this paper, we extend a model of
Yahoo's and Google's advertising auctions to include an effect where repeatedly
showing less relevant ads has a persistent impact on all advertising on the
search engine, an impact we designate as the pollution effect. In Monte-Carlo
simulations using distributions fitted to Yahoo data, we show that a modest
pollution effect is sufficient to dramatically change the advertising rank
order that yields the optimal advertising revenue for a search engine. In
addition, if a pollution effect exists, it is possible to maximize revenue
while also increasing advertiser and publisher utility. Our results suggest
that search engines could benefit from making relevant advertisements less
expensive and irrelevant advertisements more costly for advertisers than is the
current practice.
",christopher meek,,2011.0,,arXiv,Linden2011,True,,arXiv,Not available,"The Pollution Effect: Optimizing Keyword Auctions by Favoring Relevant
Advertising",6c19b4146facdccfc27a529889a4c02d,http://arxiv.org/abs/1109.6263v1
16604," Most search engines sell slots to place advertisements on the search results
page through keyword auctions. Advertisers offer bids for how much they are
willing to pay when someone enters a search query, sees the search results, and
then clicks on one of their ads. Search engines typically order the
advertisements for a query by a combination of the bids and expected
clickthrough rates for each advertisement. In this paper, we extend a model of
Yahoo's and Google's advertising auctions to include an effect where repeatedly
showing less relevant ads has a persistent impact on all advertising on the
search engine, an impact we designate as the pollution effect. In Monte-Carlo
simulations using distributions fitted to Yahoo data, we show that a modest
pollution effect is sufficient to dramatically change the advertising rank
order that yields the optimal advertising revenue for a search engine. In
addition, if a pollution effect exists, it is possible to maximize revenue
while also increasing advertiser and publisher utility. Our results suggest
that search engines could benefit from making relevant advertisements less
expensive and irrelevant advertisements more costly for advertisers than is the
current practice.
",max chickering,,2011.0,,arXiv,Linden2011,True,,arXiv,Not available,"The Pollution Effect: Optimizing Keyword Auctions by Favoring Relevant
Advertising",6c19b4146facdccfc27a529889a4c02d,http://arxiv.org/abs/1109.6263v1
16605," Online advertising is the main source of revenue for many Internet firms. A
central component of online advertising is the underlying mechanism that
selects and prices the winning ads for a given ad slot. In this paper we study
designing a mechanism for the Combinatorial Auction with Identical Items (CAII)
in which we are interested in selling $k$ identical items to a group of bidders
each demanding a certain number of items between $1$ and $k$. CAII generalizes
important online advertising scenarios such as image-text and video-pod
auctions [GK14]. In an image-text auction we want to fill an advertising slot
on a publisher's web page with either $k$ text-ads or a single image-ad, and
in a video-pod auction we want to fill an advertising break of $k$ seconds with
video-ads of possibly different durations.
Our goal is to design truthful mechanisms that satisfy Revenue Monotonicity
(RM). RM is a natural constraint which states that the revenue of a mechanism
should not decrease if the number of participants increases or if a participant
increases her bid.
[GK14] showed that no deterministic RM mechanism can attain a Price of Revenue
Monotonicity (PoRM) of less than $\ln(k)$ for CAII, i.e., no deterministic
mechanism can attain more than
$\frac{1}{\ln(k)}$ fraction of the maximum social welfare. [GK14] also designs a
mechanism with PoRM of $O(\ln^2(k))$ for CAII.
In this paper, we seek to overcome the impossibility result of [GK14] for
deterministic mechanisms by using the power of randomization. We show that by
using randomization, one can attain a constant PoRM. In particular, we design a
randomized RM mechanism with PoRM of $3$ for CAII.
",gagan goel,,2015.0,,arXiv,Goel2015,True,,arXiv,Not available,Randomized Revenue Monotone Mechanisms for Online Advertising,a96c6b846126bb9932daee7c56e23962,http://arxiv.org/abs/1507.00130v1
16606," Online advertising is the main source of revenue for many Internet firms. A
central component of online advertising is the underlying mechanism that
selects and prices the winning ads for a given ad slot. In this paper we study
designing a mechanism for the Combinatorial Auction with Identical Items (CAII)
in which we are interested in selling $k$ identical items to a group of bidders
each demanding a certain number of items between $1$ and $k$. CAII generalizes
important online advertising scenarios such as image-text and video-pod
auctions [GK14]. In an image-text auction we want to fill an advertising slot
on a publisher's web page with either $k$ text-ads or a single image-ad, and
in a video-pod auction we want to fill an advertising break of $k$ seconds with
video-ads of possibly different durations.
Our goal is to design truthful mechanisms that satisfy Revenue Monotonicity
(RM). RM is a natural constraint which states that the revenue of a mechanism
should not decrease if the number of participants increases or if a participant
increases her bid.
[GK14] showed that no deterministic RM mechanism can attain a Price of Revenue
Monotonicity (PoRM) of less than $\ln(k)$ for CAII, i.e., no deterministic
mechanism can attain more than
$\frac{1}{\ln(k)}$ fraction of the maximum social welfare. [GK14] also designs a
mechanism with PoRM of $O(\ln^2(k))$ for CAII.
In this paper, we seek to overcome the impossibility result of [GK14] for
deterministic mechanisms by using the power of randomization. We show that by
using randomization, one can attain a constant PoRM. In particular, we design a
randomized RM mechanism with PoRM of $3$ for CAII.
",mohammadtaghi hajiaghayi,,2015.0,,arXiv,Goel2015,True,,arXiv,Not available,Randomized Revenue Monotone Mechanisms for Online Advertising,a96c6b846126bb9932daee7c56e23962,http://arxiv.org/abs/1507.00130v1
16607," We consider sequences of games $\mathcal{G}=\{G_1,G_2,\ldots\}$ where, for
all $n$, $G_n$ has the same set of players. Such sequences arise in the
analysis of running time of players in games, in electronic money systems such
as Bitcoin and in cryptographic protocols. Assuming that one-way functions
exist, we prove that there is a sequence of 2-player zero-sum Bayesian games
$\mathcal{G}$ such that, for all $n$, the size of every action in $G_n$ is
polynomial in $n$, the utility function is polynomial computable in $n$, and
yet there is no polynomial-time Nash equilibrium, where we use a notion of Nash
equilibrium that is tailored to sequences of games. We also demonstrate that
Nash equilibrium may not exist when considering players that are constrained to
perform at most $T$ computational steps in each of the games
$\{G_i\}_{i=1}^{\infty}$. These examples may shed light on competitive settings
where the availability of more running time or faster algorithms lead to a
""computational arms race"", precluding the existence of equilibrium. They also
point to inherent limitations of concepts such as ""best response"" and Nash
equilibrium in games with resource-bounded players.
",joseph halpern,,2015.0,,arXiv,Halpern2015,True,,arXiv,Not available,"On the Non-Existence of Nash Equilibrium in Games with Resource-Bounded
Players",a7b96118021b9c713ee13526ce34ce00,http://arxiv.org/abs/1507.01501v2
16608," Online advertising is the main source of revenue for many Internet firms. A
central component of online advertising is the underlying mechanism that
selects and prices the winning ads for a given ad slot. In this paper we study
designing a mechanism for the Combinatorial Auction with Identical Items (CAII)
in which we are interested in selling $k$ identical items to a group of bidders
each demanding a certain number of items between $1$ and $k$. CAII generalizes
important online advertising scenarios such as image-text and video-pod
auctions [GK14]. In an image-text auction we want to fill an advertising slot
on a publisher's web page with either $k$ text-ads or a single image-ad, and
in a video-pod auction we want to fill an advertising break of $k$ seconds with
video-ads of possibly different durations.
Our goal is to design truthful mechanisms that satisfy Revenue Monotonicity
(RM). RM is a natural constraint which states that the revenue of a mechanism
should not decrease if the number of participants increases or if a participant
increases her bid.
[GK14] showed that no deterministic RM mechanism can attain a Price of Revenue
Monotonicity (PoRM) of less than $\ln(k)$ for CAII, i.e., no deterministic
mechanism can attain more than
$\frac{1}{\ln(k)}$ fraction of the maximum social welfare. [GK14] also designs a
mechanism with PoRM of $O(\ln^2(k))$ for CAII.
In this paper, we seek to overcome the impossibility result of [GK14] for
deterministic mechanisms by using the power of randomization. We show that by
using randomization, one can attain a constant PoRM. In particular, we design a
randomized RM mechanism with PoRM of $3$ for CAII.
",mohammad khani,,2015.0,,arXiv,Goel2015,True,,arXiv,Not available,Randomized Revenue Monotone Mechanisms for Online Advertising,a96c6b846126bb9932daee7c56e23962,http://arxiv.org/abs/1507.00130v1
16609," Combinatorial auctions (CA) are a well-studied area in algorithmic mechanism
design. However, contrary to the standard model, empirical studies suggest that
a bidder's valuation often does not depend solely on the goods assigned to him.
For instance, in adwords auctions an advertiser might not want his ads to be
displayed next to his competitors' ads. In this paper, we propose and analyze
several natural graph-theoretic models that incorporate such negative
externalities, in which bidders form a directed conflict graph with maximum
out-degree $\Delta$. We design algorithms and truthful mechanisms for social
welfare maximization that attain approximation ratios depending on $\Delta$.
For CA, our results are twofold: (1) A lottery that eliminates conflicts by
discarding bidders/items independent of the bids. It allows to apply any
truthful $\alpha$-approximation mechanism for conflict-free valuations and
yields an $\mathcal{O}(\alpha\Delta)$-approximation mechanism. (2) For
fractionally sub-additive valuations, we design a rounding algorithm via a
novel combination of a semi-definite program and a linear program, resulting in
a cone program; the approximation ratio is $\mathcal{O}((\Delta \log \log
\Delta)/\log \Delta)$. The ratios are almost optimal given existing hardness
results.
For the prominent application of adwords auctions, we present several
algorithms for the most relevant scenario when the number of items is small. In
particular, we design a truthful mechanism with approximation ratio $o(\Delta)$
when the number of items is only logarithmic in the number of bidders.
",yun cheung,,2015.0,,arXiv,Cheung2015,True,,arXiv,Not available,Combinatorial Auctions with Conflict-Based Externalities,c5539da85665d3850b0cc07a89b1cb79,http://arxiv.org/abs/1509.09147v1
16610," Combinatorial auctions (CA) are a well-studied area in algorithmic mechanism
design. However, contrary to the standard model, empirical studies suggest that
a bidder's valuation often does not depend solely on the goods assigned to him.
For instance, in adwords auctions an advertiser might not want his ads to be
displayed next to his competitors' ads. In this paper, we propose and analyze
several natural graph-theoretic models that incorporate such negative
externalities, in which bidders form a directed conflict graph with maximum
out-degree $\Delta$. We design algorithms and truthful mechanisms for social
welfare maximization that attain approximation ratios depending on $\Delta$.
For CA, our results are twofold: (1) A lottery that eliminates conflicts by
discarding bidders/items independent of the bids. It allows to apply any
truthful $\alpha$-approximation mechanism for conflict-free valuations and
yields an $\mathcal{O}(\alpha\Delta)$-approximation mechanism. (2) For
fractionally sub-additive valuations, we design a rounding algorithm via a
novel combination of a semi-definite program and a linear program, resulting in
a cone program; the approximation ratio is $\mathcal{O}((\Delta \log \log
\Delta)/\log \Delta)$. The ratios are almost optimal given existing hardness
results.
For the prominent application of adwords auctions, we present several
algorithms for the most relevant scenario when the number of items is small. In
particular, we design a truthful mechanism with approximation ratio $o(\Delta)$
when the number of items is only logarithmic in the number of bidders.
",monika henzinger,,2015.0,,arXiv,Cheung2015,True,,arXiv,Not available,Combinatorial Auctions with Conflict-Based Externalities,c5539da85665d3850b0cc07a89b1cb79,http://arxiv.org/abs/1509.09147v1
16611," Combinatorial auctions (CA) are a well-studied area in algorithmic mechanism
design. However, contrary to the standard model, empirical studies suggest that
a bidder's valuation often does not depend solely on the goods assigned to him.
For instance, in adwords auctions an advertiser might not want his ads to be
displayed next to his competitors' ads. In this paper, we propose and analyze
several natural graph-theoretic models that incorporate such negative
externalities, in which bidders form a directed conflict graph with maximum
out-degree $\Delta$. We design algorithms and truthful mechanisms for social
welfare maximization that attain approximation ratios depending on $\Delta$.
For CA, our results are twofold: (1) A lottery that eliminates conflicts by
discarding bidders/items independently of the bids. It allows any
truthful $\alpha$-approximation mechanism for conflict-free valuations to be applied and
yields an $\mathcal{O}(\alpha\Delta)$-approximation mechanism. (2) For
fractionally sub-additive valuations, we design a rounding algorithm via a
novel combination of a semi-definite program and a linear program, resulting in
a cone program; the approximation ratio is $\mathcal{O}((\Delta \log \log
\Delta)/\log \Delta)$. The ratios are almost optimal given existing hardness
results.
For the prominent application of adwords auctions, we present several
algorithms for the most relevant scenario when the number of items is small. In
particular, we design a truthful mechanism with approximation ratio $o(\Delta)$
when the number of items is only logarithmic in the number of bidders.
",martin hoefer,,2015.0,,arXiv,Cheung2015,True,,arXiv,Not available,Combinatorial Auctions with Conflict-Based Externalities,c5539da85665d3850b0cc07a89b1cb79,http://arxiv.org/abs/1509.09147v1
16612," Combinatorial auctions (CA) are a well-studied area in algorithmic mechanism
design. However, contrary to the standard model, empirical studies suggest that
a bidder's valuation often does not depend solely on the goods assigned to him.
For instance, in adwords auctions an advertiser might not want his ads to be
displayed next to his competitors' ads. In this paper, we propose and analyze
several natural graph-theoretic models that incorporate such negative
externalities, in which bidders form a directed conflict graph with maximum
out-degree $\Delta$. We design algorithms and truthful mechanisms for social
welfare maximization that attain approximation ratios depending on $\Delta$.
For CA, our results are twofold: (1) A lottery that eliminates conflicts by
discarding bidders/items independently of the bids. It allows any
truthful $\alpha$-approximation mechanism for conflict-free valuations to be applied and
yields an $\mathcal{O}(\alpha\Delta)$-approximation mechanism. (2) For
fractionally sub-additive valuations, we design a rounding algorithm via a
novel combination of a semi-definite program and a linear program, resulting in
a cone program; the approximation ratio is $\mathcal{O}((\Delta \log \log
\Delta)/\log \Delta)$. The ratios are almost optimal given existing hardness
results.
For the prominent application of adwords auctions, we present several
algorithms for the most relevant scenario when the number of items is small. In
particular, we design a truthful mechanism with approximation ratio $o(\Delta)$
when the number of items is only logarithmic in the number of bidders.
",martin starnberger,,2015.0,,arXiv,Cheung2015,True,,arXiv,Not available,Combinatorial Auctions with Conflict-Based Externalities,c5539da85665d3850b0cc07a89b1cb79,http://arxiv.org/abs/1509.09147v1
16617," Two classes of distributions that are widely used in the analysis of Bayesian
auctions are the Monotone Hazard Rate (MHR) and Regular distributions. They can
both be characterized in terms of the rate of change of the associated virtual
value functions: for MHR distributions the condition is that for values $v <
v'$, $\phi(v') - \phi(v) \ge v' - v$, and for regular distributions, $\phi(v')
- \phi(v) \ge 0$. Cole and Roughgarden introduced the interpolating class of
$\alpha$-Strongly Regular distributions ($\alpha$-SR distributions for short),
for which $\phi(v') - \phi(v) \ge \alpha(v' - v)$, for $0 \le \alpha \le 1$.
In this paper, we investigate five distinct auction settings for which good
expected revenue bounds are known when the bidders' valuations are given by MHR
distributions. In every case, we show that these bounds degrade gracefully when
extended to $\alpha$-SR distributions. For four of these settings, the auction
mechanism requires knowledge of these distribution(s) (in the other setting,
the distributions are needed only to ensure good bounds on the expected
revenue). In these cases we also investigate what happens when the
distributions are known only approximately via samples, specifically how to
modify the mechanisms so that they remain effective and how the expected
revenue depends on the number of samples.
",richard cole,,2015.0,,arXiv,Cole2015,True,,arXiv,Not available,"Applications of $α$-strongly regular distributions to Bayesian
auctions",ea46af8ba2b2079a599165ba0c9c4b21,http://arxiv.org/abs/1512.02285v2
16618," We consider sequences of games $\mathcal{G}=\{G_1,G_2,\ldots\}$ where, for
all $n$, $G_n$ has the same set of players. Such sequences arise in the
analysis of running time of players in games, in electronic money systems such
as Bitcoin and in cryptographic protocols. Assuming that one-way functions
exist, we prove that there is a sequence of 2-player zero-sum Bayesian games
$\mathcal{G}$ such that, for all $n$, the size of every action in $G_n$ is
polynomial in $n$, the utility function is polynomial-time computable in $n$, and
yet there is no polynomial-time Nash equilibrium, where we use a notion of Nash
equilibrium that is tailored to sequences of games. We also demonstrate that
Nash equilibrium may not exist when considering players that are constrained to
perform at most $T$ computational steps in each of the games
$\{G_i\}_{i=1}^{\infty}$. These examples may shed light on competitive settings
where the availability of more running time or faster algorithms leads to a
""computational arms race"", precluding the existence of equilibrium. They also
point to inherent limitations of concepts such as ""best response"" and Nash
equilibrium in games with resource-bounded players.
",rafael pass,,2015.0,,arXiv,Halpern2015,True,,arXiv,Not available,"On the Non-Existence of Nash Equilibrium in Games with Resource-Bounded
Players",a7b96118021b9c713ee13526ce34ce00,http://arxiv.org/abs/1507.01501v2
16619," Two classes of distributions that are widely used in the analysis of Bayesian
auctions are the Monotone Hazard Rate (MHR) and Regular distributions. They can
both be characterized in terms of the rate of change of the associated virtual
value functions: for MHR distributions the condition is that for values $v <
v'$, $\phi(v') - \phi(v) \ge v' - v$, and for regular distributions, $\phi(v')
- \phi(v) \ge 0$. Cole and Roughgarden introduced the interpolating class of
$\alpha$-Strongly Regular distributions ($\alpha$-SR distributions for short),
for which $\phi(v') - \phi(v) \ge \alpha(v' - v)$, for $0 \le \alpha \le 1$.
In this paper, we investigate five distinct auction settings for which good
expected revenue bounds are known when the bidders' valuations are given by MHR
distributions. In every case, we show that these bounds degrade gracefully when
extended to $\alpha$-SR distributions. For four of these settings, the auction
mechanism requires knowledge of these distribution(s) (in the other setting,
the distributions are needed only to ensure good bounds on the expected
revenue). In these cases we also investigate what happens when the
distributions are known only approximately via samples, specifically how to
modify the mechanisms so that they remain effective and how the expected
revenue depends on the number of samples.
",shravas rao,,2015.0,,arXiv,Cole2015,True,,arXiv,Not available,"Applications of $α$-strongly regular distributions to Bayesian
auctions",ea46af8ba2b2079a599165ba0c9c4b21,http://arxiv.org/abs/1512.02285v2
16620," We consider a monopolist that is selling $n$ items to a single additive
buyer, where the buyer's values for the items are drawn according to
independent distributions $F_1, F_2,\ldots,F_n$ that possibly have unbounded
support. It is well known that - unlike in the single item case - the
revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring
a continuum of menu entries. It is also known that simple auctions with a
finite bounded number of menu entries can extract a constant fraction of the
optimal revenue. Nonetheless, the question of the possibility of extracting an
arbitrarily high fraction of the optimal revenue via a finite menu size
remained open.
In this paper, we give an affirmative answer to this open question, showing
that for every $n$ and for every $\varepsilon>0$, there exists a complexity
bound $C=C(n,\varepsilon)$ such that auctions of menu size at most $C$ suffice
for obtaining a $(1-\varepsilon)$ fraction of the optimal revenue from any
$F_1,\ldots,F_n$. We prove upper and lower bounds on the revenue approximation
complexity $C(n,\varepsilon)$, as well as on the deterministic communication
complexity required to run an auction that achieves such an approximation.
",moshe babaioff,,2016.0,,arXiv,Babaioff2016,True,,arXiv,Not available,The Menu-Size Complexity of Revenue Approximation,5cbae5328b80a92106ea3299bc2da90b,http://arxiv.org/abs/1604.06580v3
16621," We consider a monopolist that is selling $n$ items to a single additive
buyer, where the buyer's values for the items are drawn according to
independent distributions $F_1, F_2,\ldots,F_n$ that possibly have unbounded
support. It is well known that - unlike in the single item case - the
revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring
a continuum of menu entries. It is also known that simple auctions with a
finite bounded number of menu entries can extract a constant fraction of the
optimal revenue. Nonetheless, the question of the possibility of extracting an
arbitrarily high fraction of the optimal revenue via a finite menu size
remained open.
In this paper, we give an affirmative answer to this open question, showing
that for every $n$ and for every $\varepsilon>0$, there exists a complexity
bound $C=C(n,\varepsilon)$ such that auctions of menu size at most $C$ suffice
for obtaining a $(1-\varepsilon)$ fraction of the optimal revenue from any
$F_1,\ldots,F_n$. We prove upper and lower bounds on the revenue approximation
complexity $C(n,\varepsilon)$, as well as on the deterministic communication
complexity required to run an auction that achieves such an approximation.
",yannai gonczarowski,,2016.0,,arXiv,Babaioff2016,True,,arXiv,Not available,The Menu-Size Complexity of Revenue Approximation,5cbae5328b80a92106ea3299bc2da90b,http://arxiv.org/abs/1604.06580v3
16622," We consider a monopolist that is selling $n$ items to a single additive
buyer, where the buyer's values for the items are drawn according to
independent distributions $F_1, F_2,\ldots,F_n$ that possibly have unbounded
support. It is well known that - unlike in the single item case - the
revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring
a continuum of menu entries. It is also known that simple auctions with a
finite bounded number of menu entries can extract a constant fraction of the
optimal revenue. Nonetheless, the question of the possibility of extracting an
arbitrarily high fraction of the optimal revenue via a finite menu size
remained open.
In this paper, we give an affirmative answer to this open question, showing
that for every $n$ and for every $\varepsilon>0$, there exists a complexity
bound $C=C(n,\varepsilon)$ such that auctions of menu size at most $C$ suffice
for obtaining a $(1-\varepsilon)$ fraction of the optimal revenue from any
$F_1,\ldots,F_n$. We prove upper and lower bounds on the revenue approximation
complexity $C(n,\varepsilon)$, as well as on the deterministic communication
complexity required to run an auction that achieves such an approximation.
",noam nisan,,2016.0,,arXiv,Babaioff2016,True,,arXiv,Not available,The Menu-Size Complexity of Revenue Approximation,5cbae5328b80a92106ea3299bc2da90b,http://arxiv.org/abs/1604.06580v3
16623," Motivated by online display ad exchanges, we study a setting in which an
exchange repeatedly interacts with bidders who have quota, making decisions
about which subsets of bidders are called to participate in ad-slot-specific
auctions. A bidder with quota cannot respond to more than a certain number of
calls per second. In practice, random throttling is the principal solution by
which these constraints are enforced. Given the repeated nature of the
interaction with its bidders, the exchange has access to data containing
information about each bidder's segments of interest. This information can be
utilized to design smarter callout mechanisms --- with the potential of
improving the exchange's long-term revenue. In this work, we present a general
framework for evaluating and comparing the performance of various callout
mechanisms using historical auction data only. To measure the impact of a
callout mechanism on long-term revenue, we propose a strategic model that
captures the repeated interaction between the exchange and bidders. Our model
leads us to two metrics for performance: immediate revenue impact and social
welfare. Next we present an empirical framework for estimating these two
metrics from historical data. For the baseline to compare against, we consider
random throttling, as well as a greedy algorithm with certain theoretical
guarantees. We propose several natural callout mechanisms and investigate them
through our framework on both synthetic and real auction data. We characterize
the conditions under which each heuristic performs well and show that, in
addition to being computationally faster, in practice our heuristics
consistently and significantly outperform the baselines.
",hossein azari,,2017.0,,arXiv,Azari2017,True,,arXiv,Not available,"A General Framework for Evaluating Callout Mechanisms in Repeated
Auctions",9f594067a53dda5945498f77b3dc34cf,http://arxiv.org/abs/1702.01803v1
16624," Motivated by online display ad exchanges, we study a setting in which an
exchange repeatedly interacts with bidders who have quota, making decisions
about which subsets of bidders are called to participate in ad-slot-specific
auctions. A bidder with quota cannot respond to more than a certain number of
calls per second. In practice, random throttling is the principal solution by
which these constraints are enforced. Given the repeated nature of the
interaction with its bidders, the exchange has access to data containing
information about each bidder's segments of interest. This information can be
utilized to design smarter callout mechanisms --- with the potential of
improving the exchange's long-term revenue. In this work, we present a general
framework for evaluating and comparing the performance of various callout
mechanisms using historical auction data only. To measure the impact of a
callout mechanism on long-term revenue, we propose a strategic model that
captures the repeated interaction between the exchange and bidders. Our model
leads us to two metrics for performance: immediate revenue impact and social
welfare. Next we present an empirical framework for estimating these two
metrics from historical data. For the baseline to compare against, we consider
random throttling, as well as a greedy algorithm with certain theoretical
guarantees. We propose several natural callout mechanisms and investigate them
through our framework on both synthetic and real auction data. We characterize
the conditions under which each heuristic performs well and show that, in
addition to being computationally faster, in practice our heuristics
consistently and significantly outperform the baselines.
",william heavlin,,2017.0,,arXiv,Azari2017,True,,arXiv,Not available,"A General Framework for Evaluating Callout Mechanisms in Repeated
Auctions",9f594067a53dda5945498f77b3dc34cf,http://arxiv.org/abs/1702.01803v1
16625," Motivated by online display ad exchanges, we study a setting in which an
exchange repeatedly interacts with bidders who have quota, making decisions
about which subsets of bidders are called to participate in ad-slot-specific
auctions. A bidder with quota cannot respond to more than a certain number of
calls per second. In practice, random throttling is the principal solution by
which these constraints are enforced. Given the repeated nature of the
interaction with its bidders, the exchange has access to data containing
information about each bidder's segments of interest. This information can be
utilized to design smarter callout mechanisms --- with the potential of
improving the exchange's long-term revenue. In this work, we present a general
framework for evaluating and comparing the performance of various callout
mechanisms using historical auction data only. To measure the impact of a
callout mechanism on long-term revenue, we propose a strategic model that
captures the repeated interaction between the exchange and bidders. Our model
leads us to two metrics for performance: immediate revenue impact and social
welfare. Next we present an empirical framework for estimating these two
metrics from historical data. For the baseline to compare against, we consider
random throttling, as well as a greedy algorithm with certain theoretical
guarantees. We propose several natural callout mechanisms and investigate them
through our framework on both synthetic and real auction data. We characterize
the conditions under which each heuristic performs well and show that, in
addition to being computationally faster, in practice our heuristics
consistently and significantly outperform the baselines.
",hoda heidari,,2017.0,,arXiv,Azari2017,True,,arXiv,Not available,"A General Framework for Evaluating Callout Mechanisms in Repeated
Auctions",9f594067a53dda5945498f77b3dc34cf,http://arxiv.org/abs/1702.01803v1
16626," Motivated by online display ad exchanges, we study a setting in which an
exchange repeatedly interacts with bidders who have quota, making decisions
about which subsets of bidders are called to participate in ad-slot-specific
auctions. A bidder with quota cannot respond to more than a certain number of
calls per second. In practice, random throttling is the principal solution by
which these constraints are enforced. Given the repeated nature of the
interaction with its bidders, the exchange has access to data containing
information about each bidder's segments of interest. This information can be
utilized to design smarter callout mechanisms --- with the potential of
improving the exchange's long-term revenue. In this work, we present a general
framework for evaluating and comparing the performance of various callout
mechanisms using historical auction data only. To measure the impact of a
callout mechanism on long-term revenue, we propose a strategic model that
captures the repeated interaction between the exchange and bidders. Our model
leads us to two metrics for performance: immediate revenue impact and social
welfare. Next we present an empirical framework for estimating these two
metrics from historical data. For the baseline to compare against, we consider
random throttling, as well as a greedy algorithm with certain theoretical
guarantees. We propose several natural callout mechanisms and investigate them
through our framework on both synthetic and real auction data. We characterize
the conditions under which each heuristic performs well and show that, in
addition to being computationally faster, in practice our heuristics
consistently and significantly outperform the baselines.
",max lin,,2017.0,,arXiv,Azari2017,True,,arXiv,Not available,"A General Framework for Evaluating Callout Mechanisms in Repeated
Auctions",9f594067a53dda5945498f77b3dc34cf,http://arxiv.org/abs/1702.01803v1
16627," Motivated by online display ad exchanges, we study a setting in which an
exchange repeatedly interacts with bidders who have quota, making decisions
about which subsets of bidders are called to participate in ad-slot-specific
auctions. A bidder with quota cannot respond to more than a certain number of
calls per second. In practice, random throttling is the principal solution by
which these constraints are enforced. Given the repeated nature of the
interaction with its bidders, the exchange has access to data containing
information about each bidder's segments of interest. This information can be
utilized to design smarter callout mechanisms --- with the potential of
improving the exchange's long-term revenue. In this work, we present a general
framework for evaluating and comparing the performance of various callout
mechanisms using historical auction data only. To measure the impact of a
callout mechanism on long-term revenue, we propose a strategic model that
captures the repeated interaction between the exchange and bidders. Our model
leads us to two metrics for performance: immediate revenue impact and social
welfare. Next we present an empirical framework for estimating these two
metrics from historical data. For the baseline to compare against, we consider
random throttling, as well as a greedy algorithm with certain theoretical
guarantees. We propose several natural callout mechanisms and investigate them
through our framework on both synthetic and real auction data. We characterize
the conditions under which each heuristic performs well and show that, in
addition to being computationally faster, in practice our heuristics
consistently and significantly outperform the baselines.
",sonia todorova,,2017.0,,arXiv,Azari2017,True,,arXiv,Not available,"A General Framework for Evaluating Callout Mechanisms in Repeated
Auctions",9f594067a53dda5945498f77b3dc34cf,http://arxiv.org/abs/1702.01803v1
16628," Stochastic multi-armed bandit (MAB) mechanisms are widely used in sponsored
search auctions, crowdsourcing, online procurement, etc. Existing stochastic
MAB mechanisms with a deterministic payment rule, proposed in the literature,
necessarily suffer a regret of $\Omega(T^{2/3})$, where $T$ is the number of
time steps. This happens because the existing mechanisms consider the worst
case scenario where the means of the agents' stochastic rewards are separated
by a very small amount that depends on $T$. We make, and exploit, the crucial
observation that in most scenarios, the separation between the agents' rewards
is rarely a function of $T$. Moreover, in the case that the rewards of the arms
are arbitrarily close, the regret contributed by such sub-optimal arms is
minimal. Our idea is to allow the center to indicate the resolution, $\Delta$,
with which the agents must be distinguished. This immediately leads us to
introduce the notion of $\Delta$-Regret. Using sponsored search auctions as a
concrete example (the same idea applies for other applications as well), we
propose a dominant strategy incentive compatible (DSIC) and individually
rational (IR), deterministic MAB mechanism, based on ideas from the Upper
Confidence Bound (UCB) family of MAB algorithms. Remarkably, the proposed
mechanism $\Delta$-UCB achieves a $\Delta$-regret of $O(\log T)$ for the case
of sponsored search auctions. We first establish the results for single slot
sponsored search auctions and then non-trivially extend the results to the case
where multiple slots are to be allocated.
",divya padmanabhan,,2017.0,,arXiv,Padmanabhan2017,True,,arXiv,Not available,"A Dominant Strategy Truthful, Deterministic Multi-Armed Bandit Mechanism
with Logarithmic Regret",2e4918095b4ebc4044db4e94b26df107,http://arxiv.org/abs/1703.00632v1
16629," We consider sequences of games $\mathcal{G}=\{G_1,G_2,\ldots\}$ where, for
all $n$, $G_n$ has the same set of players. Such sequences arise in the
analysis of running time of players in games, in electronic money systems such
as Bitcoin and in cryptographic protocols. Assuming that one-way functions
exist, we prove that there is a sequence of 2-player zero-sum Bayesian games
$\mathcal{G}$ such that, for all $n$, the size of every action in $G_n$ is
polynomial in $n$, the utility function is polynomial-time computable in $n$, and
yet there is no polynomial-time Nash equilibrium, where we use a notion of Nash
equilibrium that is tailored to sequences of games. We also demonstrate that
Nash equilibrium may not exist when considering players that are constrained to
perform at most $T$ computational steps in each of the games
$\{G_i\}_{i=1}^{\infty}$. These examples may shed light on competitive settings
where the availability of more running time or faster algorithms lead to a
""computational arms race"", precluding the existence of equilibrium. They also
point to inherent limitations of concepts such ""best response"" and Nash
equilibrium in games with resource-bounded players.
",daniel reichman,,2015.0,,arXiv,Halpern2015,True,,arXiv,Not available,"On the Non-Existence of Nash Equilibrium in Games with Resource-Bounded
Players",a7b96118021b9c713ee13526ce34ce00,http://arxiv.org/abs/1507.01501v2
16630," Stochastic multi-armed bandit (MAB) mechanisms are widely used in sponsored
search auctions, crowdsourcing, online procurement, etc. Existing stochastic
MAB mechanisms with a deterministic payment rule, proposed in the literature,
necessarily suffer a regret of $\Omega(T^{2/3})$, where $T$ is the number of
time steps. This happens because the existing mechanisms consider the worst
case scenario where the means of the agents' stochastic rewards are separated
by a very small amount that depends on $T$. We make, and exploit, the crucial
observation that in most scenarios, the separation between the agents' rewards
is rarely a function of $T$. Moreover, in the case that the rewards of the arms
are arbitrarily close, the regret contributed by such sub-optimal arms is
minimal. Our idea is to allow the center to indicate the resolution, $\Delta$,
with which the agents must be distinguished. This immediately leads us to
introduce the notion of $\Delta$-Regret. Using sponsored search auctions as a
concrete example (the same idea applies for other applications as well), we
propose a dominant strategy incentive compatible (DSIC) and individually
rational (IR), deterministic MAB mechanism, based on ideas from the Upper
Confidence Bound (UCB) family of MAB algorithms. Remarkably, the proposed
mechanism $\Delta$-UCB achieves a $\Delta$-regret of $O(\log T)$ for the case
of sponsored search auctions. We first establish the results for single slot
sponsored search auctions and then non-trivially extend the results to the case
where multiple slots are to be allocated.
",satyanath bhat,,2017.0,,arXiv,Padmanabhan2017,True,,arXiv,Not available,"A Dominant Strategy Truthful, Deterministic Multi-Armed Bandit Mechanism
with Logarithmic Regret",2e4918095b4ebc4044db4e94b26df107,http://arxiv.org/abs/1703.00632v1
16631," Stochastic multi-armed bandit (MAB) mechanisms are widely used in sponsored
search auctions, crowdsourcing, online procurement, etc. Existing stochastic
MAB mechanisms with a deterministic payment rule, proposed in the literature,
necessarily suffer a regret of $\Omega(T^{2/3})$, where $T$ is the number of
time steps. This happens because the existing mechanisms consider the worst
case scenario where the means of the agents' stochastic rewards are separated
by a very small amount that depends on $T$. We make, and exploit, the crucial
observation that in most scenarios, the separation between the agents' rewards
is rarely a function of $T$. Moreover, in the case that the rewards of the arms
are arbitrarily close, the regret contributed by such sub-optimal arms is
minimal. Our idea is to allow the center to indicate the resolution, $\Delta$,
with which the agents must be distinguished. This immediately leads us to
introduce the notion of $\Delta$-Regret. Using sponsored search auctions as a
concrete example (the same idea applies for other applications as well), we
propose a dominant strategy incentive compatible (DSIC) and individually
rational (IR), deterministic MAB mechanism, based on ideas from the Upper
Confidence Bound (UCB) family of MAB algorithms. Remarkably, the proposed
mechanism $\Delta$-UCB achieves a $\Delta$-regret of $O(\log T)$ for the case
of sponsored search auctions. We first establish the results for single slot
sponsored search auctions and then non-trivially extend the results to the case
where multiple slots are to be allocated.
",prabuchandran j.,,2017.0,,arXiv,Padmanabhan2017,True,,arXiv,Not available,"A Dominant Strategy Truthful, Deterministic Multi-Armed Bandit Mechanism
with Logarithmic Regret",2e4918095b4ebc4044db4e94b26df107,http://arxiv.org/abs/1703.00632v1
16632," Stochastic multi-armed bandit (MAB) mechanisms are widely used in sponsored
search auctions, crowdsourcing, online procurement, etc. Existing stochastic
MAB mechanisms with a deterministic payment rule, proposed in the literature,
necessarily suffer a regret of $\Omega(T^{2/3})$, where $T$ is the number of
time steps. This happens because the existing mechanisms consider the worst
case scenario where the means of the agents' stochastic rewards are separated
by a very small amount that depends on $T$. We make, and exploit, the crucial
observation that in most scenarios, the separation between the agents' rewards
is rarely a function of $T$. Moreover, in the case that the rewards of the arms
are arbitrarily close, the regret contributed by such sub-optimal arms is
minimal. Our idea is to allow the center to indicate the resolution, $\Delta$,
with which the agents must be distinguished. This immediately leads us to
introduce the notion of $\Delta$-Regret. Using sponsored search auctions as a
concrete example (the same idea applies for other applications as well), we
propose a dominant strategy incentive compatible (DSIC) and individually
rational (IR), deterministic MAB mechanism, based on ideas from the Upper
Confidence Bound (UCB) family of MAB algorithms. Remarkably, the proposed
mechanism $\Delta$-UCB achieves a $\Delta$-regret of $O(\log T)$ for the case
of sponsored search auctions. We first establish the results for single slot
sponsored search auctions and then non-trivially extend the results to the case
where multiple slots are to be allocated.
",shirish shevade,,2017.0,,arXiv,Padmanabhan2017,True,,arXiv,Not available,"A Dominant Strategy Truthful, Deterministic Multi-Armed Bandit Mechanism
with Logarithmic Regret",2e4918095b4ebc4044db4e94b26df107,http://arxiv.org/abs/1703.00632v1
16633," Stochastic multi-armed bandit (MAB) mechanisms are widely used in sponsored
search auctions, crowdsourcing, online procurement, etc. Existing stochastic
MAB mechanisms with a deterministic payment rule, proposed in the literature,
necessarily suffer a regret of $\Omega(T^{2/3})$, where $T$ is the number of
time steps. This happens because the existing mechanisms consider the worst
case scenario where the means of the agents' stochastic rewards are separated
by a very small amount that depends on $T$. We make, and exploit, the crucial
observation that in most scenarios, the separation between the agents' rewards
is rarely a function of $T$. Moreover, in the case that the rewards of the arms
are arbitrarily close, the regret contributed by such sub-optimal arms is
minimal. Our idea is to allow the center to indicate the resolution, $\Delta$,
with which the agents must be distinguished. This immediately leads us to
introduce the notion of $\Delta$-Regret. Using sponsored search auctions as a
concrete example (the same idea applies to other applications as well), we
propose a dominant strategy incentive compatible (DSIC) and individually
rational (IR), deterministic MAB mechanism, based on ideas from the Upper
Confidence Bound (UCB) family of MAB algorithms. Remarkably, the proposed
mechanism $\Delta$-UCB achieves a $\Delta$-regret of $O(\log T)$ for the case
of sponsored search auctions. We first establish the results for single slot
sponsored search auctions and then non-trivially extend the results to the case
where multiple slots are to be allocated.
",y. narahari,,2017.0,,arXiv,Padmanabhan2017,True,,arXiv,Not available,"A Dominant Strategy Truthful, Deterministic Multi-Armed Bandit Mechanism
with Logarithmic Regret",2e4918095b4ebc4044db4e94b26df107,http://arxiv.org/abs/1703.00632v1
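The $\Delta$-regret idea in the Padmanabhan et al. abstract above can be illustrated with a small UCB-style elimination sketch. This is not the paper's actual $\Delta$-UCB mechanism; the `pull` oracle, the confidence radius, and the elimination rule are illustrative assumptions. The point is that an arm is only dropped once its upper confidence bound falls more than $\Delta$ below the best lower confidence bound, so arms whose rewards are within $\Delta$ of the best are never forced apart:

```python
import math
import random

def delta_ucb_sketch(pull, n_arms, delta, horizon):
    """UCB-style elimination with resolution delta (illustrative sketch,
    not the paper's mechanism): an arm is eliminated only once its upper
    confidence bound falls more than delta below the best lower
    confidence bound, so arms within delta of the best are kept.
    `pull(i)` is a hypothetical oracle returning a reward in [0, 1]."""
    active = set(range(n_arms))
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, horizon + 1):
        i = min(active, key=lambda a: counts[a])  # sample least-pulled active arm
        r = pull(i)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]

        def rad(a):  # confidence radius; infinite until an arm is pulled
            return math.sqrt(2 * math.log(t + 1) / counts[a]) if counts[a] else float("inf")

        best_lcb = max(means[a] - rad(a) for a in active)
        active = {a for a in active if means[a] + rad(a) >= best_lcb - delta}
    return max(active, key=lambda a: means[a])
```

Arms separated by more than $\Delta$ are eliminated after $O(\log T)$ samples each, while arbitrarily close arms simply stay active and contribute no $\Delta$-regret.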
16634," The online ads trading platform plays a crucial role in connecting publishers
and advertisers and generates tremendous value in facilitating the convenience
of our lives. It has been evolving into a more and more complicated structure.
In this paper, we consider the problem of maximizing the revenue for the seller
side by setting proper reserve prices for the auctions in a dynamic way.
Predicting the optimal reserve price for each auction in the repeated auction
marketplaces is a non-trivial problem. However, we were able to come up with an
efficient method of improving the seller revenue by mainly focusing on
adjusting the reserve price for those high-value inventories. Previously, no
dedicated work has been performed from this perspective. Inspired by Paul and
Michael, our model first identifies the value of the inventory by predicting
the top bid price bucket using a cascade of classifiers. The cascade is
essential in significantly reducing the false positive rate of a single
classifier. Based on the output of the first step, we build another cluster of
classifiers to predict the price separations between the top two bids. We
showed that although the high-value auctions are only a small portion of all
the traffic, successfully identifying them and setting the correct reserve price
would result in a significant revenue lift. Moreover, our optimization is
compatible with all other reserve price models in the system and does not
impact their performance. In other words, when combined with other models, the
enhancement on exchange revenue will be aggregated. Simulations on randomly
sampled Yahoo ads exchange (YAXR) data showed stable and expected lift after
applying our model.
",zhihui xie,,2017.0,10.1145/3124749.3124760,arXiv,Xie2017,True,,arXiv,Not available,"Optimal Reserve Price for Online Ads Trading Based on Inventory
Identification",c72343c5588fa46d4e816078bf4783cc,http://arxiv.org/abs/1709.10388v1
16635," The online ads trading platform plays a crucial role in connecting publishers
and advertisers and generates tremendous value in facilitating the convenience
of our lives. It has been evolving into a more and more complicated structure.
In this paper, we consider the problem of maximizing the revenue for the seller
side by setting proper reserve prices for the auctions in a dynamic way.
Predicting the optimal reserve price for each auction in the repeated auction
marketplaces is a non-trivial problem. However, we were able to come up with an
efficient method of improving the seller revenue by mainly focusing on
adjusting the reserve price for those high-value inventories. Previously, no
dedicated work has been performed from this perspective. Inspired by Paul and
Michael, our model first identifies the value of the inventory by predicting
the top bid price bucket using a cascade of classifiers. The cascade is
essential in significantly reducing the false positive rate of a single
classifier. Based on the output of the first step, we build another cluster of
classifiers to predict the price separations between the top two bids. We
showed that although the high-value auctions are only a small portion of all
the traffic, successfully identifying them and setting the correct reserve price
would result in a significant revenue lift. Moreover, our optimization is
compatible with all other reserve price models in the system and does not
impact their performance. In other words, when combined with other models, the
enhancement on exchange revenue will be aggregated. Simulations on randomly
sampled Yahoo ads exchange (YAXR) data showed stable and expected lift after
applying our model.
",kuang-chih lee,,2017.0,10.1145/3124749.3124760,arXiv,Xie2017,True,,arXiv,Not available,"Optimal Reserve Price for Online Ads Trading Based on Inventory
Identification",c72343c5588fa46d4e816078bf4783cc,http://arxiv.org/abs/1709.10388v1
16636," The online ads trading platform plays a crucial role in connecting publishers
and advertisers and generates tremendous value in facilitating the convenience
of our lives. It has been evolving into a more and more complicated structure.
In this paper, we consider the problem of maximizing the revenue for the seller
side by setting proper reserve prices for the auctions in a dynamic way.
Predicting the optimal reserve price for each auction in the repeated auction
marketplaces is a non-trivial problem. However, we were able to come up with an
efficient method of improving the seller revenue by mainly focusing on
adjusting the reserve price for those high-value inventories. Previously, no
dedicated work has been performed from this perspective. Inspired by Paul and
Michael, our model first identifies the value of the inventory by predicting
the top bid price bucket using a cascade of classifiers. The cascade is
essential in significantly reducing the false positive rate of a single
classifier. Based on the output of the first step, we build another cluster of
classifiers to predict the price separations between the top two bids. We
showed that although the high-value auctions are only a small portion of all
the traffic, successfully identifying them and setting the correct reserve price
would result in a significant revenue lift. Moreover, our optimization is
compatible with all other reserve price models in the system and does not
impact their performance. In other words, when combined with other models, the
enhancement on exchange revenue will be aggregated. Simulations on randomly
sampled Yahoo ads exchange (YAXR) data showed stable and expected lift after
applying our model.
",liang wang,,2017.0,10.1145/3124749.3124760,arXiv,Xie2017,True,,arXiv,Not available,"Optimal Reserve Price for Online Ads Trading Based on Inventory
Identification",c72343c5588fa46d4e816078bf4783cc,http://arxiv.org/abs/1709.10388v1
16637," We address online learning in complex auction settings, such as sponsored
search auctions, where the value of the bidder is unknown to her, evolving in
an arbitrary manner and observed only if the bidder wins an allocation. We
leverage the structure of the utility of the bidder and the partial feedback
that bidders typically receive in auctions, in order to provide algorithms with
regret rates against the best fixed bid in hindsight that are exponentially
faster, in terms of their dependence on the action space, than what
would have been derived by applying a generic bandit algorithm and almost
equivalent to what would have been achieved in the full information setting.
Our results are enabled by analyzing a new online learning setting with
outcome-based feedback, which generalizes learning with feedback graphs. We
provide an online learning algorithm for this setting, of independent interest,
with regret that grows only logarithmically with the number of actions and
linearly only in the number of potential outcomes (the latter being very small
in most auction settings). Last but not least, we show that our algorithm
outperforms the bandit approach experimentally and that this performance is
robust to dropping some of our theoretical assumptions or introducing noise in
the feedback that the bidder receives.
",zhe feng,,2017.0,,arXiv,Feng2017,True,,arXiv,Not available,Learning to Bid Without Knowing your Value,e837d4fdaab03e7fb0a69489561ed1b8,http://arxiv.org/abs/1711.01333v5
16638," We address online learning in complex auction settings, such as sponsored
search auctions, where the value of the bidder is unknown to her, evolving in
an arbitrary manner and observed only if the bidder wins an allocation. We
leverage the structure of the utility of the bidder and the partial feedback
that bidders typically receive in auctions, in order to provide algorithms with
regret rates against the best fixed bid in hindsight that are exponentially
faster, in terms of their dependence on the action space, than what
would have been derived by applying a generic bandit algorithm and almost
equivalent to what would have been achieved in the full information setting.
Our results are enabled by analyzing a new online learning setting with
outcome-based feedback, which generalizes learning with feedback graphs. We
provide an online learning algorithm for this setting, of independent interest,
with regret that grows only logarithmically with the number of actions and
linearly only in the number of potential outcomes (the latter being very small
in most auction settings). Last but not least, we show that our algorithm
outperforms the bandit approach experimentally and that this performance is
robust to dropping some of our theoretical assumptions or introducing noise in
the feedback that the bidder receives.
",chara podimata,,2017.0,,arXiv,Feng2017,True,,arXiv,Not available,Learning to Bid Without Knowing your Value,e837d4fdaab03e7fb0a69489561ed1b8,http://arxiv.org/abs/1711.01333v5
16639," We address online learning in complex auction settings, such as sponsored
search auctions, where the value of the bidder is unknown to her, evolving in
an arbitrary manner and observed only if the bidder wins an allocation. We
leverage the structure of the utility of the bidder and the partial feedback
that bidders typically receive in auctions, in order to provide algorithms with
regret rates against the best fixed bid in hindsight that are exponentially
faster, in terms of their dependence on the action space, than what
would have been derived by applying a generic bandit algorithm and almost
equivalent to what would have been achieved in the full information setting.
Our results are enabled by analyzing a new online learning setting with
outcome-based feedback, which generalizes learning with feedback graphs. We
provide an online learning algorithm for this setting, of independent interest,
with regret that grows only logarithmically with the number of actions and
linearly only in the number of potential outcomes (the latter being very small
in most auction settings). Last but not least, we show that our algorithm
outperforms the bandit approach experimentally and that this performance is
robust to dropping some of our theoretical assumptions or introducing noise in
the feedback that the bidder receives.
",vasilis syrgkanis,,2017.0,,arXiv,Feng2017,True,,arXiv,Not available,Learning to Bid Without Knowing your Value,e837d4fdaab03e7fb0a69489561ed1b8,http://arxiv.org/abs/1711.01333v5
16640," We consider two-person zero-sum stochastic mean payoff games with perfect
information, or BWR-games, given by a digraph $G = (V, E)$, with local rewards
$r: E \to \ZZ$, and three types of positions: black $V_B$, white $V_W$, and
random $V_R$ forming a partition of $V$. It is a long-standing open question
whether a polynomial-time algorithm for BWR-games exists, even when
$|V_R|=0$. In fact, a pseudo-polynomial algorithm for BWR-games would already
imply their polynomial solvability. In this paper, we show that BWR-games with
a constant number of random positions can be solved in pseudo-polynomial time.
More precisely, in any BWR-game with $|V_R|=O(1)$, a saddle point in uniformly
optimal pure stationary strategies can be found in time polynomial in
$|V_W|+|V_B|$, the maximum absolute local reward, and the common denominator of
the transition probabilities.
",endre boros,,2015.0,,arXiv,Boros2015,True,,arXiv,Not available,"A Pseudo-Polynomial Algorithm for Mean Payoff Stochastic Games with
Perfect Information and Few Random Positions",a882f61b79b99f198e4d2f17882c5d33,http://arxiv.org/abs/1508.03431v2
16641," We study an abstract optimal auction problem for a single good or service.
This problem includes environments where agents have budgets, risk preferences,
or multi-dimensional preferences over several possible configurations of the
good (furthermore, it allows an agent's budget and risk preference to be known
only privately to the agent). These are the main challenge areas for auction
theory. A single-agent problem is to optimize a given objective subject to a
constraint on the maximum probability with which each type is allocated,
a.k.a., an allocation rule. Our approach is a reduction from the multi-agent
mechanism design problem to a collection of single-agent problems. We focus on
maximizing revenue, but our results can be applied to other objectives (e.g.,
welfare).
An optimal multi-agent mechanism can be computed by a linear/convex program
on interim allocation rules by simultaneously optimizing several single-agent
mechanisms subject to joint feasibility of the allocation rules. For
single-unit auctions, Border \citeyearpar{B91} showed that the space of all
jointly feasible interim allocation rules for $n$ agents is a
$\NumTypes$-dimensional convex polytope which can be specified by $2^\NumTypes$
linear constraints, where $\NumTypes$ is the total number of all agents' types.
Consequently, efficiently solving the mechanism design problem requires a
separation oracle for the feasibility conditions and also an algorithm for
ex-post implementation of the interim allocation rules. We show that the
polytope of jointly feasible interim allocation rules is the projection of a
higher dimensional polytope which can be specified by only $O(\NumTypes^2)$
linear constraints. Furthermore, our proof shows that finding a preimage of the
interim allocation rules in the higher dimensional polytope immediately gives
an ex-post implementation.
",saeed alaei,,2012.0,,arXiv,Alaei2012,True,,arXiv,Not available,Bayesian Optimal Auctions via Multi- to Single-agent Reduction,0eb1ad886409a33b765f7296d8b829c5,http://arxiv.org/abs/1203.5099v1
16642," We study an abstract optimal auction problem for a single good or service.
This problem includes environments where agents have budgets, risk preferences,
or multi-dimensional preferences over several possible configurations of the
good (furthermore, it allows an agent's budget and risk preference to be known
only privately to the agent). These are the main challenge areas for auction
theory. A single-agent problem is to optimize a given objective subject to a
constraint on the maximum probability with which each type is allocated,
a.k.a., an allocation rule. Our approach is a reduction from the multi-agent
mechanism design problem to a collection of single-agent problems. We focus on
maximizing revenue, but our results can be applied to other objectives (e.g.,
welfare).
An optimal multi-agent mechanism can be computed by a linear/convex program
on interim allocation rules by simultaneously optimizing several single-agent
mechanisms subject to joint feasibility of the allocation rules. For
single-unit auctions, Border \citeyearpar{B91} showed that the space of all
jointly feasible interim allocation rules for $n$ agents is a
$\NumTypes$-dimensional convex polytope which can be specified by $2^\NumTypes$
linear constraints, where $\NumTypes$ is the total number of all agents' types.
Consequently, efficiently solving the mechanism design problem requires a
separation oracle for the feasibility conditions and also an algorithm for
ex-post implementation of the interim allocation rules. We show that the
polytope of jointly feasible interim allocation rules is the projection of a
higher dimensional polytope which can be specified by only $O(\NumTypes^2)$
linear constraints. Furthermore, our proof shows that finding a preimage of the
interim allocation rules in the higher dimensional polytope immediately gives
an ex-post implementation.
",hu fu,,2012.0,,arXiv,Alaei2012,True,,arXiv,Not available,Bayesian Optimal Auctions via Multi- to Single-agent Reduction,0eb1ad886409a33b765f7296d8b829c5,http://arxiv.org/abs/1203.5099v1
16643," We study an abstract optimal auction problem for a single good or service.
This problem includes environments where agents have budgets, risk preferences,
or multi-dimensional preferences over several possible configurations of the
good (furthermore, it allows an agent's budget and risk preference to be known
only privately to the agent). These are the main challenge areas for auction
theory. A single-agent problem is to optimize a given objective subject to a
constraint on the maximum probability with which each type is allocated,
a.k.a., an allocation rule. Our approach is a reduction from the multi-agent
mechanism design problem to a collection of single-agent problems. We focus on
maximizing revenue, but our results can be applied to other objectives (e.g.,
welfare).
An optimal multi-agent mechanism can be computed by a linear/convex program
on interim allocation rules by simultaneously optimizing several single-agent
mechanisms subject to joint feasibility of the allocation rules. For
single-unit auctions, Border \citeyearpar{B91} showed that the space of all
jointly feasible interim allocation rules for $n$ agents is a
$\NumTypes$-dimensional convex polytope which can be specified by $2^\NumTypes$
linear constraints, where $\NumTypes$ is the total number of all agents' types.
Consequently, efficiently solving the mechanism design problem requires a
separation oracle for the feasibility conditions and also an algorithm for
ex-post implementation of the interim allocation rules. We show that the
polytope of jointly feasible interim allocation rules is the projection of a
higher dimensional polytope which can be specified by only $O(\NumTypes^2)$
linear constraints. Furthermore, our proof shows that finding a preimage of the
interim allocation rules in the higher dimensional polytope immediately gives
an ex-post implementation.
",nima haghpanah,,2012.0,,arXiv,Alaei2012,True,,arXiv,Not available,Bayesian Optimal Auctions via Multi- to Single-agent Reduction,0eb1ad886409a33b765f7296d8b829c5,http://arxiv.org/abs/1203.5099v1
16644," We study an abstract optimal auction problem for a single good or service.
This problem includes environments where agents have budgets, risk preferences,
or multi-dimensional preferences over several possible configurations of the
good (furthermore, it allows an agent's budget and risk preference to be known
only privately to the agent). These are the main challenge areas for auction
theory. A single-agent problem is to optimize a given objective subject to a
constraint on the maximum probability with which each type is allocated,
a.k.a., an allocation rule. Our approach is a reduction from the multi-agent
mechanism design problem to a collection of single-agent problems. We focus on
maximizing revenue, but our results can be applied to other objectives (e.g.,
welfare).
An optimal multi-agent mechanism can be computed by a linear/convex program
on interim allocation rules by simultaneously optimizing several single-agent
mechanisms subject to joint feasibility of the allocation rules. For
single-unit auctions, Border \citeyearpar{B91} showed that the space of all
jointly feasible interim allocation rules for $n$ agents is a
$\NumTypes$-dimensional convex polytope which can be specified by $2^\NumTypes$
linear constraints, where $\NumTypes$ is the total number of all agents' types.
Consequently, efficiently solving the mechanism design problem requires a
separation oracle for the feasibility conditions and also an algorithm for
ex-post implementation of the interim allocation rules. We show that the
polytope of jointly feasible interim allocation rules is the projection of a
higher dimensional polytope which can be specified by only $O(\NumTypes^2)$
linear constraints. Furthermore, our proof shows that finding a preimage of the
interim allocation rules in the higher dimensional polytope immediately gives
an ex-post implementation.
",jason hartline,,2012.0,,arXiv,Alaei2012,True,,arXiv,Not available,Bayesian Optimal Auctions via Multi- to Single-agent Reduction,0eb1ad886409a33b765f7296d8b829c5,http://arxiv.org/abs/1203.5099v1
16645," We study an abstract optimal auction problem for a single good or service.
This problem includes environments where agents have budgets, risk preferences,
or multi-dimensional preferences over several possible configurations of the
good (furthermore, it allows an agent's budget and risk preference to be known
only privately to the agent). These are the main challenge areas for auction
theory. A single-agent problem is to optimize a given objective subject to a
constraint on the maximum probability with which each type is allocated,
a.k.a., an allocation rule. Our approach is a reduction from the multi-agent
mechanism design problem to a collection of single-agent problems. We focus on
maximizing revenue, but our results can be applied to other objectives (e.g.,
welfare).
An optimal multi-agent mechanism can be computed by a linear/convex program
on interim allocation rules by simultaneously optimizing several single-agent
mechanisms subject to joint feasibility of the allocation rules. For
single-unit auctions, Border \citeyearpar{B91} showed that the space of all
jointly feasible interim allocation rules for $n$ agents is a
$\NumTypes$-dimensional convex polytope which can be specified by $2^\NumTypes$
linear constraints, where $\NumTypes$ is the total number of all agents' types.
Consequently, efficiently solving the mechanism design problem requires a
separation oracle for the feasibility conditions and also an algorithm for
ex-post implementation of the interim allocation rules. We show that the
polytope of jointly feasible interim allocation rules is the projection of a
higher dimensional polytope which can be specified by only $O(\NumTypes^2)$
linear constraints. Furthermore, our proof shows that finding a preimage of the
interim allocation rules in the higher dimensional polytope immediately gives
an ex-post implementation.
",azarakhsh malekian,,2012.0,,arXiv,Alaei2012,True,,arXiv,Not available,Bayesian Optimal Auctions via Multi- to Single-agent Reduction,0eb1ad886409a33b765f7296d8b829c5,http://arxiv.org/abs/1203.5099v1
16646," The first-price auction is popular in practice for its simplicity and
transparency. Moreover, its potential virtues grow in complex settings where
incentive compatible auctions may generate little or no revenue. Unfortunately,
the first-price auction is poorly understood in theory because equilibrium is
not {\em a priori} a credible predictor of bidder behavior.
We take a dynamic approach to studying first-price auctions: rather than
basing performance guarantees solely on static equilibria, we study the
repeated setting and show that robust performance guarantees may be derived
from simple axioms of bidder behavior. For example, as long as a loser raises
her bid quickly, a standard first-price auction will generate at least as much
revenue as a second-price auction. We generalize this dynamic technique to
complex pay-your-bid auction settings and show that progressively stronger
assumptions about bidder behavior imply progressively stronger guarantees about
the auction's performance.
Along the way, we find that the auctioneer's choice of bidding language is
critical when generalizing beyond the single-item setting, and we propose a
specific construction called the {\em utility-target auction} that performs
well. The utility-target auction includes a bidder's final utility as an
additional parameter, identifying the single dimension along which she wishes
to compete. This auction is closely related to profit-target bidding in
first-price and ascending proxy package auctions and gives strong revenue
guarantees for a variety of complex auction environments. Of particular
interest, the guaranteed existence of a pure-strategy equilibrium in the
utility-target auction shows how Overture might have eliminated the cyclic
behavior in their generalized first-price sponsored search auction if bidders
could have placed more sophisticated bids.
",darrell hoy,,2013.0,,arXiv,Hoy2013,True,,arXiv,Not available,A Dynamic Axiomatic Approach to First-Price Auctions,e3865d4807693c96e9d0ae19f29e5cd7,http://arxiv.org/abs/1304.7718v1
16647," The first-price auction is popular in practice for its simplicity and
transparency. Moreover, its potential virtues grow in complex settings where
incentive compatible auctions may generate little or no revenue. Unfortunately,
the first-price auction is poorly understood in theory because equilibrium is
not {\em a priori} a credible predictor of bidder behavior.
We take a dynamic approach to studying first-price auctions: rather than
basing performance guarantees solely on static equilibria, we study the
repeated setting and show that robust performance guarantees may be derived
from simple axioms of bidder behavior. For example, as long as a loser raises
her bid quickly, a standard first-price auction will generate at least as much
revenue as a second-price auction. We generalize this dynamic technique to
complex pay-your-bid auction settings and show that progressively stronger
assumptions about bidder behavior imply progressively stronger guarantees about
the auction's performance.
Along the way, we find that the auctioneer's choice of bidding language is
critical when generalizing beyond the single-item setting, and we propose a
specific construction called the {\em utility-target auction} that performs
well. The utility-target auction includes a bidder's final utility as an
additional parameter, identifying the single dimension along which she wishes
to compete. This auction is closely related to profit-target bidding in
first-price and ascending proxy package auctions and gives strong revenue
guarantees for a variety of complex auction environments. Of particular
interest, the guaranteed existence of a pure-strategy equilibrium in the
utility-target auction shows how Overture might have eliminated the cyclic
behavior in their generalized first-price sponsored search auction if bidders
could have placed more sophisticated bids.
",kamal jain,,2013.0,,arXiv,Hoy2013,True,,arXiv,Not available,A Dynamic Axiomatic Approach to First-Price Auctions,e3865d4807693c96e9d0ae19f29e5cd7,http://arxiv.org/abs/1304.7718v1
16648," The first-price auction is popular in practice for its simplicity and
transparency. Moreover, its potential virtues grow in complex settings where
incentive compatible auctions may generate little or no revenue. Unfortunately,
the first-price auction is poorly understood in theory because equilibrium is
not {\em a priori} a credible predictor of bidder behavior.
We take a dynamic approach to studying first-price auctions: rather than
basing performance guarantees solely on static equilibria, we study the
repeated setting and show that robust performance guarantees may be derived
from simple axioms of bidder behavior. For example, as long as a loser raises
her bid quickly, a standard first-price auction will generate at least as much
revenue as a second-price auction. We generalize this dynamic technique to
complex pay-your-bid auction settings and show that progressively stronger
assumptions about bidder behavior imply progressively stronger guarantees about
the auction's performance.
Along the way, we find that the auctioneer's choice of bidding language is
critical when generalizing beyond the single-item setting, and we propose a
specific construction called the {\em utility-target auction} that performs
well. The utility-target auction includes a bidder's final utility as an
additional parameter, identifying the single dimension along which she wishes
to compete. This auction is closely related to profit-target bidding in
first-price and ascending proxy package auctions and gives strong revenue
guarantees for a variety of complex auction environments. Of particular
interest, the guaranteed existence of a pure-strategy equilibrium in the
utility-target auction shows how Overture might have eliminated the cyclic
behavior in their generalized first-price sponsored search auction if bidders
could have placed more sophisticated bids.
",christopher wilkens,,2013.0,,arXiv,Hoy2013,True,,arXiv,Not available,A Dynamic Axiomatic Approach to First-Price Auctions,e3865d4807693c96e9d0ae19f29e5cd7,http://arxiv.org/abs/1304.7718v1
16649," With the recent growth in the size of cloud computing business, handling the
interactions between customers and cloud providers has become more challenging.
Auction theory has been proposed to model these interactions due to its
simplicity and a good match with real-world scenarios. In this paper, we
consider cloud of clouds networks (CCNs) with different types of servers along
with customers with heterogeneous demands. For each CCN, a CCN manager is
designated to handle the cloud resources. A comprehensive framework is
introduced in which the process of resource gathering and allocation is
addressed via two stages, where the first stage models the interactions between
customers and CCN managers, and the second stage examines the interactions
between CCN managers and private cloud providers (CPs). For the first stage, an
options-based sequential auction (OBSA) is adapted to the examined market,
which is capable of providing truthfulness as the dominant strategy and
resolving the entrance time problem. An analytical foundation for OBSAs is
presented and multiple performance metrics are derived. For the second stage,
two parallel markets are assumed: flat-price and auction-based market. A
theoretical framework for market analysis is provided and the bidding behavior
of CCN managers is described.
",seyyedali hosseinalipour,,2018.0,,arXiv,Hosseinalipour2018,True,,arXiv,Not available,A Two-Stage Auction Mechanism for Cloud Resource Allocation,e6756a574be5d43a55091881d3add347,http://arxiv.org/abs/1807.04214v2
16650," With the recent growth in the size of cloud computing business, handling the
interactions between customers and cloud providers has become more challenging.
Auction theory has been proposed to model these interactions due to its
simplicity and a good match with real-world scenarios. In this paper, we
consider cloud of clouds networks (CCNs) with different types of servers along
with customers with heterogeneous demands. For each CCN, a CCN manager is
designated to handle the cloud resources. A comprehensive framework is
introduced in which the process of resource gathering and allocation is
addressed via two stages, where the first stage models the interactions between
customers and CCN managers, and the second stage examines the interactions
between CCN managers and private cloud providers (CPs). For the first stage, an
options-based sequential auction (OBSA) is adapted to the examined market,
which is capable of providing truthfulness as the dominant strategy and
resolving the entrance time problem. An analytical foundation for OBSAs is
presented and multiple performance metrics are derived. For the second stage,
two parallel markets are assumed: flat-price and auction-based market. A
theoretical framework for market analysis is provided and the bidding behavior
of CCN managers is described.
",huaiyu dai,,2018.0,,arXiv,Hosseinalipour2018,True,,arXiv,Not available,A Two-Stage Auction Mechanism for Cloud Resource Allocation,e6756a574be5d43a55091881d3add347,http://arxiv.org/abs/1807.04214v2
16651," We consider two-person zero-sum stochastic mean payoff games with perfect
information, or BWR-games, given by a digraph $G = (V, E)$, with local rewards
$r: E \to \ZZ$, and three types of positions: black $V_B$, white $V_W$, and
random $V_R$ forming a partition of $V$. It is a long-standing open question
whether a polynomial time algorithm for BWR-games exists, or not, even when
$|V_R|=0$. In fact, a pseudo-polynomial algorithm for BWR-games would already
imply their polynomial solvability. In this paper, we show that BWR-games with
a constant number of random positions can be solved in pseudo-polynomial time.
More precisely, in any BWR-game with $|V_R|=O(1)$, a saddle point in uniformly
optimal pure stationary strategies can be found in time polynomial in
$|V_W|+|V_B|$, the maximum absolute local reward, and the common denominator of
the transition probabilities.
",khaled elbassioni,,2015.0,,arXiv,Boros2015,True,,arXiv,Not available,"A Pseudo-Polynomial Algorithm for Mean Payoff Stochastic Games with
Perfect Information and Few Random Positions",a882f61b79b99f198e4d2f17882c5d33,http://arxiv.org/abs/1508.03431v2
16652," Previous works suggested the use of Branch and Bound techniques for finding
the optimal allocation in (multi-unit) combinatorial auctions. They remarked
that Linear Programming could provide a good upper-bound to the optimal
allocation, but they went on using lighter and less tight upper-bound
heuristics, on the grounds that LP was too time-consuming to be used
repetitively to solve large combinatorial auctions. We present the results of
extensive experiments solving large (multi-unit) combinatorial auctions
generated according to distributions proposed by different researchers. Our
surprising conclusion is that Linear Programming is worth using. Investing
almost all of one's computing time in using LP to bound from above the value of
the optimal solution in order to prune aggressively pays off. We present a way
to save on the number of calls to the LP routine and experimental results
comparing different heuristics for choosing the bid to be considered next.
Those results show that the ordering based on the square root of the size of
the bids that was shown to be theoretically optimal in a previous paper by the
authors performs surprisingly better than others in practice. Choosing to deal
first with the bid with largest coefficient (typically 1) in the optimal
solution of the relaxed LP problem, is also a good choice. The gap between the
lower bound provided by greedy heuristics and the upper bound provided by LP is
typically small and pruning is therefore extensive. For most distributions,
auctions of a few hundred goods among a few thousand bids can be solved in
practice. All experiments were run on a PC under Matlab.
",rica gonen,,2002.0,,arXiv,Gonen2002,True,,arXiv,Not available,Linear Programming helps solving large multi-unit combinatorial auctions,f0cec6aca985105d31dfc4b77338322b,http://arxiv.org/abs/cs/0202016v1
16653," Previous works suggested the use of Branch and Bound techniques for finding
the optimal allocation in (multi-unit) combinatorial auctions. They remarked
that Linear Programming could provide a good upper-bound to the optimal
allocation, but they went on using lighter and less tight upper-bound
heuristics, on the grounds that LP was too time-consuming to be used
repetitively to solve large combinatorial auctions. We present the results of
extensive experiments solving large (multi-unit) combinatorial auctions
generated according to distributions proposed by different researchers. Our
surprising conclusion is that Linear Programming is worth using. Investing
almost all of one's computing time in using LP to bound from above the value of
the optimal solution in order to prune aggressively pays off. We present a way
to save on the number of calls to the LP routine and experimental results
comparing different heuristics for choosing the bid to be considered next.
Those results show that the ordering based on the square root of the size of
the bids that was shown to be theoretically optimal in a previous paper by the
authors performs surprisingly better than others in practice. Choosing to deal
first with the bid with largest coefficient (typically 1) in the optimal
solution of the relaxed LP problem, is also a good choice. The gap between the
lower bound provided by greedy heuristics and the upper bound provided by LP is
typically small and pruning is therefore extensive. For most distributions,
auctions of a few hundred goods among a few thousand bids can be solved in
practice. All experiments were run on a PC under Matlab.
",daniel lehmann,,2002.0,,arXiv,Gonen2002,True,,arXiv,Not available,Linear Programming helps solving large multi-unit combinatorial auctions,f0cec6aca985105d31dfc4b77338322b,http://arxiv.org/abs/cs/0202016v1
16654," In this paper, we consider the problem of designing incentive compatible
auctions for multiple (homogeneous) units of a good, when bidders have private
valuations and private budget constraints. When only the valuations are private
and the budgets are public, Dobzinski {\em et al} show that the {\em adaptive
clinching} auction is the unique incentive-compatible auction achieving
Pareto-optimality. They further show that there is no deterministic
Pareto-optimal auction with private budgets. Our main contribution is to show
the following Budget Monotonicity property of this auction: When there is only
one infinitely divisible good, a bidder cannot improve her utility by reporting
a budget smaller than the truth. This implies that a randomized modification to
the adaptive clinching auction is incentive compatible and Pareto-optimal with
private budgets.
The Budget Monotonicity property also implies other improved results in this
context. For revenue maximization, the same auction improves the best-known
competitive ratio due to Abrams by a factor of 4, and asymptotically approaches
the performance of the optimal single-price auction.
Finally, we consider the problem of revenue maximization (or social welfare)
in a Bayesian setting. We allow the bidders to have public size constraints (on
the amount of good they are willing to buy) in addition to private budget
constraints. We show a simple poly-time computable 5.83-approximation to the
optimal Bayesian incentive compatible mechanism, that is implementable in
dominant strategies. Our technique again crucially needs the ability to prevent
bidders from over-reporting budgets via randomization.
",sayan bhattacharya,,2009.0,,arXiv,Bhattacharya2009,True,,arXiv,Not available,Incentive Compatible Budget Elicitation in Multi-unit Auctions,c2f97d5a56bdb96e344b3b7394c84ae3,http://arxiv.org/abs/0904.3501v1
16655," In this paper, we consider the problem of designing incentive compatible
auctions for multiple (homogeneous) units of a good, when bidders have private
valuations and private budget constraints. When only the valuations are private
and the budgets are public, Dobzinski {\em et al} show that the {\em adaptive
clinching} auction is the unique incentive-compatible auction achieving
Pareto-optimality. They further show that there is no deterministic
Pareto-optimal auction with private budgets. Our main contribution is to show
the following Budget Monotonicity property of this auction: When there is only
one infinitely divisible good, a bidder cannot improve her utility by reporting
a budget smaller than the truth. This implies that a randomized modification to
the adaptive clinching auction is incentive compatible and Pareto-optimal with
private budgets.
The Budget Monotonicity property also implies other improved results in this
context. For revenue maximization, the same auction improves the best-known
competitive ratio due to Abrams by a factor of 4, and asymptotically approaches
the performance of the optimal single-price auction.
Finally, we consider the problem of revenue maximization (or social welfare)
in a Bayesian setting. We allow the bidders to have public size constraints (on
the amount of good they are willing to buy) in addition to private budget
constraints. We show a simple poly-time computable 5.83-approximation to the
optimal Bayesian incentive compatible mechanism, that is implementable in
dominant strategies. Our technique again crucially needs the ability to prevent
bidders from over-reporting budgets via randomization.
",vincent conitzer,,2009.0,,arXiv,Bhattacharya2009,True,,arXiv,Not available,Incentive Compatible Budget Elicitation in Multi-unit Auctions,c2f97d5a56bdb96e344b3b7394c84ae3,http://arxiv.org/abs/0904.3501v1
16656," In this paper, we consider the problem of designing incentive compatible
auctions for multiple (homogeneous) units of a good, when bidders have private
valuations and private budget constraints. When only the valuations are private
and the budgets are public, Dobzinski {\em et al} show that the {\em adaptive
clinching} auction is the unique incentive-compatible auction achieving
Pareto-optimality. They further show that there is no deterministic
Pareto-optimal auction with private budgets. Our main contribution is to show
the following Budget Monotonicity property of this auction: When there is only
one infinitely divisible good, a bidder cannot improve her utility by reporting
a budget smaller than the truth. This implies that a randomized modification to
the adaptive clinching auction is incentive compatible and Pareto-optimal with
private budgets.
The Budget Monotonicity property also implies other improved results in this
context. For revenue maximization, the same auction improves the best-known
competitive ratio due to Abrams by a factor of 4, and asymptotically approaches
the performance of the optimal single-price auction.
Finally, we consider the problem of revenue maximization (or social welfare)
in a Bayesian setting. We allow the bidders to have public size constraints (on
the amount of good they are willing to buy) in addition to private budget
constraints. We show a simple poly-time computable 5.83-approximation to the
optimal Bayesian incentive compatible mechanism, that is implementable in
dominant strategies. Our technique again crucially needs the ability to prevent
bidders from over-reporting budgets via randomization.
",kamesh munagala,,2009.0,,arXiv,Bhattacharya2009,True,,arXiv,Not available,Incentive Compatible Budget Elicitation in Multi-unit Auctions,c2f97d5a56bdb96e344b3b7394c84ae3,http://arxiv.org/abs/0904.3501v1
16657," In this paper, we consider the problem of designing incentive compatible
auctions for multiple (homogeneous) units of a good, when bidders have private
valuations and private budget constraints. When only the valuations are private
and the budgets are public, Dobzinski {\em et al} show that the {\em adaptive
clinching} auction is the unique incentive-compatible auction achieving
Pareto-optimality. They further show that there is no deterministic
Pareto-optimal auction with private budgets. Our main contribution is to show
the following Budget Monotonicity property of this auction: When there is only
one infinitely divisible good, a bidder cannot improve her utility by reporting
a budget smaller than the truth. This implies that a randomized modification to
the adaptive clinching auction is incentive compatible and Pareto-optimal with
private budgets.
The Budget Monotonicity property also implies other improved results in this
context. For revenue maximization, the same auction improves the best-known
competitive ratio due to Abrams by a factor of 4, and asymptotically approaches
the performance of the optimal single-price auction.
Finally, we consider the problem of revenue maximization (or social welfare)
in a Bayesian setting. We allow the bidders to have public size constraints (on
the amount of good they are willing to buy) in addition to private budget
constraints. We show a simple poly-time computable 5.83-approximation to the
optimal Bayesian incentive compatible mechanism, that is implementable in
dominant strategies. Our technique again crucially needs the ability to prevent
bidders from over-reporting budgets via randomization.
",lirong xia,,2009.0,,arXiv,Bhattacharya2009,True,,arXiv,Not available,Incentive Compatible Budget Elicitation in Multi-unit Auctions,c2f97d5a56bdb96e344b3b7394c84ae3,http://arxiv.org/abs/0904.3501v1
16658," We study truthful mechanisms for hiring a team of agents in three classes of
set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For
Vertex Cover auctions, the vertices are owned by selfish and rational agents,
and the auctioneer wants to purchase a vertex cover from them. For k-flow
auctions, the edges are owned by the agents, and the auctioneer wants to
purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for
cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know
their costs, and the auctioneer needs to select a feasible set and payments
based on bids made by the agents.
We present constant-competitive truthful mechanisms for all three set
systems. That is, the maximum overpayment of the mechanism is within a constant
factor of the maximum overpayment of any truthful mechanism, for every set
system in the class. The mechanism for Vertex Cover is based on scaling each
bid by a multiplier derived from the dominant eigenvector of a certain matrix.
The mechanism for k-flows prunes the graph to be minimally (k+1)-connected, and
then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts
contracts the graph until all s-t paths have length exactly 2, and then applies
the Vertex Cover mechanism.
",david kempe,,2009.0,,arXiv,Kempe2009,True,,arXiv,Not available,"Frugal and Truthful Auctions for Vertex Covers, Flows, and Cuts",c03b2cc6986525592437afb81b43c5e6,http://arxiv.org/abs/0912.3310v2
16659," We study truthful mechanisms for hiring a team of agents in three classes of
set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For
Vertex Cover auctions, the vertices are owned by selfish and rational agents,
and the auctioneer wants to purchase a vertex cover from them. For k-flow
auctions, the edges are owned by the agents, and the auctioneer wants to
purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for
cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know
their costs, and the auctioneer needs to select a feasible set and payments
based on bids made by the agents.
We present constant-competitive truthful mechanisms for all three set
systems. That is, the maximum overpayment of the mechanism is within a constant
factor of the maximum overpayment of any truthful mechanism, for every set
system in the class. The mechanism for Vertex Cover is based on scaling each
bid by a multiplier derived from the dominant eigenvector of a certain matrix.
The mechanism for k-flows prunes the graph to be minimally (k+1)-connected, and
then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts
contracts the graph until all s-t paths have length exactly 2, and then applies
the Vertex Cover mechanism.
",mahyar salek,,2009.0,,arXiv,Kempe2009,True,,arXiv,Not available,"Frugal and Truthful Auctions for Vertex Covers, Flows, and Cuts",c03b2cc6986525592437afb81b43c5e6,http://arxiv.org/abs/0912.3310v2
16660," We study truthful mechanisms for hiring a team of agents in three classes of
set systems: Vertex Cover auctions, k-flow auctions, and cut auctions. For
Vertex Cover auctions, the vertices are owned by selfish and rational agents,
and the auctioneer wants to purchase a vertex cover from them. For k-flow
auctions, the edges are owned by the agents, and the auctioneer wants to
purchase k edge-disjoint s-t paths, for given s and t. In the same setting, for
cut auctions, the auctioneer wants to purchase an s-t cut. Only the agents know
their costs, and the auctioneer needs to select a feasible set and payments
based on bids made by the agents.
We present constant-competitive truthful mechanisms for all three set
systems. That is, the maximum overpayment of the mechanism is within a constant
factor of the maximum overpayment of any truthful mechanism, for every set
system in the class. The mechanism for Vertex Cover is based on scaling each
bid by a multiplier derived from the dominant eigenvector of a certain matrix.
The mechanism for k-flows prunes the graph to be minimally (k+1)-connected, and
then applies the Vertex Cover mechanism. Similarly, the mechanism for cuts
contracts the graph until all s-t paths have length exactly 2, and then applies
the Vertex Cover mechanism.
",cristopher moore,,2009.0,,arXiv,Kempe2009,True,,arXiv,Not available,"Frugal and Truthful Auctions for Vertex Covers, Flows, and Cuts",c03b2cc6986525592437afb81b43c5e6,http://arxiv.org/abs/0912.3310v2
16661," In pay-per click sponsored search auctions which are currently extensively
used by search engines, the auction for a keyword involves a certain number of
advertisers (say k) competing for available slots (say m) to display their ads.
This auction is typically conducted for a number of rounds (say T). There are
click probabilities mu_ij associated with each agent-slot pair. The goal of
the search engine is to maximize social welfare of the advertisers, that is,
the sum of values of the advertisers. The search engine does not know the true
values advertisers have for a click to their respective ads and also does not
know the click probabilities mu_ij. A key problem for the search engine
therefore is to learn these click probabilities during the T rounds of the
auction and also to ensure that the auction mechanism is truthful. Mechanisms
for addressing such learning and incentives issues have recently been
introduced and are aptly referred to as multi-armed-bandit (MAB) mechanisms.
When m = 1, characterizations for truthful MAB mechanisms are available in the
literature and it has been shown that the regret for such mechanisms will be
O(T^{2/3}). In this paper, we seek to derive a characterization in the
realistic but non-trivial general case when m > 1 and obtain several
interesting results.
",akash sarma,,2010.0,,arXiv,Sarma2010,True,,arXiv,Not available,Multi-Armed Bandit Mechanisms for Multi-Slot Sponsored Search Auctions,b0d933951a71f82f1c5bc09c968178e2,http://arxiv.org/abs/1001.1414v2
16662," We consider two-person zero-sum stochastic mean payoff games with perfect
information, or BWR-games, given by a digraph $G = (V, E)$, with local rewards
$r: E \to \ZZ$, and three types of positions: black $V_B$, white $V_W$, and
random $V_R$ forming a partition of $V$. It is a long-standing open question
whether a polynomial time algorithm for BWR-games exists, or not, even when
$|V_R|=0$. In fact, a pseudo-polynomial algorithm for BWR-games would already
imply their polynomial solvability. In this paper, we show that BWR-games with
a constant number of random positions can be solved in pseudo-polynomial time.
More precisely, in any BWR-game with $|V_R|=O(1)$, a saddle point in uniformly
optimal pure stationary strategies can be found in time polynomial in
$|V_W|+|V_B|$, the maximum absolute local reward, and the common denominator of
the transition probabilities.
",vladimir gurvich,,2015.0,,arXiv,Boros2015,True,,arXiv,Not available,"A Pseudo-Polynomial Algorithm for Mean Payoff Stochastic Games with
Perfect Information and Few Random Positions",a882f61b79b99f198e4d2f17882c5d33,http://arxiv.org/abs/1508.03431v2
16663," In pay-per click sponsored search auctions which are currently extensively
used by search engines, the auction for a keyword involves a certain number of
advertisers (say k) competing for available slots (say m) to display their ads.
This auction is typically conducted for a number of rounds (say T). There are
click probabilities mu_ij associated with each agent-slot pair. The goal of
the search engine is to maximize social welfare of the advertisers, that is,
the sum of values of the advertisers. The search engine does not know the true
values advertisers have for a click to their respective ads and also does not
know the click probabilities mu_ij. A key problem for the search engine
therefore is to learn these click probabilities during the T rounds of the
auction and also to ensure that the auction mechanism is truthful. Mechanisms
for addressing such learning and incentives issues have recently been
introduced and are aptly referred to as multi-armed-bandit (MAB) mechanisms.
When m = 1, characterizations for truthful MAB mechanisms are available in the
literature and it has been shown that the regret for such mechanisms will be
O(T^{2/3}). In this paper, we seek to derive a characterization in the
realistic but non-trivial general case when m > 1 and obtain several
interesting results.
",sujit gujar,,2010.0,,arXiv,Sarma2010,True,,arXiv,Not available,Multi-Armed Bandit Mechanisms for Multi-Slot Sponsored Search Auctions,b0d933951a71f82f1c5bc09c968178e2,http://arxiv.org/abs/1001.1414v2
16664," In pay-per click sponsored search auctions which are currently extensively
used by search engines, the auction for a keyword involves a certain number of
advertisers (say k) competing for available slots (say m) to display their ads.
This auction is typically conducted for a number of rounds (say T). There are
click probabilities mu_ij associated with each agent-slot pair. The goal of
the search engine is to maximize social welfare of the advertisers, that is,
the sum of values of the advertisers. The search engine does not know the true
values advertisers have for a click to their respective ads and also does not
know the click probabilities mu_ij. A key problem for the search engine
therefore is to learn these click probabilities during the T rounds of the
auction and also to ensure that the auction mechanism is truthful. Mechanisms
for addressing such learning and incentives issues have recently been
introduced and are aptly referred to as multi-armed-bandit (MAB) mechanisms.
When m = 1, characterizations for truthful MAB mechanisms are available in the
literature and it has been shown that the regret for such mechanisms will be
O(T^{2/3}). In this paper, we seek to derive a characterization in the
realistic but non-trivial general case when m > 1 and obtain several
interesting results.
",y. narahari,,2010.0,,arXiv,Sarma2010,True,,arXiv,Not available,Multi-Armed Bandit Mechanisms for Multi-Slot Sponsored Search Auctions,b0d933951a71f82f1c5bc09c968178e2,http://arxiv.org/abs/1001.1414v2
16665," We design algorithms for computing approximately revenue-maximizing {\em
sequential posted-pricing mechanisms (SPM)} in $K$-unit auctions, in a standard
Bayesian model. A seller has $K$ copies of an item to sell, and there are $n$
buyers, each interested in only one copy, who have some value for the item. The
seller must post a price for each buyer, the buyers arrive in a sequence
enforced by the seller, and a buyer buys the item if its value exceeds the
price posted to it. The seller does not know the values of the buyers, but has
Bayesian information about them. An SPM specifies the ordering of buyers and
the posted prices, and may be {\em adaptive} or {\em non-adaptive} in its
behavior.
The goal is to design SPM in polynomial time to maximize expected revenue. We
compare against the expected revenue of optimal SPM, and provide a polynomial
time approximation scheme (PTAS) for both non-adaptive and adaptive SPMs. This
is achieved by two algorithms: an efficient algorithm that gives a
$(1-\frac{1}{\sqrt{2\pi K}})$-approximation (and hence a PTAS for sufficiently
large $K$), and another that is a PTAS for constant $K$. The first algorithm
yields a non-adaptive SPM that yields its approximation guarantees against an
optimal adaptive SPM -- this implies that the {\em adaptivity gap} in SPMs
vanishes as $K$ becomes larger.
",tanmoy chakraborty,,2010.0,,arXiv,Chakraborty2010,True,,arXiv,Not available,"Approximation Schemes for Sequential Posted Pricing in Multi-Unit
Auctions",f5bce1bf0c8a1241e9d02e10eb616ab1,http://arxiv.org/abs/1008.1616v1
16666," We design algorithms for computing approximately revenue-maximizing {\em
sequential posted-pricing mechanisms (SPM)} in $K$-unit auctions, in a standard
Bayesian model. A seller has $K$ copies of an item to sell, and there are $n$
buyers, each interested in only one copy, who have some value for the item. The
seller must post a price for each buyer, the buyers arrive in a sequence
enforced by the seller, and a buyer buys the item if its value exceeds the
price posted to it. The seller does not know the values of the buyers, but has
Bayesian information about them. An SPM specifies the ordering of buyers and
the posted prices, and may be {\em adaptive} or {\em non-adaptive} in its
behavior.
The goal is to design SPM in polynomial time to maximize expected revenue. We
compare against the expected revenue of optimal SPM, and provide a polynomial
time approximation scheme (PTAS) for both non-adaptive and adaptive SPMs. This
is achieved by two algorithms: an efficient algorithm that gives a
$(1-\frac{1}{\sqrt{2\pi K}})$-approximation (and hence a PTAS for sufficiently
large $K$), and another that is a PTAS for constant $K$. The first algorithm
yields a non-adaptive SPM that yields its approximation guarantees against an
optimal adaptive SPM -- this implies that the {\em adaptivity gap} in SPMs
vanishes as $K$ becomes larger.
",eyal even-dar,,2010.0,,arXiv,Chakraborty2010,True,,arXiv,Not available,"Approximation Schemes for Sequential Posted Pricing in Multi-Unit
Auctions",f5bce1bf0c8a1241e9d02e10eb616ab1,http://arxiv.org/abs/1008.1616v1
16667," We design algorithms for computing approximately revenue-maximizing {\em
sequential posted-pricing mechanisms (SPM)} in $K$-unit auctions, in a standard
Bayesian model. A seller has $K$ copies of an item to sell, and there are $n$
buyers, each interested in only one copy, who have some value for the item. The
seller must post a price for each buyer, the buyers arrive in a sequence
enforced by the seller, and a buyer buys the item if its value exceeds the
price posted to it. The seller does not know the values of the buyers, but has
Bayesian information about them. An SPM specifies the ordering of buyers and
the posted prices, and may be {\em adaptive} or {\em non-adaptive} in its
behavior.
The goal is to design SPM in polynomial time to maximize expected revenue. We
compare against the expected revenue of optimal SPM, and provide a polynomial
time approximation scheme (PTAS) for both non-adaptive and adaptive SPMs. This
is achieved by two algorithms: an efficient algorithm that gives a
$(1-\frac{1}{\sqrt{2\pi K}})$-approximation (and hence a PTAS for sufficiently
large $K$), and another that is a PTAS for constant $K$. The first algorithm
yields a non-adaptive SPM that yields its approximation guarantees against an
optimal adaptive SPM -- this implies that the {\em adaptivity gap} in SPMs
vanishes as $K$ becomes larger.
",sudipto guha,,2010.0,,arXiv,Chakraborty2010,True,,arXiv,Not available,"Approximation Schemes for Sequential Posted Pricing in Multi-Unit
Auctions",f5bce1bf0c8a1241e9d02e10eb616ab1,http://arxiv.org/abs/1008.1616v1
16668," We design algorithms for computing approximately revenue-maximizing {\em
sequential posted-pricing mechanisms (SPM)} in $K$-unit auctions, in a standard
Bayesian model. A seller has $K$ copies of an item to sell, and there are $n$
buyers, each interested in only one copy, who have some value for the item. The
seller must post a price for each buyer, the buyers arrive in a sequence
enforced by the seller, and a buyer buys the item if its value exceeds the
price posted to it. The seller does not know the values of the buyers, but has
Bayesian information about them. An SPM specifies the ordering of buyers and
the posted prices, and may be {\em adaptive} or {\em non-adaptive} in its
behavior.
The goal is to design SPM in polynomial time to maximize expected revenue. We
compare against the expected revenue of optimal SPM, and provide a polynomial
time approximation scheme (PTAS) for both non-adaptive and adaptive SPMs. This
is achieved by two algorithms: an efficient algorithm that gives a
$(1-\frac{1}{\sqrt{2\pi K}})$-approximation (and hence a PTAS for sufficiently
large $K$), and another that is a PTAS for constant $K$. The first algorithm
yields a non-adaptive SPM that yields its approximation guarantees against an
optimal adaptive SPM -- this implies that the {\em adaptivity gap} in SPMs
vanishes as $K$ becomes larger.
",yishay mansour,,2010.0,,arXiv,Chakraborty2010,True,,arXiv,Not available,"Approximation Schemes for Sequential Posted Pricing in Multi-Unit
Auctions",f5bce1bf0c8a1241e9d02e10eb616ab1,http://arxiv.org/abs/1008.1616v1
16669," We design algorithms for computing approximately revenue-maximizing {\em
sequential posted-pricing mechanisms (SPM)} in $K$-unit auctions, in a standard
Bayesian model. A seller has $K$ copies of an item to sell, and there are $n$
buyers, each interested in only one copy, who have some value for the item. The
seller must post a price for each buyer, the buyers arrive in a sequence
enforced by the seller, and a buyer buys the item if its value exceeds the
price posted to it. The seller does not know the values of the buyers, but have
Bayesian information about them. An SPM specifies the ordering of buyers and
the posted prices, and may be {\em adaptive} or {\em non-adaptive} in its
behavior.
The goal is to design SPM in polynomial time to maximize expected revenue. We
compare against the expected revenue of optimal SPM, and provide a polynomial
time approximation scheme (PTAS) for both non-adaptive and adaptive SPMs. This
is achieved by two algorithms: an efficient algorithm that gives a
$(1-\frac{1}{\sqrt{2\pi K}})$-approximation (and hence a PTAS for sufficiently
large $K$), and another that is a PTAS for constant $K$. The first algorithm
yields a non-adaptive SPM that yields its approximation guarantees against an
optimal adaptive SPM -- this implies that the {\em adaptivity gap} in SPMs
vanishes as $K$ becomes larger.
",s. muthukrishnan,,2010.0,,arXiv,Chakraborty2010,True,,arXiv,Not available,"Approximation Schemes for Sequential Posted Pricing in Multi-Unit
Auctions",f5bce1bf0c8a1241e9d02e10eb616ab1,http://arxiv.org/abs/1008.1616v1
16670," For Bayesian combinatorial auctions, we present a general framework for
approximately reducing the mechanism design problem for multiple buyers to
single buyer sub-problems. Our framework can be applied to any setting which
roughly satisfies the following assumptions: (i) buyers' types must be
distributed independently (not necessarily identically), (ii) objective
function must be linearly separable over the buyers, and (iii) except for the
supply constraints, there should be no other inter-buyer constraints. Our
framework is general in the sense that it makes no explicit assumption about
buyers' valuations, type distributions, and single buyer constraints (e.g.,
budget, incentive compatibility, etc).
We present two generic multi buyer mechanisms which use single buyer
mechanisms as black boxes; if an $\alpha$-approximate single buyer mechanism
can be constructed for each buyer, and if no buyer requires more than
$\frac{1}{k}$ of all units of each item, then our generic multi buyer
mechanisms are $\gamma_k\alpha$-approximation of the optimal multi buyer
mechanism, where $\gamma_k$ is a constant which is at least
$1-\frac{1}{\sqrt{k+3}}$. Observe that $\gamma_k$ is at least 1/2 (for $k=1$)
and approaches 1 as $k \to \infty$. As a byproduct of our construction, we
present a generalization of prophet inequalities. Furthermore, as applications
of our framework, we present multi buyer mechanisms with improved approximation
factor for several settings from the literature.
",saeed alaei,,2011.0,,arXiv,Alaei2011,True,,arXiv,Not available,"Bayesian Combinatorial Auctions: Expanding Single Buyer Mechanisms to
Many Buyers",7550aef5decfb04112daa0b811f87a11,http://arxiv.org/abs/1106.0961v4
16671," Most search engines sell slots to place advertisements on the search results
page through keyword auctions. Advertisers offer bids for how much they are
willing to pay when someone enters a search query, sees the search results, and
then clicks on one of their ads. Search engines typically order the
advertisements for a query by a combination of the bids and expected
clickthrough rates for each advertisement. In this paper, we extend a model of
Yahoo's and Google's advertising auctions to include an effect where repeatedly
showing less relevant ads has a persistent impact on all advertising on the
search engine, an impact we designate as the pollution effect. In Monte-Carlo
simulations using distributions fitted to Yahoo data, we show that a modest
pollution effect is sufficient to dramatically change the advertising rank
order that yields the optimal advertising revenue for a search engine. In
addition, if a pollution effect exists, it is possible to maximize revenue
while also increasing advertiser and publisher utility. Our results suggest
that search engines could benefit from making relevant advertisements less
expensive and irrelevant advertisements more costly for advertisers than is the
current practice.
",greg linden,,2011.0,,arXiv,Linden2011,True,,arXiv,Not available,"The Pollution Effect: Optimizing Keyword Auctions by Favoring Relevant
Advertising",6c19b4146facdccfc27a529889a4c02d,http://arxiv.org/abs/1109.6263v1
16672," Most search engines sell slots to place advertisements on the search results
page through keyword auctions. Advertisers offer bids for how much they are
willing to pay when someone enters a search query, sees the search results, and
then clicks on one of their ads. Search engines typically order the
advertisements for a query by a combination of the bids and expected
clickthrough rates for each advertisement. In this paper, we extend a model of
Yahoo's and Google's advertising auctions to include an effect where repeatedly
showing less relevant ads has a persistent impact on all advertising on the
search engine, an impact we designate as the pollution effect. In Monte-Carlo
simulations using distributions fitted to Yahoo data, we show that a modest
pollution effect is sufficient to dramatically change the advertising rank
order that yields the optimal advertising revenue for a search engine. In
addition, if a pollution effect exists, it is possible to maximize revenue
while also increasing advertiser and publisher utility. Our results suggest
that search engines could benefit from making relevant advertisements less
expensive and irrelevant advertisements more costly for advertisers than is the
current practice.
",christopher meek,,2011.0,,arXiv,Linden2011,True,,arXiv,Not available,"The Pollution Effect: Optimizing Keyword Auctions by Favoring Relevant
Advertising",6c19b4146facdccfc27a529889a4c02d,http://arxiv.org/abs/1109.6263v1
16673," In this paper, we study the distribution and behaviour of internal equilibria
in a $d$-player $n$-strategy random evolutionary game where the game payoff
matrix is generated from normal distributions. The study of this paper reveals
and exploits interesting connections between evolutionary game theory and
random polynomial theory. The main novelties of the paper are some qualitative
and quantitative results on the expected density, $f_{n,d}$, and the expected
number, $E(n,d)$, of (stable) internal equilibria. Firstly, we show that in
multi-player two-strategy games, they behave asymptotically as $\sqrt{d-1}$ when
$d$ is sufficiently large. Secondly, we prove that they are monotone functions
of $d$. We also make a conjecture for games with more than two strategies.
Thirdly, we provide numerical simulations for our analytical results and to
support the conjecture. As consequences of our analysis, some qualitative and
quantitative results on the distribution of zeros of a random Bernstein
polynomial are also obtained.
",the han,,2015.0,,arXiv,Duong2015,True,,arXiv,Not available,"Analysis of the expected density of internal equilibria in random
evolutionary multi-player multi-strategy games",195738357aa8fe703aeba1f2b92817d9,http://arxiv.org/abs/1505.04676v3
16674," We consider two-person zero-sum stochastic mean payoff games with perfect
information, or BWR-games, given by a digraph $G = (V, E)$, with local rewards
$r: E \to \mathbb{Z}$, and three types of positions: black $V_B$, white $V_W$, and
random $V_R$ forming a partition of $V$. It is a long-standing open question
whether a polynomial time algorithm for BWR-games exists, or not, even when
$|V_R|=0$. In fact, a pseudo-polynomial algorithm for BWR-games would already
imply their polynomial solvability. In this paper, we show that BWR-games with
a constant number of random positions can be solved in pseudo-polynomial time.
More precisely, in any BWR-game with $|V_R|=O(1)$, a saddle point in uniformly
optimal pure stationary strategies can be found in time polynomial in
$|V_W|+|V_B|$, the maximum absolute local reward, and the common denominator of
the transition probabilities.
",kazuhisa makino,,2015.0,,arXiv,Boros2015,True,,arXiv,Not available,"A Pseudo-Polynomial Algorithm for Mean Payoff Stochastic Games with
Perfect Information and Few Random Positions",a882f61b79b99f198e4d2f17882c5d33,http://arxiv.org/abs/1508.03431v2
16675," Most search engines sell slots to place advertisements on the search results
page through keyword auctions. Advertisers offer bids for how much they are
willing to pay when someone enters a search query, sees the search results, and
then clicks on one of their ads. Search engines typically order the
advertisements for a query by a combination of the bids and expected
clickthrough rates for each advertisement. In this paper, we extend a model of
Yahoo's and Google's advertising auctions to include an effect where repeatedly
showing less relevant ads has a persistent impact on all advertising on the
search engine, an impact we designate as the pollution effect. In Monte-Carlo
simulations using distributions fitted to Yahoo data, we show that a modest
pollution effect is sufficient to dramatically change the advertising rank
order that yields the optimal advertising revenue for a search engine. In
addition, if a pollution effect exists, it is possible to maximize revenue
while also increasing advertiser and publisher utility. Our results suggest
that search engines could benefit from making relevant advertisements less
expensive and irrelevant advertisements more costly for advertisers than is the
current practice.
",max chickering,,2011.0,,arXiv,Linden2011,True,,arXiv,Not available,"The Pollution Effect: Optimizing Keyword Auctions by Favoring Relevant
Advertising",6c19b4146facdccfc27a529889a4c02d,http://arxiv.org/abs/1109.6263v1
16676," We present a polynomial-time algorithm that, given samples from the unknown
valuation distribution of each bidder, learns an auction that approximately
maximizes the auctioneer's revenue in a variety of single-parameter auction
environments including matroid environments, position environments, and the
public project environment. The valuation distributions may be arbitrary
bounded distributions (in particular, they may be irregular, and may differ for
the various bidders), thus resolving a problem left open by previous papers.
The analysis uses basic tools, is performed in its entirety in value-space, and
simplifies the analysis of previously known results for special cases.
Furthermore, the analysis extends to certain single-parameter auction
environments where precise revenue maximization is known to be intractable,
such as knapsack environments.
",yannai gonczarowski,,2016.0,,arXiv,Gonczarowski2016,True,,arXiv,Not available,"Efficient Empirical Revenue Maximization in Single-Parameter Auction
Environments",c6b5b8f17e72c87f0e1c12b1c26f1f96,http://arxiv.org/abs/1610.09976v2
16677," We present a polynomial-time algorithm that, given samples from the unknown
valuation distribution of each bidder, learns an auction that approximately
maximizes the auctioneer's revenue in a variety of single-parameter auction
environments including matroid environments, position environments, and the
public project environment. The valuation distributions may be arbitrary
bounded distributions (in particular, they may be irregular, and may differ for
the various bidders), thus resolving a problem left open by previous papers.
The analysis uses basic tools, is performed in its entirety in value-space, and
simplifies the analysis of previously known results for special cases.
Furthermore, the analysis extends to certain single-parameter auction
environments where precise revenue maximization is known to be intractable,
such as knapsack environments.
",noam nisan,,2016.0,,arXiv,Gonczarowski2016,True,,arXiv,Not available,"Efficient Empirical Revenue Maximization in Single-Parameter Auction
Environments",c6b5b8f17e72c87f0e1c12b1c26f1f96,http://arxiv.org/abs/1610.09976v2
16681," This paper studies an auction design problem for a seller to sell a commodity
in a social network, where each individual (the seller or a buyer) can only
communicate with her neighbors. The challenge to the seller is to design a
mechanism to incentivize the buyers, who are aware of the auction, to further
propagate the information to their neighbors so that more buyers will
participate in the auction and hence, the seller will be able to make a higher
revenue. We propose a novel auction mechanism, called information diffusion
mechanism (IDM), which incentivizes the buyers to not only truthfully report
their valuations on the commodity to the seller, but also further propagate the
auction information to all their neighbors. In comparison, the direct extension
of the well-known Vickrey-Clarke-Groves (VCG) mechanism in social networks can
also incentivize the information diffusion, but it will decrease the seller's
revenue or even lead to a deficit sometimes. The formalization of the problem
has not yet been addressed in the mechanism design literature, and our
solution is very significant in the presence of large-scale online social
networks.
",bin li,,2017.0,,arXiv,Li2017,True,,arXiv,Not available,Mechanism Design in Social Networks,acd01ad9417afe55302a65cc7981a915,http://arxiv.org/abs/1702.03627v1
16682," This paper studies an auction design problem for a seller to sell a commodity
in a social network, where each individual (the seller or a buyer) can only
communicate with her neighbors. The challenge to the seller is to design a
mechanism to incentivize the buyers, who are aware of the auction, to further
propagate the information to their neighbors so that more buyers will
participate in the auction and hence, the seller will be able to make a higher
revenue. We propose a novel auction mechanism, called information diffusion
mechanism (IDM), which incentivizes the buyers to not only truthfully report
their valuations on the commodity to the seller, but also further propagate the
auction information to all their neighbors. In comparison, the direct extension
of the well-known Vickrey-Clarke-Groves (VCG) mechanism in social networks can
also incentivize the information diffusion, but it will decrease the seller's
revenue or even lead to a deficit sometimes. The formalization of the problem
has not yet been addressed in the mechanism design literature, and our
solution is very significant in the presence of large-scale online social
networks.
",dong hao,,2017.0,,arXiv,Li2017,True,,arXiv,Not available,Mechanism Design in Social Networks,acd01ad9417afe55302a65cc7981a915,http://arxiv.org/abs/1702.03627v1
16683," This paper studies an auction design problem for a seller to sell a commodity
in a social network, where each individual (the seller or a buyer) can only
communicate with her neighbors. The challenge to the seller is to design a
mechanism to incentivize the buyers, who are aware of the auction, to further
propagate the information to their neighbors so that more buyers will
participate in the auction and hence, the seller will be able to make a higher
revenue. We propose a novel auction mechanism, called information diffusion
mechanism (IDM), which incentivizes the buyers to not only truthfully report
their valuations on the commodity to the seller, but also further propagate the
auction information to all their neighbors. In comparison, the direct extension
of the well-known Vickrey-Clarke-Groves (VCG) mechanism in social networks can
also incentivize the information diffusion, but it will decrease the seller's
revenue or even lead to a deficit sometimes. The formalization of the problem
has not yet been addressed in the mechanism design literature, and our
solution is very significant in the presence of large-scale online social
networks.
",dengji zhao,,2017.0,,arXiv,Li2017,True,,arXiv,Not available,Mechanism Design in Social Networks,acd01ad9417afe55302a65cc7981a915,http://arxiv.org/abs/1702.03627v1
16684," This paper studies an auction design problem for a seller to sell a commodity
in a social network, where each individual (the seller or a buyer) can only
communicate with her neighbors. The challenge to the seller is to design a
mechanism to incentivize the buyers, who are aware of the auction, to further
propagate the information to their neighbors so that more buyers will
participate in the auction and hence, the seller will be able to make a higher
revenue. We propose a novel auction mechanism, called information diffusion
mechanism (IDM), which incentivizes the buyers to not only truthfully report
their valuations on the commodity to the seller, but also further propagate the
auction information to all their neighbors. In comparison, the direct extension
of the well-known Vickrey-Clarke-Groves (VCG) mechanism in social networks can
also incentivize the information diffusion, but it will decrease the seller's
revenue or even lead to a deficit sometimes. The formalization of the problem
has not yet been addressed in the mechanism design literature, and our
solution is very significant in the presence of large-scale online social
networks.
",tao zhou,,2017.0,,arXiv,Li2017,True,,arXiv,Not available,Mechanism Design in Social Networks,acd01ad9417afe55302a65cc7981a915,http://arxiv.org/abs/1702.03627v1
16685," We address the question of whether price of stability results (existence of
equilibria with low social cost) are robust to incomplete information. We show
that this is the case in potential games, if the underlying algorithmic social
cost minimization problem admits a constant factor approximation algorithm via
strict cost-sharing schemes. Roughly, if the existence of an
$\alpha$-approximate equilibrium in the complete information setting was proven
via the potential method, then there also exists an $\alpha\cdot
\beta$-approximate Bayes-Nash equilibrium in the incomplete information
setting, where $\beta$ is the approximation factor of the strict-cost sharing
scheme algorithm. We apply our approach to Bayesian versions of the archetypal,
in the price of stability analysis, network design models and show the
existence of $O(\log(n))$-approximate Bayes-Nash equilibria in several games
whose complete information counterparts have been well-studied, such as
undirected network design games, multi-cast games and covering games.
",vasilis syrgkanis,,2015.0,,arXiv,Syrgkanis2015,True,,arXiv,Not available,Price of Stability in Games of Incomplete Information,076f8c69ad1793a554aa6886579bd5a6,http://arxiv.org/abs/1503.03739v1
16686," This paper investigates reverse auctions that involve continuous values of
different types of goods, general nonconvex constraints, and second stage
costs. Our analysis seeks to design the payment rules and conditions under
which coalitions of participants cannot influence the auction outcome in order
to obtain higher collective utility. Under incentive-compatible bidding in the
Vickrey-Clarke-Groves mechanism, coalition-proof outcomes are achieved if the
submitted bids are convex and the constraint sets are of polymatroid-type.
Unfortunately, these conditions do not capture the complexity of the general
class of reverse auctions under consideration. By relaxing the property of
incentive-compatibility, we investigate further payment rules that are
coalition-proof, but without any extra conditions. Among coalition-proof
mechanisms, we select the mechanism that minimizes the participants' abilities
to benefit from strategic manipulations, in order to incentivize truthful
bidding from the participants. Since calculating the payments directly for
these mechanisms is computationally difficult for auctions involving many
participants, we present two computationally efficient methods. Our results are
verified with several case studies based on electricity market data.
",orcun karaca,,2017.0,,arXiv,Karaca2017,True,,arXiv,Not available,Designing Coalition-Proof Reverse Auctions over Continuous Goods,1dd6f12b1b405f84991c2ffae2f33171,http://arxiv.org/abs/1711.06774v3
16687," This paper investigates reverse auctions that involve continuous values of
different types of goods, general nonconvex constraints, and second stage
costs. Our analysis seeks to design the payment rules and conditions under
which coalitions of participants cannot influence the auction outcome in order
to obtain higher collective utility. Under incentive-compatible bidding in the
Vickrey-Clarke-Groves mechanism, coalition-proof outcomes are achieved if the
submitted bids are convex and the constraint sets are of polymatroid-type.
Unfortunately, these conditions do not capture the complexity of the general
class of reverse auctions under consideration. By relaxing the property of
incentive-compatibility, we investigate further payment rules that are
coalition-proof, but without any extra conditions. Among coalition-proof
mechanisms, we select the mechanism that minimizes the participants' abilities
to benefit from strategic manipulations, in order to incentivize truthful
bidding from the participants. Since calculating the payments directly for
these mechanisms is computationally difficult for auctions involving many
participants, we present two computationally efficient methods. Our results are
verified with several case studies based on electricity market data.
",pier sessa,,2017.0,,arXiv,Karaca2017,True,,arXiv,Not available,Designing Coalition-Proof Reverse Auctions over Continuous Goods,1dd6f12b1b405f84991c2ffae2f33171,http://arxiv.org/abs/1711.06774v3
16688," This paper investigates reverse auctions that involve continuous values of
different types of goods, general nonconvex constraints, and second stage
costs. Our analysis seeks to design the payment rules and conditions under
which coalitions of participants cannot influence the auction outcome in order
to obtain higher collective utility. Under incentive-compatible bidding in the
Vickrey-Clarke-Groves mechanism, coalition-proof outcomes are achieved if the
submitted bids are convex and the constraint sets are of polymatroid-type.
Unfortunately, these conditions do not capture the complexity of the general
class of reverse auctions under consideration. By relaxing the property of
incentive-compatibility, we investigate further payment rules that are
coalition-proof, but without any extra conditions. Among coalition-proof
mechanisms, we select the mechanism that minimizes the participants' abilities
to benefit from strategic manipulations, in order to incentivize truthful
bidding from the participants. Since calculating the payments directly for
these mechanisms is computationally difficult for auctions involving many
participants, we present two computationally efficient methods. Our results are
verified with several case studies based on electricity market data.
",neil walton,,2017.0,,arXiv,Karaca2017,True,,arXiv,Not available,Designing Coalition-Proof Reverse Auctions over Continuous Goods,1dd6f12b1b405f84991c2ffae2f33171,http://arxiv.org/abs/1711.06774v3
16689," This paper investigates reverse auctions that involve continuous values of
different types of goods, general nonconvex constraints, and second stage
costs. Our analysis seeks to design the payment rules and conditions under
which coalitions of participants cannot influence the auction outcome in order
to obtain higher collective utility. Under incentive-compatible bidding in the
Vickrey-Clarke-Groves mechanism, coalition-proof outcomes are achieved if the
submitted bids are convex and the constraint sets are of polymatroid-type.
Unfortunately, these conditions do not capture the complexity of the general
class of reverse auctions under consideration. By relaxing the property of
incentive-compatibility, we investigate further payment rules that are
coalition-proof, but without any extra conditions. Among coalition-proof
mechanisms, we select the mechanism that minimizes the participants' abilities
to benefit from strategic manipulations, in order to incentivize truthful
bidding from the participants. Since calculating the payments directly for
these mechanisms is computationally difficult for auctions involving many
participants, we present two computationally efficient methods. Our results are
verified with several case studies based on electricity market data.
",maryam kamgarpour,,2017.0,,arXiv,Karaca2017,True,,arXiv,Not available,Designing Coalition-Proof Reverse Auctions over Continuous Goods,1dd6f12b1b405f84991c2ffae2f33171,http://arxiv.org/abs/1711.06774v3
16690," We describe an algorithm for computing best response strategies in a class of
two-player infinite games of incomplete information, defined by payoffs
piecewise linear in agents' types and actions, conditional on linear
comparisons of agents' actions. We show that this class includes many
well-known games including a variety of auctions and a novel allocation game.
In some cases, the best-response algorithm can be iterated to compute
Bayes-Nash equilibria. We demonstrate the efficiency of our approach on
existing and new games.
",daniel reeves,,2012.0,,arXiv,Reeves2012,True,,arXiv,Not available,"Computing Best-Response Strategies in Infinite Games of Incomplete
Information",067fb14ef316953fb2525cc774bd6388,http://arxiv.org/abs/1207.4171v1
16691," We describe an algorithm for computing best response strategies in a class of
two-player infinite games of incomplete information, defined by payoffs
piecewise linear in agents' types and actions, conditional on linear
comparisons of agents' actions. We show that this class includes many
well-known games including a variety of auctions and a novel allocation game.
In some cases, the best-response algorithm can be iterated to compute
Bayes-Nash equilibria. We demonstrate the efficiency of our approach on
existing and new games.
",michael wellman,,2012.0,,arXiv,Reeves2012,True,,arXiv,Not available,"Computing Best-Response Strategies in Infinite Games of Incomplete
Information",067fb14ef316953fb2525cc774bd6388,http://arxiv.org/abs/1207.4171v1
16692," The design of the best economic mechanism for Sponsored Search Auctions
(SSAs) is a central task in computational mechanism design/game theory. Two
open questions concern the adoption of user models more accurate than the one
currently used and the choice between the Generalized Second Price auction (GSP)
and the Vickrey-Clarke-Groves mechanism (VCG). In this paper, we provide some
contributions to answer these questions. We study Price of Anarchy (PoA) and
Price of Stability (PoS) over social welfare and auctioneer's revenue of GSP
w.r.t. the VCG when the users follow the famous cascade model. Furthermore, we
provide exact, randomized, and approximate algorithms, showing that in
real-world settings (Yahoo! Webscope A3 dataset, 10 available slots) optimal
allocations can be found in less than 1s with up to 1000 ads, and can be
approximated in less than 20ms even with more than 1000 ads with an average
accuracy greater than 99%.
",gabriele farina,,2015.0,,arXiv,Farina2015,True,,arXiv,Not available,Ad auctions and cascade model: GSP inefficiency and algorithms,64ab42f22153cb9c458d6fd93d4cc0a7,http://arxiv.org/abs/1511.07397v1
16693," The design of the best economic mechanism for Sponsored Search Auctions
(SSAs) is a central task in computational mechanism design/game theory. Two
open questions concern the adoption of user models more accurate than the one
currently used and the choice between the Generalized Second Price auction (GSP)
and the Vickrey-Clarke-Groves mechanism (VCG). In this paper, we provide some
contributions to answer these questions. We study Price of Anarchy (PoA) and
Price of Stability (PoS) over social welfare and auctioneer's revenue of GSP
w.r.t. the VCG when the users follow the famous cascade model. Furthermore, we
provide exact, randomized, and approximate algorithms, showing that in
real-world settings (Yahoo! Webscope A3 dataset, 10 available slots) optimal
allocations can be found in less than 1s with up to 1000 ads, and can be
approximated in less than 20ms even with more than 1000 ads with an average
accuracy greater than 99%.
",nicola gatti,,2015.0,,arXiv,Farina2015,True,,arXiv,Not available,Ad auctions and cascade model: GSP inefficiency and algorithms,64ab42f22153cb9c458d6fd93d4cc0a7,http://arxiv.org/abs/1511.07397v1
16694," We introduce a measure for the level of stability against coalitional
deviations, called \emph{stability scores}, which generalizes widely used
notions of stability in non-cooperative games. We use the proposed measure to
compare various Nash equilibria in congestion games, and to quantify the effect
of game parameters on coalitional stability. For our main results, we apply
stability scores to analyze and compare the Generalized Second Price (GSP) and
Vickrey-Clarke-Groves (VCG) ad auctions. We show that while a central result of
the ad auctions literature is that the GSP and VCG auctions implement the same
outcome in one of the equilibria of GSP, the GSP outcome is far more stable.
Finally, a modified version of VCG is introduced, which is group
strategy-proof, and thereby achieves the highest possible stability score.
",michal feldman,,2011.0,,arXiv,Feldman2011,True,,arXiv,Not available,Stability Scores: Measuring Coalitional Stability,590f42d2a826c022640221821f5e351f,http://arxiv.org/abs/1105.5983v2
16695," We introduce a measure for the level of stability against coalitional
deviations, called \emph{stability scores}, which generalizes widely used
notions of stability in non-cooperative games. We use the proposed measure to
compare various Nash equilibria in congestion games, and to quantify the effect
of game parameters on coalitional stability. For our main results, we apply
stability scores to analyze and compare the Generalized Second Price (GSP) and
Vickrey-Clarke-Groves (VCG) ad auctions. We show that while a central result of
the ad auctions literature is that the GSP and VCG auctions implement the same
outcome in one of the equilibria of GSP, the GSP outcome is far more stable.
Finally, a modified version of VCG is introduced, which is group
strategy-proof, and thereby achieves the highest possible stability score.
",reshef meir,,2011.0,,arXiv,Feldman2011,True,,arXiv,Not available,Stability Scores: Measuring Coalitional Stability,590f42d2a826c022640221821f5e351f,http://arxiv.org/abs/1105.5983v2
16696," The achievement of common goals through voluntary efforts of members of a
group can be challenged by the high temptation of individual defection. Here,
two-person one-goal assurance games are generalized to N-person, M-goal
achievement games in which group members can have different motivations with
respect to the achievement of the different goals. The theoretical performance
of groups faced with the challenge of multiple simultaneous goals is analyzed
mathematically and computationally. For two-goal scenarios one finds that
""polarized"" as well as ""biased"" groups perform well in the presence of
defectors. A special case, called individual purpose games (N-person, N-goal
achievement games where there is a one-to-one mapping between actors and goals
for which they have a high achievement motivation) is analyzed in more detail
in form of the ""importance of being different theorem"". It is shown that in
some individual purpose games, groups of size N can successfully accomplish N
goals, such that each group member is highly motivated towards the achievement
of one unique goal. The game-theoretic results suggest that multiple goals as
well as differences in motivations can, in some cases, correspond to highly
effective groups.
",eckart bindewald,,2015.0,,arXiv,Bindewald2015,True,,arXiv,Not available,Achieving Multiple Goals via Voluntary Efforts and Motivation Asymmetry,e841d3e55c4cee5c8327c6b49a94e0db,http://arxiv.org/abs/1503.05908v1
16697," We introduce a measure for the level of stability against coalitional
deviations, called \emph{stability scores}, which generalizes widely used
notions of stability in non-cooperative games. We use the proposed measure to
compare various Nash equilibria in congestion games, and to quantify the effect
of game parameters on coalitional stability. For our main results, we apply
stability scores to analyze and compare the Generalized Second Price (GSP) and
Vickrey-Clarke-Groves (VCG) ad auctions. We show that while a central result of
the ad auctions literature is that the GSP and VCG auctions implement the same
outcome in one of the equilibria of GSP, the GSP outcome is far more stable.
Finally, a modified version of VCG is introduced, which is group
strategy-proof, and thereby achieves the highest possible stability score.
",moshe tennenholtz,,2011.0,,arXiv,Feldman2011,True,,arXiv,Not available,Stability Scores: Measuring Coalitional Stability,590f42d2a826c022640221821f5e351f,http://arxiv.org/abs/1105.5983v2
16698," Inspired by Internet ad auction applications, we study the problem of
allocating a single item via an auction when bidders place very different
values on the item. We formulate this as the problem of prior-free auction and
focus on designing a simple mechanism that always allocates the item. Rather
than designing sophisticated pricing methods as in the prior literature, we design
better allocation methods. In particular, we propose quasi-proportional
allocation methods in which the probability that an item is allocated to a
bidder depends (quasi-proportionally) on the bids.
We prove that the corresponding games for both all-pay and winners-pay
quasi-proportional mechanisms admit a pure Nash equilibrium and that this
equilibrium is unique. We also give an algorithm to compute this equilibrium in polynomial
time. Further, we show that the revenue of the auctioneer is promisingly high
compared to the ultimate, i.e., the highest value of any of the bidders, and
show bounds on the revenue of equilibria both analytically, as well as using
experiments for specific quasi-proportional functions. This is the first known
revenue analysis for these natural mechanisms (including the special case of
proportional mechanism which is common in network resource allocation
problems).
",vahab mirrokni,,2009.0,,arXiv,Mirrokni2009,True,,arXiv,Not available,Quasi-Proportional Mechanisms: Prior-free Revenue Maximization,4377d5a52b2abdc1056975e6046afe2c,http://arxiv.org/abs/0909.5365v1
16699," Inspired by Internet ad auction applications, we study the problem of
allocating a single item via an auction when bidders place very different
values on the item. We formulate this as the problem of prior-free auction and
focus on designing a simple mechanism that always allocates the item. Rather
than designing sophisticated pricing methods as in the prior literature, we design
better allocation methods. In particular, we propose quasi-proportional
allocation methods in which the probability that an item is allocated to a
bidder depends (quasi-proportionally) on the bids.
We prove that the corresponding games for both all-pay and winners-pay
quasi-proportional mechanisms admit a pure Nash equilibrium and that this
equilibrium is unique. We also give an algorithm to compute this equilibrium in polynomial
time. Further, we show that the revenue of the auctioneer is promisingly high
compared to the ultimate, i.e., the highest value of any of the bidders, and
show bounds on the revenue of equilibria both analytically and using
experiments for specific quasi-proportional functions. This is the first known
revenue analysis for these natural mechanisms (including the special case of
the proportional mechanism, which is common in network resource allocation
problems).
",s. muthukrishnan,,2009.0,,arXiv,Mirrokni2009,True,,arXiv,Not available,Quasi-Proportional Mechanisms: Prior-free Revenue Maximization,4377d5a52b2abdc1056975e6046afe2c,http://arxiv.org/abs/0909.5365v1
16700," Inspired by Internet ad auction applications, we study the problem of
allocating a single item via an auction when bidders place very different
values on the item. We formulate this as the problem of prior-free auction and
focus on designing a simple mechanism that always allocates the item. Rather
than designing sophisticated pricing methods as in the prior literature, we design
better allocation methods. In particular, we propose quasi-proportional
allocation methods in which the probability that an item is allocated to a
bidder depends (quasi-proportionally) on the bids.
We prove that corresponding games for both all-pay and winners-pay
quasi-proportional mechanisms admit a pure Nash equilibrium, and this equilibrium
is unique. We also give an algorithm to compute this equilibrium in polynomial
time. Further, we show that the revenue of the auctioneer is promisingly high
compared to the ultimate, i.e., the highest value of any of the bidders, and
show bounds on the revenue of equilibria both analytically and using
experiments for specific quasi-proportional functions. This is the first known
revenue analysis for these natural mechanisms (including the special case of
the proportional mechanism, which is common in network resource allocation
problems).
",uri nadav,,2009.0,,arXiv,Mirrokni2009,True,,arXiv,Not available,Quasi-Proportional Mechanisms: Prior-free Revenue Maximization,4377d5a52b2abdc1056975e6046afe2c,http://arxiv.org/abs/0909.5365v1
16701," In this paper, we analyze Nash equilibria between electricity producers
selling their production on an electricity market and buying CO2 emission
allowances on an auction carbon market. The producers' strategies integrate the
coupling of the two markets via the cost functions of the electricity
production. We set out a clear Nash equilibrium on the power market that can be
used to compute equilibrium prices on both markets as well as the related
electricity produced and CO2 emissions released.
",mireille bossy,,2014.0,,arXiv,Bossy2014,True,,arXiv,Not available,"Game theory analysis for carbon auction market through electricity
market coupling",8d4970acb4f369b65d06c76f0eeb47d2,http://arxiv.org/abs/1408.6122v3
16702," In this paper, we analyze Nash equilibria between electricity producers
selling their production on an electricity market and buying CO2 emission
allowances on an auction carbon market. The producers' strategies integrate the
coupling of the two markets via the cost functions of the electricity
production. We set out a clear Nash equilibrium on the power market that can be
used to compute equilibrium prices on both markets as well as the related
electricity produced and CO2 emissions released.
",nadia maizi,,2014.0,,arXiv,Bossy2014,True,,arXiv,Not available,"Game theory analysis for carbon auction market through electricity
market coupling",8d4970acb4f369b65d06c76f0eeb47d2,http://arxiv.org/abs/1408.6122v3
16703," In this paper, we analyze Nash equilibria between electricity producers
selling their production on an electricity market and buying CO2 emission
allowances on an auction carbon market. The producers' strategies integrate the
coupling of the two markets via the cost functions of the electricity
production. We set out a clear Nash equilibrium on the power market that can be
used to compute equilibrium prices on both markets as well as the related
electricity produced and CO2 emissions released.
",odile pourtallier,,2014.0,,arXiv,Bossy2014,True,,arXiv,Not available,"Game theory analysis for carbon auction market through electricity
market coupling",8d4970acb4f369b65d06c76f0eeb47d2,http://arxiv.org/abs/1408.6122v3
16704," Proper incentive mechanisms are critical for mobile crowdsensing systems to
motivate people to actively and persistently participate. This article provides
an exposition of design principles of six incentive mechanisms, drawing special
attention to the sustainability issue. We cover three primary classes of
incentive mechanisms: auctions, lotteries, and trust and reputation systems, as
well as three other frameworks of promising potential: bargaining games,
contract theory, and market-driven mechanisms.
",tony luo,,2017.0,,arXiv,Luo2017,True,,arXiv,Not available,"Sustainable Incentives for Mobile Crowdsensing: Auctions, Lotteries, and
Trust and Reputation Systems",6fe660b8000686cfb8441c0530941686,http://arxiv.org/abs/1701.00248v2
16705," Proper incentive mechanisms are critical for mobile crowdsensing systems to
motivate people to actively and persistently participate. This article provides
an exposition of design principles of six incentive mechanisms, drawing special
attention to the sustainability issue. We cover three primary classes of
incentive mechanisms: auctions, lotteries, and trust and reputation systems, as
well as three other frameworks of promising potential: bargaining games,
contract theory, and market-driven mechanisms.
",salil kanhere,,2017.0,,arXiv,Luo2017,True,,arXiv,Not available,"Sustainable Incentives for Mobile Crowdsensing: Auctions, Lotteries, and
Trust and Reputation Systems",6fe660b8000686cfb8441c0530941686,http://arxiv.org/abs/1701.00248v2
16706," Proper incentive mechanisms are critical for mobile crowdsensing systems to
motivate people to actively and persistently participate. This article provides
an exposition of design principles of six incentive mechanisms, drawing special
attention to the sustainability issue. We cover three primary classes of
incentive mechanisms: auctions, lotteries, and trust and reputation systems, as
well as three other frameworks of promising potential: bargaining games,
contract theory, and market-driven mechanisms.
",jianwei huang,,2017.0,,arXiv,Luo2017,True,,arXiv,Not available,"Sustainable Incentives for Mobile Crowdsensing: Auctions, Lotteries, and
Trust and Reputation Systems",6fe660b8000686cfb8441c0530941686,http://arxiv.org/abs/1701.00248v2
16707," In a pursuit-evasion game, a team of pursuers attempt to capture an evader.
The players alternate turns, move with equal speed, and have full information
about the state of the game. We consider the most restrictive capture condition:
a pursuer must become colocated with the evader to win the game. We prove two
general results about pursuit-evasion games in topological spaces. First, we
show that one pursuer has a winning strategy in any CAT(0) space under this
restrictive capture criterion. This complements a result of Alexander, Bishop
and Ghrist, who provide a winning strategy for a game with positive capture
radius. Second, we consider the game played in a compact domain in Euclidean
two-space with piecewise analytic boundary and arbitrary Euler characteristic.
We show that three pursuers always have a winning strategy by extending recent
work of Bhadauria, Klein, Isler and Suri from polygonal environments to our
more general setting.
",andrew beveridge,,2015.0,,arXiv,Beveridge2015,True,,arXiv,Not available,"Two-Dimensional Pursuit-Evasion in a Compact Domain with Piecewise
Analytic Boundary",3efcc3db9f88aa581cf77f4cc92cd232,http://arxiv.org/abs/1505.00297v1
16708," Proper incentive mechanisms are critical for mobile crowdsensing systems to
motivate people to actively and persistently participate. This article provides
an exposition of design principles of six incentive mechanisms, drawing special
attention to the sustainability issue. We cover three primary classes of
incentive mechanisms: auctions, lotteries, and trust and reputation systems, as
well as three other frameworks of promising potential: bargaining games,
contract theory, and market-driven mechanisms.
",sajal das,,2017.0,,arXiv,Luo2017,True,,arXiv,Not available,"Sustainable Incentives for Mobile Crowdsensing: Auctions, Lotteries, and
Trust and Reputation Systems",6fe660b8000686cfb8441c0530941686,http://arxiv.org/abs/1701.00248v2
16709," Proper incentive mechanisms are critical for mobile crowdsensing systems to
motivate people to actively and persistently participate. This article provides
an exposition of design principles of six incentive mechanisms, drawing special
attention to the sustainability issue. We cover three primary classes of
incentive mechanisms: auctions, lotteries, and trust and reputation systems, as
well as three other frameworks of promising potential: bargaining games,
contract theory, and market-driven mechanisms.
",fan wu,,2017.0,,arXiv,Luo2017,True,,arXiv,Not available,"Sustainable Incentives for Mobile Crowdsensing: Auctions, Lotteries, and
Trust and Reputation Systems",6fe660b8000686cfb8441c0530941686,http://arxiv.org/abs/1701.00248v2
16710," MiBoard (Multiplayer Interactive Board Game) is an online, turn-based board
game, which is a supplement to the iSTART (Interactive Strategy Training for
Active Reading and Thinking) application. MiBoard is developed to test the
hypothesis that integrating game characteristics (point rewards, game-like
interaction, and peer feedback) into the iSTART trainer will significantly
improve its effectiveness on students' learning. It was shown by M. Rowe that a
physical board game did in fact enhance students' performance. MiBoard is a
computer-based version of Rowe's board game that eliminates constraints on
locality while retaining the crucial practice components that were the game's
objective. MiBoard gives incentives for participation and provides a more
enjoyable and social practice environment compared to the online individual
practice component of the original trainer.
",justin brunelle,,2010.0,,arXiv,Brunelle2010,True,,arXiv,Not available,MiBoard: iSTART Metacognitive Training through Gaming,16561da228a951a4517293c90958e798,http://arxiv.org/abs/1009.2205v1
16711," MiBoard (Multiplayer Interactive Board Game) is an online, turn-based board
game, which is a supplement to the iSTART (Interactive Strategy Training for
Active Reading and Thinking) application. MiBoard is developed to test the
hypothesis that integrating game characteristics (point rewards, game-like
interaction, and peer feedback) into the iSTART trainer will significantly
improve its effectiveness on students' learning. It was shown by M. Rowe that a
physical board game did in fact enhance students' performance. MiBoard is a
computer-based version of Rowe's board game that eliminates constraints on
locality while retaining the crucial practice components that were the game's
objective. MiBoard gives incentives for participation and provides a more
enjoyable and social practice environment compared to the online individual
practice component of the original trainer.
",kyle dempsey,,2010.0,,arXiv,Brunelle2010,True,,arXiv,Not available,MiBoard: iSTART Metacognitive Training through Gaming,16561da228a951a4517293c90958e798,http://arxiv.org/abs/1009.2205v1
16712," MiBoard (Multiplayer Interactive Board Game) is an online, turn-based board
game, which is a supplement to the iSTART (Interactive Strategy Training for
Active Reading and Thinking) application. MiBoard is developed to test the
hypothesis that integrating game characteristics (point rewards, game-like
interaction, and peer feedback) into the iSTART trainer will significantly
improve its effectiveness on students' learning. It was shown by M. Rowe that a
physical board game did in fact enhance students' performance. MiBoard is a
computer-based version of Rowe's board game that eliminates constraints on
locality while retaining the crucial practice components that were the game's
objective. MiBoard gives incentives for participation and provides a more
enjoyable and social practice environment compared to the online individual
practice component of the original trainer.
",g. jackson,,2010.0,,arXiv,Brunelle2010,True,,arXiv,Not available,MiBoard: iSTART Metacognitive Training through Gaming,16561da228a951a4517293c90958e798,http://arxiv.org/abs/1009.2205v1
16713," MiBoard (Multiplayer Interactive Board Game) is an online, turn-based board
game, which is a supplement to the iSTART (Interactive Strategy Training for
Active Reading and Thinking) application. MiBoard is developed to test the
hypothesis that integrating game characteristics (point rewards, game-like
interaction, and peer feedback) into the iSTART trainer will significantly
improve its effectiveness on students' learning. It was shown by M. Rowe that a
physical board game did in fact enhance students' performance. MiBoard is a
computer-based version of Rowe's board game that eliminates constraints on
locality while retaining the crucial practice components that were the game's
objective. MiBoard gives incentives for participation and provides a more
enjoyable and social practice environment compared to the online individual
practice component of the original trainer.
",chutima boonthum,,2010.0,,arXiv,Brunelle2010,True,,arXiv,Not available,MiBoard: iSTART Metacognitive Training through Gaming,16561da228a951a4517293c90958e798,http://arxiv.org/abs/1009.2205v1
16714," MiBoard (Multiplayer Interactive Board Game) is an online, turn-based board
game, which is a supplement to the iSTART (Interactive Strategy Training for
Active Reading and Thinking) application. MiBoard is developed to test the
hypothesis that integrating game characteristics (point rewards, game-like
interaction, and peer feedback) into the iSTART trainer will significantly
improve its effectiveness on students' learning. It was shown by M. Rowe that a
physical board game did in fact enhance students' performance. MiBoard is a
computer-based version of Rowe's board game that eliminates constraints on
locality while retaining the crucial practice components that were the game's
objective. MiBoard gives incentives for participation and provides a more
enjoyable and social practice environment compared to the online individual
practice component of the original trainer.
",irwin levinstein,,2010.0,,arXiv,Brunelle2010,True,,arXiv,Not available,MiBoard: iSTART Metacognitive Training through Gaming,16561da228a951a4517293c90958e798,http://arxiv.org/abs/1009.2205v1
16715," MiBoard (Multiplayer Interactive Board Game) is an online, turn-based board
game, which is a supplement to the iSTART (Interactive Strategy Training for
Active Reading and Thinking) application. MiBoard is developed to test the
hypothesis that integrating game characteristics (point rewards, game-like
interaction, and peer feedback) into the iSTART trainer will significantly
improve its effectiveness on students' learning. It was shown by M. Rowe that a
physical board game did in fact enhance students' performance. MiBoard is a
computer-based version of Rowe's board game that eliminates constraints on
locality while retaining the crucial practice components that were the game's
objective. MiBoard gives incentives for participation and provides a more
enjoyable and social practice environment compared to the online individual
practice component of the original trainer.
",danielle mcnamara,,2010.0,,arXiv,Brunelle2010,True,,arXiv,Not available,MiBoard: iSTART Metacognitive Training through Gaming,16561da228a951a4517293c90958e798,http://arxiv.org/abs/1009.2205v1
16716," Increasing user engagement is a constant challenge for Intelligent Tutoring
Systems researchers. A current trend in the ITS field is to increase engagement
of proven learning systems by integrating them within games, or adding in
game-like components. Incorporating proven learning methods within a game-based
environment is expected to add to the overall experience without detracting
from the original goals; however, the current study demonstrates two important
issues with regard to ITS design. First, effective designs from the physical
world do not always translate into the digital world. Second, games do not
necessarily improve engagement, and in some cases, they may have the opposite
effect. The current study discusses the development and a brief assessment of
MiBoard, a multiplayer collaborative online board game designed to closely
emulate a previously developed physical board game, iSTART: The Board Game.
",kyle dempsey,,2010.0,,"FLAIRS-23, May 22-23 2010",Dempsey2010,True,,arXiv,Not available,MiBoard: A Digital Game from a Physical World,47e9f9f2c54aba479e8a7360ece007a3,http://arxiv.org/abs/1009.2207v1
16717," Increasing user engagement is a constant challenge for Intelligent Tutoring
Systems researchers. A current trend in the ITS field is to increase engagement
of proven learning systems by integrating them within games, or adding in
game-like components. Incorporating proven learning methods within a game-based
environment is expected to add to the overall experience without detracting
from the original goals; however, the current study demonstrates two important
issues with regard to ITS design. First, effective designs from the physical
world do not always translate into the digital world. Second, games do not
necessarily improve engagement, and in some cases, they may have the opposite
effect. The current study discusses the development and a brief assessment of
MiBoard, a multiplayer collaborative online board game designed to closely
emulate a previously developed physical board game, iSTART: The Board Game.
",g. jackson,,2010.0,,"FLAIRS-23, May 22-23 2010",Dempsey2010,True,,arXiv,Not available,MiBoard: A Digital Game from a Physical World,47e9f9f2c54aba479e8a7360ece007a3,http://arxiv.org/abs/1009.2207v1
16718," In a pursuit-evasion game, a team of pursuers attempt to capture an evader.
The players alternate turns, move with equal speed, and have full information
about the state of the game. We consider the most restrictive capture condition:
a pursuer must become colocated with the evader to win the game. We prove two
general results about pursuit-evasion games in topological spaces. First, we
show that one pursuer has a winning strategy in any CAT(0) space under this
restrictive capture criterion. This complements a result of Alexander, Bishop
and Ghrist, who provide a winning strategy for a game with positive capture
radius. Second, we consider the game played in a compact domain in Euclidean
two-space with piecewise analytic boundary and arbitrary Euler characteristic.
We show that three pursuers always have a winning strategy by extending recent
work of Bhadauria, Klein, Isler and Suri from polygonal environments to our
more general setting.
",yiqing cai,,2015.0,,arXiv,Beveridge2015,True,,arXiv,Not available,"Two-Dimensional Pursuit-Evasion in a Compact Domain with Piecewise
Analytic Boundary",3efcc3db9f88aa581cf77f4cc92cd232,http://arxiv.org/abs/1505.00297v1
16719," Increasing user engagement is a constant challenge for Intelligent Tutoring
Systems researchers. A current trend in the ITS field is to increase engagement
of proven learning systems by integrating them within games, or adding in
game-like components. Incorporating proven learning methods within a game-based
environment is expected to add to the overall experience without detracting
from the original goals; however, the current study demonstrates two important
issues with regard to ITS design. First, effective designs from the physical
world do not always translate into the digital world. Second, games do not
necessarily improve engagement, and in some cases, they may have the opposite
effect. The current study discusses the development and a brief assessment of
MiBoard, a multiplayer collaborative online board game designed to closely
emulate a previously developed physical board game, iSTART: The Board Game.
",justin brunelle,,2010.0,,"FLAIRS-23, May 22-23 2010",Dempsey2010,True,,arXiv,Not available,MiBoard: A Digital Game from a Physical World,47e9f9f2c54aba479e8a7360ece007a3,http://arxiv.org/abs/1009.2207v1
16720," Increasing user engagement is a constant challenge for Intelligent Tutoring
Systems researchers. A current trend in the ITS field is to increase engagement
of proven learning systems by integrating them within games, or adding in
game-like components. Incorporating proven learning methods within a game-based
environment is expected to add to the overall experience without detracting
from the original goals; however, the current study demonstrates two important
issues with regard to ITS design. First, effective designs from the physical
world do not always translate into the digital world. Second, games do not
necessarily improve engagement, and in some cases, they may have the opposite
effect. The current study discusses the development and a brief assessment of
MiBoard, a multiplayer collaborative online board game designed to closely
emulate a previously developed physical board game, iSTART: The Board Game.
",michael rowe,,2010.0,,"FLAIRS-23, May 22-23 2010",Dempsey2010,True,,arXiv,Not available,MiBoard: A Digital Game from a Physical World,47e9f9f2c54aba479e8a7360ece007a3,http://arxiv.org/abs/1009.2207v1
16721," Increasing user engagement is a constant challenge for Intelligent Tutoring
Systems researchers. A current trend in the ITS field is to increase engagement
of proven learning systems by integrating them within games, or adding in
game-like components. Incorporating proven learning methods within a game-based
environment is expected to add to the overall experience without detracting
from the original goals; however, the current study demonstrates two important
issues with regard to ITS design. First, effective designs from the physical
world do not always translate into the digital world. Second, games do not
necessarily improve engagement, and in some cases, they may have the opposite
effect. The current study discusses the development and a brief assessment of
MiBoard, a multiplayer collaborative online board game designed to closely
emulate a previously developed physical board game, iSTART: The Board Game.
",danielle mcnamara,,2010.0,,"FLAIRS-23, May 22-23 2010",Dempsey2010,True,,arXiv,Not available,MiBoard: A Digital Game from a Physical World,47e9f9f2c54aba479e8a7360ece007a3,http://arxiv.org/abs/1009.2207v1
16722," This short note demonstrates how one can define a transformation of a
non-zero sum game into a zero sum, so that the optimal mixed strategy achieving
equilibrium always exists. The transformation is equivalent to introduction of
a passive player into a game (a player with a singleton set of pure
strategies), whose payoff depends on the actions of the active players, and it
is justified by the law of conservation of utility in a game. In a transformed
game, each participant plays against all other players, including the passive
player. The advantage of this approach is that the transformed game is zero-sum
and has an equilibrium solution. The optimal strategy and the value of the new
game, however, can be different from strategies that are rational in the
original game. We demonstrate the principle using the Prisoner's Dilemma
example.
",roman belavkin,,2010.0,,arXiv,Belavkin2010,True,,arXiv,Not available,Conservation Law of Utility and Equilibria in Non-Zero Sum Games,40499c637af8b1a9903fdfc327062d73,http://arxiv.org/abs/1010.2439v2
16723," We consider perfect-information reachability stochastic games for 2 players
on infinite graphs. We identify a subclass of such games, and prove two
interesting properties of it: first, Player Max always has optimal strategies
in games from this subclass, and second, these games are strongly determined.
The subclass is defined by the property that the set of all values can only
have one accumulation point -- 0. Our results nicely mirror recent results for
finitely-branching games, where, on the contrary, Player Min always has optimal
strategies. However, our proof methods are substantially different, because the
roles of the players are not symmetric. We also do not restrict the branching
of the games. Finally, we apply our results in the context of recently studied
One-Counter stochastic games.
",vaclav brozek,,2011.0,10.4204/EPTCS.54.5,"EPTCS 54, 2011, pp. 60-73",Brožek2011,True,,arXiv,Not available,Optimal Strategies in Infinite-state Stochastic Reachability Games,98c7395f9804a89fefd473ac6e206747,http://arxiv.org/abs/1103.1065v3
16724," Congestion games constitute an important class of games in which computing an
exact or even approximate pure Nash equilibrium is in general {\sf
PLS}-complete. We present a surprisingly simple polynomial-time algorithm that
computes O(1)-approximate Nash equilibria in these games. In particular, for
congestion games with linear latency functions, our algorithm computes
$(2+\epsilon)$-approximate pure Nash equilibria in time polynomial in the
number of players, the number of resources and $1/\epsilon$. It also applies to
games with polynomial latency functions with constant maximum degree $d$;
there, the approximation guarantee is $d^{O(d)}$. The algorithm essentially
identifies a polynomially long sequence of best-response moves that lead to an
approximate equilibrium; the existence of such short sequences is interesting
in itself. These are the first positive algorithmic results for approximate
equilibria in non-symmetric congestion games. We strengthen them further by
proving that, for congestion games that deviate from our mild assumptions,
computing $\rho$-approximate equilibria is {\sf PLS}-complete for any
polynomial-time computable $\rho$.
",ioannis caragiannis,,2011.0,,arXiv,Caragiannis2011,True,,arXiv,Not available,"Efficient computation of approximate pure Nash equilibria in congestion
games",ee25b4201dfc95a6c8a6daf9d7915cb8,http://arxiv.org/abs/1104.2690v2
16725," Congestion games constitute an important class of games in which computing an
exact or even approximate pure Nash equilibrium is in general {\sf
PLS}-complete. We present a surprisingly simple polynomial-time algorithm that
computes O(1)-approximate Nash equilibria in these games. In particular, for
congestion games with linear latency functions, our algorithm computes
$(2+\epsilon)$-approximate pure Nash equilibria in time polynomial in the
number of players, the number of resources and $1/\epsilon$. It also applies to
games with polynomial latency functions with constant maximum degree $d$;
there, the approximation guarantee is $d^{O(d)}$. The algorithm essentially
identifies a polynomially long sequence of best-response moves that lead to an
approximate equilibrium; the existence of such short sequences is interesting
in itself. These are the first positive algorithmic results for approximate
equilibria in non-symmetric congestion games. We strengthen them further by
proving that, for congestion games that deviate from our mild assumptions,
computing $\rho$-approximate equilibria is {\sf PLS}-complete for any
polynomial-time computable $\rho$.
",angelo fanelli,,2011.0,,arXiv,Caragiannis2011,True,,arXiv,Not available,"Efficient computation of approximate pure Nash equilibria in congestion
games",ee25b4201dfc95a6c8a6daf9d7915cb8,http://arxiv.org/abs/1104.2690v2
16726," Congestion games constitute an important class of games in which computing an
exact or even approximate pure Nash equilibrium is in general {\sf
PLS}-complete. We present a surprisingly simple polynomial-time algorithm that
computes O(1)-approximate Nash equilibria in these games. In particular, for
congestion games with linear latency functions, our algorithm computes
$(2+\epsilon)$-approximate pure Nash equilibria in time polynomial in the
number of players, the number of resources and $1/\epsilon$. It also applies to
games with polynomial latency functions with constant maximum degree $d$;
there, the approximation guarantee is $d^{O(d)}$. The algorithm essentially
identifies a polynomially long sequence of best-response moves that lead to an
approximate equilibrium; the existence of such short sequences is interesting
in itself. These are the first positive algorithmic results for approximate
equilibria in non-symmetric congestion games. We strengthen them further by
proving that, for congestion games that deviate from our mild assumptions,
computing $\rho$-approximate equilibria is {\sf PLS}-complete for any
polynomial-time computable $\rho$.
",nick gravin,,2011.0,,arXiv,Caragiannis2011,True,,arXiv,Not available,"Efficient computation of approximate pure Nash equilibria in congestion
games",ee25b4201dfc95a6c8a6daf9d7915cb8,http://arxiv.org/abs/1104.2690v2
16727," Congestion games constitute an important class of games in which computing an
exact or even approximate pure Nash equilibrium is in general {\sf
PLS}-complete. We present a surprisingly simple polynomial-time algorithm that
computes O(1)-approximate Nash equilibria in these games. In particular, for
congestion games with linear latency functions, our algorithm computes
$(2+\epsilon)$-approximate pure Nash equilibria in time polynomial in the
number of players, the number of resources and $1/\epsilon$. It also applies to
games with polynomial latency functions with constant maximum degree $d$;
there, the approximation guarantee is $d^{O(d)}$. The algorithm essentially
identifies a polynomially long sequence of best-response moves that lead to an
approximate equilibrium; the existence of such short sequences is interesting
in itself. These are the first positive algorithmic results for approximate
equilibria in non-symmetric congestion games. We strengthen them further by
proving that, for congestion games that deviate from our mild assumptions,
computing $\rho$-approximate equilibria is {\sf PLS}-complete for any
polynomial-time computable $\rho$.
",alexander skopalik,,2011.0,,arXiv,Caragiannis2011,True,,arXiv,Not available,"Efficient computation of approximate pure Nash equilibria in congestion
games",ee25b4201dfc95a6c8a6daf9d7915cb8,http://arxiv.org/abs/1104.2690v2
16728," The class of weakly acyclic games, which includes potential games and
dominance-solvable games, captures many practical application domains. In a
weakly acyclic game, from any starting state, there is a sequence of
better-response moves that leads to a pure Nash equilibrium; informally, these
are games in which natural distributed dynamics, such as better-response
dynamics, cannot enter inescapable oscillations. We establish a novel link
between such games and the existence of pure Nash equilibria in subgames.
Specifically, we show that the existence of a unique pure Nash equilibrium in
every subgame implies the weak acyclicity of a game. In contrast, the possible
existence of multiple pure Nash equilibria in every subgame is insufficient for
weak acyclicity in general; here, we also systematically identify the special
cases (in terms of the number of players and strategies) for which this is
sufficient to guarantee weak acyclicity.
",alex fabrikant,,2011.0,,arXiv,Fabrikant2011,True,,arXiv,Not available,On the Structure of Weakly Acyclic Games,037e21f1117b699b86f6e24ae451f172,http://arxiv.org/abs/1108.2092v1
16729," Celebrity games, a new model of network creation games, are introduced. The
specific features of this model are that players have different celebrity
weights and that a critical distance is taken into consideration. The aim of
any player is to be close (at distance less than critical) to the others,
mainly to those with high celebrity weights. The cost of each player depends on
the cost of establishing direct links to other players and on the sum of the
weights of those players at a distance greater than the critical distance. We
show that celebrity games always have pure Nash equilibria and we characterize
the family of subgames having connected Nash equilibria, the so-called star
celebrity games. We provide exact bounds for the PoA of celebrity games.
The PoA can be tightened when restricted to particular classes of Nash
equilibria graphs, in particular for trees.
",carme alvarez,,2015.0,,arXiv,Àlvarez2015,True,,arXiv,Not available,Stars and Celebrities: A Network Creation Game,e26aa61ae9943c4649b57de2ea95cdcf,http://arxiv.org/abs/1505.03718v3
16730," The class of weakly acyclic games, which includes potential games and
dominance-solvable games, captures many practical application domains. In a
weakly acyclic game, from any starting state, there is a sequence of
better-response moves that leads to a pure Nash equilibrium; informally, these
are games in which natural distributed dynamics, such as better-response
dynamics, cannot enter inescapable oscillations. We establish a novel link
between such games and the existence of pure Nash equilibria in subgames.
Specifically, we show that the existence of a unique pure Nash equilibrium in
every subgame implies the weak acyclicity of a game. In contrast, the possible
existence of multiple pure Nash equilibria in every subgame is insufficient for
weak acyclicity in general; here, we also systematically identify the special
cases (in terms of the number of players and strategies) for which this is
sufficient to guarantee weak acyclicity.
",aaron jaggard,,2011.0,,arXiv,Fabrikant2011,True,,arXiv,Not available,On the Structure of Weakly Acyclic Games,037e21f1117b699b86f6e24ae451f172,http://arxiv.org/abs/1108.2092v1
16731," The class of weakly acyclic games, which includes potential games and
dominance-solvable games, captures many practical application domains. In a
weakly acyclic game, from any starting state, there is a sequence of
better-response moves that leads to a pure Nash equilibrium; informally, these
are games in which natural distributed dynamics, such as better-response
dynamics, cannot enter inescapable oscillations. We establish a novel link
between such games and the existence of pure Nash equilibria in subgames.
Specifically, we show that the existence of a unique pure Nash equilibrium in
every subgame implies the weak acyclicity of a game. In contrast, the possible
existence of multiple pure Nash equilibria in every subgame is insufficient for
weak acyclicity in general; here, we also systematically identify the special
cases (in terms of the number of players and strategies) for which this is
sufficient to guarantee weak acyclicity.
",michael schapira,,2011.0,,arXiv,Fabrikant2011,True,,arXiv,Not available,On the Structure of Weakly Acyclic Games,037e21f1117b699b86f6e24ae451f172,http://arxiv.org/abs/1108.2092v1
16732," We consider an extension of strategic normal form games with a phase of
negotiations before the actual play of the game, where players can make binding
offers for transfer of utilities to other players after the play of the game,
in order to provide additional incentives for each other to play designated
strategies. Such offers are conditional on the recipients playing the specified
strategies and they effect transformations of the payoff matrix of the game by
accordingly transferring payoffs between players. We introduce and analyze
solution concepts for 2-player normal form games with such preplay offers under
various assumptions for the preplay negotiation phase and obtain results for
existence of efficient negotiation strategies of the players. Then we extend
the framework to coalitional preplay offers in N-player games, as well as to
extensive form games with inter-play offers for side payments.
",valentin goranko,,2012.0,,arXiv,Goranko2012,True,,arXiv,Not available,Non-cooperative games with preplay negotiations,74eaf391f08680429a40ac53097cc105,http://arxiv.org/abs/1208.1718v4
16733," We consider an extension of strategic normal form games with a phase of
negotiations before the actual play of the game, where players can make binding
offers for transfer of utilities to other players after the play of the game,
in order to provide additional incentives for each other to play designated
strategies. Such offers are conditional on the recipients playing the specified
strategies and they effect transformations of the payoff matrix of the game by
accordingly transferring payoffs between players. We introduce and analyze
solution concepts for 2-player normal form games with such preplay offers under
various assumptions for the preplay negotiation phase and obtain results for
existence of efficient negotiation strategies of the players. Then we extend
the framework to coalitional preplay offers in N-player games, as well as to
extensive form games with inter-play offers for side payments.
",paolo turrini,,2012.0,,arXiv,Goranko2012,True,,arXiv,Not available,Non-cooperative games with preplay negotiations,74eaf391f08680429a40ac53097cc105,http://arxiv.org/abs/1208.1718v4
16734," We propose interdependent defense (IDD) games, a computational game-theoretic
framework to study aspects of the interdependence of risk and security in
multi-agent systems under deliberate external attacks. Our model builds upon
interdependent security (IDS) games, a model due to Heal and Kunreuther that
considers the source of the risk to be the result of a fixed
randomized strategy. We adapt IDS games to model the attacker's deliberate
behavior. We define the attacker's pure-strategy space and utility function and
derive appropriate cost functions for the defenders. We provide a complete
characterization of mixed-strategy Nash equilibria (MSNE), and design a simple
polynomial-time algorithm for computing all of them, for an important subclass
of IDD games. In addition, we propose a random instance generator of (general)
IDD games based on a version of the real-world Internet-derived Autonomous
Systems (AS) graph (with around 27K nodes and 100K edges), and present
promising empirical results using a simple learning heuristic to compute
(approximate) MSNE in such games.
",hau chan,,2012.0,,arXiv,Chan2012,True,,arXiv,Not available,"Interdependent Defense Games: Modeling Interdependent Security under
Deliberate Attacks",0f6eba0d26c8332e0d78821b02dbcde6,http://arxiv.org/abs/1210.4838v1
16735," We propose interdependent defense (IDD) games, a computational game-theoretic
framework to study aspects of the interdependence of risk and security in
multi-agent systems under deliberate external attacks. Our model builds upon
interdependent security (IDS) games, a model due to Heal and Kunreuther that
considers the source of the risk to be the result of a fixed
randomized strategy. We adapt IDS games to model the attacker's deliberate
behavior. We define the attacker's pure-strategy space and utility function and
derive appropriate cost functions for the defenders. We provide a complete
characterization of mixed-strategy Nash equilibria (MSNE), and design a simple
polynomial-time algorithm for computing all of them, for an important subclass
of IDD games. In addition, we propose a random instance generator of (general)
IDD games based on a version of the real-world Internet-derived Autonomous
Systems (AS) graph (with around 27K nodes and 100K edges), and present
promising empirical results using a simple learning heuristic to compute
(approximate) MSNE in such games.
",michael ceyko,,2012.0,,arXiv,Chan2012,True,,arXiv,Not available,"Interdependent Defense Games: Modeling Interdependent Security under
Deliberate Attacks",0f6eba0d26c8332e0d78821b02dbcde6,http://arxiv.org/abs/1210.4838v1
16736," We propose interdependent defense (IDD) games, a computational game-theoretic
framework to study aspects of the interdependence of risk and security in
multi-agent systems under deliberate external attacks. Our model builds upon
interdependent security (IDS) games, a model due to Heal and Kunreuther that
considers the source of the risk to be the result of a fixed
randomized strategy. We adapt IDS games to model the attacker's deliberate
behavior. We define the attacker's pure-strategy space and utility function and
derive appropriate cost functions for the defenders. We provide a complete
characterization of mixed-strategy Nash equilibria (MSNE), and design a simple
polynomial-time algorithm for computing all of them, for an important subclass
of IDD games. In addition, we propose a random instance generator of (general)
IDD games based on a version of the real-world Internet-derived Autonomous
Systems (AS) graph (with around 27K nodes and 100K edges), and present
promising empirical results using a simple learning heuristic to compute
(approximate) MSNE in such games.
",luis ortiz,,2012.0,,arXiv,Chan2012,True,,arXiv,Not available,"Interdependent Defense Games: Modeling Interdependent Security under
Deliberate Attacks",0f6eba0d26c8332e0d78821b02dbcde6,http://arxiv.org/abs/1210.4838v1
16737," We consider an extension of strategic normal form games with a phase before
the actual play of the game, where players can make binding offers for transfer
of utilities to other players after the play of the game, contingent on the
recipient playing the strategy indicated in the offer. Such offers transform
the payoff matrix of the original game but preserve its non-cooperative nature.
The type of offers we focus on here are conditional on a suggested 'matching
offer' of the same kind made in return by the receiver. Players can exchange a
series of such offers, thus engaging in a bargaining process before a strategic
normal form game is played.
In this paper we study and analyze solution concepts for two-player normal
form games with such preplay negotiation phase, under several assumptions for
the bargaining power of the players, such as the possibility of withdrawing
previously made offers and opting out from the negotiation process, as well as
the value of time for the players in such negotiations. We obtain results
describing the possible solutions of such bargaining games and analyze the
degrees of efficiency and fairness that can be achieved in such negotiation
process.
",valentin goranko,,2013.0,,arXiv,Goranko2013,True,,arXiv,Not available,Two-player preplay negotiation games with conditional offers,18fed306496485dc468a4090c204ad93,http://arxiv.org/abs/1304.2161v2
16738," We consider an extension of strategic normal form games with a phase before
the actual play of the game, where players can make binding offers for transfer
of utilities to other players after the play of the game, contingent on the
recipient playing the strategy indicated in the offer. Such offers transform
the payoff matrix of the original game but preserve its non-cooperative nature.
The type of offers we focus on here are conditional on a suggested 'matching
offer' of the same kind made in return by the receiver. Players can exchange a
series of such offers, thus engaging in a bargaining process before a strategic
normal form game is played.
In this paper we study and analyze solution concepts for two-player normal
form games with such preplay negotiation phase, under several assumptions for
the bargaining power of the players, such as the possibility of withdrawing
previously made offers and opting out from the negotiation process, as well as
the value of time for the players in such negotiations. We obtain results
describing the possible solutions of such bargaining games and analyze the
degrees of efficiency and fairness that can be achieved in such negotiation
process.
",paolo turrini,,2013.0,,arXiv,Goranko2013,True,,arXiv,Not available,Two-player preplay negotiation games with conditional offers,18fed306496485dc468a4090c204ad93,http://arxiv.org/abs/1304.2161v2
16739," We study Monte Carlo tree search (MCTS) in zero-sum extensive-form games with
perfect information and simultaneous moves. We present a general template of
MCTS algorithms for these games, which can be instantiated by various selection
methods. We formally prove that if a selection method is $\epsilon$-Hannan
consistent in a matrix game and satisfies additional requirements on
exploration, then the MCTS algorithm eventually converges to an approximate
Nash equilibrium (NE) of the extensive-form game. We empirically evaluate this
claim using regret matching and Exp3 as the selection methods on randomly
generated games and empirically selected worst case games. We confirm the
formal result and show that additional MCTS variants also converge to
approximate NE on the evaluated games.
",viliam lisy,,2013.0,,"Advances in Neural Information Processing Systems 26, pp
2112-2120, 2013",Lisý2013,True,,arXiv,Not available,Convergence of Monte Carlo Tree Search in Simultaneous Move Games,1d8d700fa10087efa7dd2292aafe93cb,http://arxiv.org/abs/1310.8613v2
16740," Celebrity games, a new model of network creation games, are introduced. The
specific features of this model are that players have different celebrity
weights and that a critical distance is taken into consideration. The aim of
any player is to be close (at distance less than critical) to the others,
mainly to those with high celebrity weights. The cost of each player depends on
the cost of establishing direct links to other players and on the sum of the
weights of those players at a distance greater than the critical distance. We
show that celebrity games always have pure Nash equilibria and we characterize
the family of subgames having connected Nash equilibria, the so-called star
celebrity games. We provide exact bounds for the PoA of celebrity games.
The PoA can be tightened when restricted to particular classes of Nash
equilibria graphs, in particular for trees.
",maria blesa,,2015.0,,arXiv,Àlvarez2015,True,,arXiv,Not available,Stars and Celebrities: A Network Creation Game,e26aa61ae9943c4649b57de2ea95cdcf,http://arxiv.org/abs/1505.03718v3
16741," We study Monte Carlo tree search (MCTS) in zero-sum extensive-form games with
perfect information and simultaneous moves. We present a general template of
MCTS algorithms for these games, which can be instantiated by various selection
methods. We formally prove that if a selection method is $\epsilon$-Hannan
consistent in a matrix game and satisfies additional requirements on
exploration, then the MCTS algorithm eventually converges to an approximate
Nash equilibrium (NE) of the extensive-form game. We empirically evaluate this
claim using regret matching and Exp3 as the selection methods on randomly
generated games and empirically selected worst case games. We confirm the
formal result and show that additional MCTS variants also converge to
approximate NE on the evaluated games.
",vojtech kovarik,,2013.0,,"Advances in Neural Information Processing Systems 26, pp
2112-2120, 2013",Lisý2013,True,,arXiv,Not available,Convergence of Monte Carlo Tree Search in Simultaneous Move Games,1d8d700fa10087efa7dd2292aafe93cb,http://arxiv.org/abs/1310.8613v2
16742," We study Monte Carlo tree search (MCTS) in zero-sum extensive-form games with
perfect information and simultaneous moves. We present a general template of
MCTS algorithms for these games, which can be instantiated by various selection
methods. We formally prove that if a selection method is $\epsilon$-Hannan
consistent in a matrix game and satisfies additional requirements on
exploration, then the MCTS algorithm eventually converges to an approximate
Nash equilibrium (NE) of the extensive-form game. We empirically evaluate this
claim using regret matching and Exp3 as the selection methods on randomly
generated games and empirically selected worst case games. We confirm the
formal result and show that additional MCTS variants also converge to
approximate NE on the evaluated games.
",marc lanctot,,2013.0,,"Advances in Neural Information Processing Systems 26, pp
2112-2120, 2013",Lisý2013,True,,arXiv,Not available,Convergence of Monte Carlo Tree Search in Simultaneous Move Games,1d8d700fa10087efa7dd2292aafe93cb,http://arxiv.org/abs/1310.8613v2
16743," We study Monte Carlo tree search (MCTS) in zero-sum extensive-form games with
perfect information and simultaneous moves. We present a general template of
MCTS algorithms for these games, which can be instantiated by various selection
methods. We formally prove that if a selection method is $\epsilon$-Hannan
consistent in a matrix game and satisfies additional requirements on
exploration, then the MCTS algorithm eventually converges to an approximate
Nash equilibrium (NE) of the extensive-form game. We empirically evaluate this
claim using regret matching and Exp3 as the selection methods on randomly
generated games and empirically selected worst case games. We confirm the
formal result and show that additional MCTS variants also converge to
approximate NE on the evaluated games.
",branislav bosansky,,2013.0,,"Advances in Neural Information Processing Systems 26, pp
2112-2120, 2013",Lisý2013,True,,arXiv,Not available,Convergence of Monte Carlo Tree Search in Simultaneous Move Games,1d8d700fa10087efa7dd2292aafe93cb,http://arxiv.org/abs/1310.8613v2
16744," Recently, Dohrau et al. studied a zero-player game on switch graphs and
proved that deciding the termination of the game is in NP $\cap$ coNP. In this
short paper, we show that the search version of this game on switch graphs,
i.e., the task of finding a witness of termination (or of non-termination) is
in PLS.
",karthik s.,,2016.0,,arXiv,S.2016,True,,arXiv,Not available,Did the Train Reach its Destination: The Complexity of Finding a Witness,13c67e82fbf0ffbcc38ea3a4b1b13f6d,http://arxiv.org/abs/1609.03840v2
16745," We study an independent best-response dynamics on network games in which the
nodes (players) decide to revise their strategies independently with some
probability. We are interested in the convergence time to the equilibrium as a
function of this probability, the degree of the network, and the potential of
the underlying games.
",paolo penna,,2016.0,,arXiv,Penna2016,True,,arXiv,Not available,Independent lazy better-response dynamics on network games,e884e0994fd3d1f3cf33145c94d73aef,http://arxiv.org/abs/1609.08953v1
16746," We study an independent best-response dynamics on network games in which the
nodes (players) decide to revise their strategies independently with some
probability. We are interested in the convergence time to the equilibrium as a
function of this probability, the degree of the network, and the potential of
the underlying games.
",laurent viennot,,2016.0,,arXiv,Penna2016,True,,arXiv,Not available,Independent lazy better-response dynamics on network games,e884e0994fd3d1f3cf33145c94d73aef,http://arxiv.org/abs/1609.08953v1
16747," We study some mathematical aspects of the Mahjong game. In particular, we use
combinatorial theory and write a Python program to study some special features
of the game. The results confirm some folklore concerning the game, and expose
some unexpected results. Related results and possible future research in
connection to artificial intelligence are mentioned.
",yuan cheng,,2017.0,,arXiv,Cheng2017,True,,arXiv,Not available,"Mathematical aspect of the combinatorial game ""Mahjong""",0b5ccb0e0371c19b0a8f8aefff6d2869,http://arxiv.org/abs/1707.07345v1
16748," We study some mathematical aspects of the Mahjong game. In particular, we use
combinatorial theory and write a Python program to study some special features
of the game. The results confirm some folklore concerning the game, and expose
some unexpected results. Related results and possible future research in
connection to artificial intelligence are mentioned.
",chi-kwong li,,2017.0,,arXiv,Cheng2017,True,,arXiv,Not available,"Mathematical aspect of the combinatorial game ""Mahjong""",0b5ccb0e0371c19b0a8f8aefff6d2869,http://arxiv.org/abs/1707.07345v1
16749," We study some mathematical aspects of the Mahjong game. In particular, we use
combinatorial theory and write a Python program to study some special features
of the game. The results confirm some folklore concerning the game, and expose
some unexpected results. Related results and possible future research in
connection to artificial intelligence are mentioned.
",sharon li,,2017.0,,arXiv,Cheng2017,True,,arXiv,Not available,"Mathematical aspect of the combinatorial game ""Mahjong""",0b5ccb0e0371c19b0a8f8aefff6d2869,http://arxiv.org/abs/1707.07345v1
16750," We derive zero-determinant strategies for general multi-player multi-action
repeated incomplete-information games. By formulating zero-determinant strategy
in terms of linear algebra, we prove that linear payoff relations assigned by
players always have solutions. An example of a zero-determinant strategy in a
repeated incomplete-information game is also provided.
",masahiko ueda,,2018.0,,arXiv,Ueda2018,True,,arXiv,Not available,"Zero-determinant strategies in repeated incomplete-information games:
Consistency of payoff relations",33f04963cb027e393ee12661d7f0093a,http://arxiv.org/abs/1807.00472v1
16751," Celebrity games, a new model of network creation games, are introduced. The
specific features of this model are that players have different celebrity
weights and that a critical distance is taken into consideration. The aim of
any player is to be close (at distance less than critical) to the others,
mainly to those with high celebrity weights. The cost of each player depends on
the cost of establishing direct links to other players and on the sum of the
weights of those players at a distance greater than the critical distance. We
show that celebrity games always have pure Nash equilibria and we characterize
the family of subgames having connected Nash equilibria, the so-called star
celebrity games. We provide exact bounds for the PoA of celebrity games.
The PoA can be tightened when restricted to particular classes of Nash
equilibria graphs, in particular for trees.
",amalia duch,,2015.0,,arXiv,Àlvarez2015,True,,arXiv,Not available,Stars and Celebrities: A Network Creation Game,e26aa61ae9943c4649b57de2ea95cdcf,http://arxiv.org/abs/1505.03718v3
16752," We derive zero-determinant strategies for general multi-player multi-action
repeated incomplete-information games. By formulating zero-determinant strategy
in terms of linear algebra, we prove that linear payoff relations assigned by
players always have solutions. An example of a zero-determinant strategy in a
repeated incomplete-information game is also provided.
",toshiyuki tanaka,,2018.0,,arXiv,Ueda2018,True,,arXiv,Not available,"Zero-determinant strategies in repeated incomplete-information games:
Consistency of payoff relations",33f04963cb027e393ee12661d7f0093a,http://arxiv.org/abs/1807.00472v1
16753," The problem of the existence of Berge equilibria in the sense of Zhukovskii
in normal form finite games in pure and in mixed strategies is studied. The
example of a three-player game that has a Berge equilibrium neither in pure nor
in mixed strategies is given.
",jaroslaw pykacz,,2018.0,,arXiv,Pykacz2018,True,,arXiv,Not available,Example of a finite game with no Berge equilibria at all,2bcd3cc8e17652599d65d29f2dfcfe40,http://arxiv.org/abs/1807.05821v1
16754," The problem of the existence of Berge equilibria in the sense of Zhukovskii
in normal form finite games in pure and in mixed strategies is studied. The
example of a three-player game that has a Berge equilibrium neither in pure nor
in mixed strategies is given.
",pawel bytner,,2018.0,,arXiv,Pykacz2018,True,,arXiv,Not available,Example of a finite game with no Berge equilibria at all,2bcd3cc8e17652599d65d29f2dfcfe40,http://arxiv.org/abs/1807.05821v1
16755," The problem of the existence of Berge equilibria in the sense of Zhukovskii
in normal form finite games in pure and in mixed strategies is studied. The
example of a three-player game that has a Berge equilibrium neither in pure nor
in mixed strategies is given.
",piotr frackiewicz,,2018.0,,arXiv,Pykacz2018,True,,arXiv,Not available,Example of a finite game with no Berge equilibria at all,2bcd3cc8e17652599d65d29f2dfcfe40,http://arxiv.org/abs/1807.05821v1
16756," We establish, for the first time, a connection between stochastic games and
multi-parameter eigenvalue problems, using the theory developed by Shapley and
Snow (1950) in the context of matrix games. This connection provides new
results, new proofs, and new tools for studying stochastic games.
",luc attia,,2018.0,,arXiv,Attia2018,True,,arXiv,Not available,Shapley-Snow kernels in zero-sum stochastic games,03f70d5b89e55b7aeca0797190b6989b,http://arxiv.org/abs/1810.08798v1
16757," We establish, for the first time, a connection between stochastic games and
multi-parameter eigenvalue problems, using the theory developed by Shapley and
Snow (1950) in the context of matrix games. This connection provides new
results, new proofs, and new tools for studying stochastic games.
",miquel oliu-barton,,2018.0,,arXiv,Attia2018,True,,arXiv,Not available,Shapley-Snow kernels in zero-sum stochastic games,03f70d5b89e55b7aeca0797190b6989b,http://arxiv.org/abs/1810.08798v1
16758," In evolutionary game theory, repeated two-player games are used to study
strategy evolution in a population under natural selection. As the evolution
greatly depends on the interaction structure, there has been growing interest
in studying the games on graphs. In this setting, players occupy the vertices
of a graph and play the game only with their immediate neighbours. Various
evolutionary dynamics have been studied in this setting for different games.
Due to the complexity of the analysis, however, most of the work in this area
is experimental. This paper aims to contribute to a more complete
understanding, by providing rigorous analysis. We study the imitation dynamics
on two classes of graph: cycles and complete graphs. We focus on three well
known social dilemmas, namely the Prisoner's Dilemma, the Stag Hunt and the
Snowdrift Game. We also consider, for completeness, the so-called Harmony Game.
Our analysis shows that, on the cycle, all four games converge fast, either to
total cooperation or total defection. On the complete graph, all but the
Snowdrift game converge fast, either to cooperation or defection. The Snowdrift
game reaches a metastable state fast, where cooperators and defectors coexist.
It will converge to cooperation or defection only after spending time in this
state which is exponential in the size, n, of the graph. In exceptional cases,
it will remain in this state indefinitely. Our theoretical results are
supported by experimental investigations.
",colin cooper,,2011.0,,arXiv,Cooper2011,True,,arXiv,Not available,On the Imitation Strategy for Games on Graphs,d2fad0d686c969110a23bf37b97d97f3,http://arxiv.org/abs/1102.3879v1
16759," In evolutionary game theory, repeated two-player games are used to study
strategy evolution in a population under natural selection. As the evolution
greatly depends on the interaction structure, there has been growing interest
in studying the games on graphs. In this setting, players occupy the vertices
of a graph and play the game only with their immediate neighbours. Various
evolutionary dynamics have been studied in this setting for different games.
Due to the complexity of the analysis, however, most of the work in this area
is experimental. This paper aims to contribute to a more complete
understanding, by providing rigorous analysis. We study the imitation dynamics
on two classes of graph: cycles and complete graphs. We focus on three well
known social dilemmas, namely the Prisoner's Dilemma, the Stag Hunt and the
Snowdrift Game. We also consider, for completeness, the so-called Harmony Game.
Our analysis shows that, on the cycle, all four games converge fast, either to
total cooperation or total defection. On the complete graph, all but the
Snowdrift game converge fast, either to cooperation or defection. The Snowdrift
game reaches a metastable state fast, where cooperators and defectors coexist.
It will converge to cooperation or defection only after spending time in this
state which is exponential in the size, n, of the graph. In exceptional cases,
it will remain in this state indefinitely. Our theoretical results are
supported by experimental investigations.
",martin dyer,,2011.0,,arXiv,Cooper2011,True,,arXiv,Not available,On the Imitation Strategy for Games on Graphs,d2fad0d686c969110a23bf37b97d97f3,http://arxiv.org/abs/1102.3879v1
16760," In evolutionary game theory, repeated two-player games are used to study
strategy evolution in a population under natural selection. As the evolution
greatly depends on the interaction structure, there has been growing interest
in studying the games on graphs. In this setting, players occupy the vertices
of a graph and play the game only with their immediate neighbours. Various
evolutionary dynamics have been studied in this setting for different games.
Due to the complexity of the analysis, however, most of the work in this area
is experimental. This paper aims to contribute to a more complete
understanding, by providing rigorous analysis. We study the imitation dynamics
on two classes of graph: cycles and complete graphs. We focus on three well
known social dilemmas, namely the Prisoner's Dilemma, the Stag Hunt and the
Snowdrift Game. We also consider, for completeness, the so-called Harmony Game.
Our analysis shows that, on the cycle, all four games converge fast, either to
total cooperation or total defection. On the complete graph, all but the
Snowdrift game converge fast, either to cooperation or defection. The Snowdrift
game reaches a metastable state fast, where cooperators and defectors coexist.
It will converge to cooperation or defection only after spending time in this
state which is exponential in the size, n, of the graph. In exceptional cases,
it will remain in this state indefinitely. Our theoretical results are
supported by experimental investigations.
",velumailum mohanaraj,,2011.0,,arXiv,Cooper2011,True,,arXiv,Not available,On the Imitation Strategy for Games on Graphs,d2fad0d686c969110a23bf37b97d97f3,http://arxiv.org/abs/1102.3879v1
16761," We investigate a routing game that allows for the creation of coalitions,
within the framework of cooperative game theory. Specifically, we describe the
cost of each coalition as its maximin value. This represents the performance
that the coalition can guarantee itself, under any (including worst)
conditions. We then investigate fundamental solution concepts of the considered
cooperative game, namely the core and a variant of the min-max fair nucleolus.
We consider two types of routing games based on the agents' Performance
Objectives, namely bottleneck routing games and additive routing games. For
bottleneck games we establish that the core includes all system-optimal flow
profiles and that the nucleolus is system-optimal or disadvantageous for the
smallest agent in the system. Moreover, we describe an interesting set of
scenarios for which the nucleolus is always system-optimal. For additive games,
we focus on the fundamental load balancing game of routing over parallel links.
We establish that, in contrast to bottleneck games, not all system-optimal flow
profiles lie in the core. However, we describe a specific system-optimal flow
profile that does lie in the core and, under assumptions of symmetry, is equal
to the nucleolus.
",gideon blocq,,2013.0,,arXiv,Blocq2013,True,,arXiv,Not available,Coalitions in Routing Games: A Worst-Case Perspective,ba511757fcec77d71419f5a829f0259e,http://arxiv.org/abs/1310.3487v4
16762," Celebrity games, a new model of network creation games, are introduced. The
specific features of this model are that players have different celebrity
weights and that a critical distance is taken into consideration. The aim of
any player is to be close (at distance less than critical) to the others,
mainly to those with high celebrity weights. The cost of each player depends on
the cost of establishing direct links to other players and on the sum of the
weights of those players at a distance greater than the critical distance. We
show that celebrity games always have pure Nash equilibria and we characterize
the family of subgames having connected Nash equilibria, the so-called star
celebrity games. We provide exact bounds for the PoA of celebrity games.
The PoA can be tightened when restricted to particular classes of Nash
equilibria graphs, in particular for trees.
",arnau messegue,,2015.0,,arXiv,Àlvarez2015,True,,arXiv,Not available,Stars and Celebrities: A Network Creation Game,e26aa61ae9943c4649b57de2ea95cdcf,http://arxiv.org/abs/1505.03718v3
16763," We investigate a routing game that allows for the creation of coalitions,
within the framework of cooperative game theory. Specifically, we describe the
cost of each coalition as its maximin value. This represents the performance
that the coalition can guarantee itself, under any (including worst)
conditions. We then investigate fundamental solution concepts of the considered
cooperative game, namely the core and a variant of the min-max fair nucleolus.
We consider two types of routing games based on the agents' Performance
Objectives, namely bottleneck routing games and additive routing games. For
bottleneck games we establish that the core includes all system-optimal flow
profiles and that the nucleolus is system-optimal or disadvantageous for the
smallest agent in the system. Moreover, we describe an interesting set of
scenarios for which the nucleolus is always system-optimal. For additive games,
we focus on the fundamental load balancing game of routing over parallel links.
We establish that, contrary to bottleneck games, not all system-optimal flow
profiles lie in the core. However, we describe a specific system-optimal flow
profile that does lie in the core and, under assumptions of symmetry, is equal
to the nucleolus.
",ariel orda,,2013.0,,arXiv,Blocq2013,True,,arXiv,Not available,Coalitions in Routing Games: A Worst-Case Perspective,ba511757fcec77d71419f5a829f0259e,http://arxiv.org/abs/1310.3487v4
16764," Energy games are a well-studied class of 2-player turn-based games on a
finite graph where transitions are labeled with integer vectors which represent
changes in a multidimensional resource (the energy). One player tries to keep
the cumulative changes non-negative in every component while the other tries to
frustrate this. We consider generalized energy games played on infinite game
graphs induced by pushdown automata (modelling recursion) or their subclass of
one-counter automata. Our main result is that energy games are decidable in the
case where the game graph is induced by a one-counter automaton and the energy
is one-dimensional. On the other hand, every further generalization is
undecidable: Energy games on one-counter automata with a 2-dimensional energy
are undecidable, and energy games on pushdown automata are undecidable even if
the energy is one-dimensional. Furthermore, we show that energy games and
simulation games are inter-reducible, and thus we additionally obtain several
new (un)decidability results for the problem of checking simulation preorder
between pushdown automata and vector addition systems.
",parosh abdulla,,2014.0,,"Full version (including proofs) of material presented at CSL-LICS
2014 (Vienna, Austria)",Abdulla2014,True,,arXiv,Not available,Infinite-State Energy Games,d8a0b7f60f4c2ce85ccf3e0b1dc2480c,http://arxiv.org/abs/1405.0628v1
16765," Energy games are a well-studied class of 2-player turn-based games on a
finite graph where transitions are labeled with integer vectors which represent
changes in a multidimensional resource (the energy). One player tries to keep
the cumulative changes non-negative in every component while the other tries to
frustrate this. We consider generalized energy games played on infinite game
graphs induced by pushdown automata (modelling recursion) or their subclass of
one-counter automata. Our main result is that energy games are decidable in the
case where the game graph is induced by a one-counter automaton and the energy
is one-dimensional. On the other hand, every further generalization is
undecidable: Energy games on one-counter automata with a 2-dimensional energy
are undecidable, and energy games on pushdown automata are undecidable even if
the energy is one-dimensional. Furthermore, we show that energy games and
simulation games are inter-reducible, and thus we additionally obtain several
new (un)decidability results for the problem of checking simulation preorder
between pushdown automata and vector addition systems.
",mohamed atig,,2014.0,,"Full version (including proofs) of material presented at CSL-LICS
2014 (Vienna, Austria)",Abdulla2014,True,,arXiv,Not available,Infinite-State Energy Games,d8a0b7f60f4c2ce85ccf3e0b1dc2480c,http://arxiv.org/abs/1405.0628v1
16766," Energy games are a well-studied class of 2-player turn-based games on a
finite graph where transitions are labeled with integer vectors which represent
changes in a multidimensional resource (the energy). One player tries to keep
the cumulative changes non-negative in every component while the other tries to
frustrate this. We consider generalized energy games played on infinite game
graphs induced by pushdown automata (modelling recursion) or their subclass of
one-counter automata. Our main result is that energy games are decidable in the
case where the game graph is induced by a one-counter automaton and the energy
is one-dimensional. On the other hand, every further generalization is
undecidable: Energy games on one-counter automata with a 2-dimensional energy
are undecidable, and energy games on pushdown automata are undecidable even if
the energy is one-dimensional. Furthermore, we show that energy games and
simulation games are inter-reducible, and thus we additionally obtain several
new (un)decidability results for the problem of checking simulation preorder
between pushdown automata and vector addition systems.
",piotr hofman,,2014.0,,"Full version (including proofs) of material presented at CSL-LICS
2014 (Vienna, Austria)",Abdulla2014,True,,arXiv,Not available,Infinite-State Energy Games,d8a0b7f60f4c2ce85ccf3e0b1dc2480c,http://arxiv.org/abs/1405.0628v1
16767," Energy games are a well-studied class of 2-player turn-based games on a
finite graph where transitions are labeled with integer vectors which represent
changes in a multidimensional resource (the energy). One player tries to keep
the cumulative changes non-negative in every component while the other tries to
frustrate this. We consider generalized energy games played on infinite game
graphs induced by pushdown automata (modelling recursion) or their subclass of
one-counter automata. Our main result is that energy games are decidable in the
case where the game graph is induced by a one-counter automaton and the energy
is one-dimensional. On the other hand, every further generalization is
undecidable: Energy games on one-counter automata with a 2-dimensional energy
are undecidable, and energy games on pushdown automata are undecidable even if
the energy is one-dimensional. Furthermore, we show that energy games and
simulation games are inter-reducible, and thus we additionally obtain several
new (un)decidability results for the problem of checking simulation preorder
between pushdown automata and vector addition systems.
",richard mayr,,2014.0,,"Full version (including proofs) of material presented at CSL-LICS
2014 (Vienna, Austria)",Abdulla2014,True,,arXiv,Not available,Infinite-State Energy Games,d8a0b7f60f4c2ce85ccf3e0b1dc2480c,http://arxiv.org/abs/1405.0628v1
16768," Energy games are a well-studied class of 2-player turn-based games on a
finite graph where transitions are labeled with integer vectors which represent
changes in a multidimensional resource (the energy). One player tries to keep
the cumulative changes non-negative in every component while the other tries to
frustrate this. We consider generalized energy games played on infinite game
graphs induced by pushdown automata (modelling recursion) or their subclass of
one-counter automata. Our main result is that energy games are decidable in the
case where the game graph is induced by a one-counter automaton and the energy
is one-dimensional. On the other hand, every further generalization is
undecidable: Energy games on one-counter automata with a 2-dimensional energy
are undecidable, and energy games on pushdown automata are undecidable even if
the energy is one-dimensional. Furthermore, we show that energy games and
simulation games are inter-reducible, and thus we additionally obtain several
new (un)decidability results for the problem of checking simulation preorder
between pushdown automata and vector addition systems.
",k. kumar,,2014.0,,"Full version (including proofs) of material presented at CSL-LICS
2014 (Vienna, Austria)",Abdulla2014,True,,arXiv,Not available,Infinite-State Energy Games,d8a0b7f60f4c2ce85ccf3e0b1dc2480c,http://arxiv.org/abs/1405.0628v1
16769," Energy games are a well-studied class of 2-player turn-based games on a
finite graph where transitions are labeled with integer vectors which represent
changes in a multidimensional resource (the energy). One player tries to keep
the cumulative changes non-negative in every component while the other tries to
frustrate this. We consider generalized energy games played on infinite game
graphs induced by pushdown automata (modelling recursion) or their subclass of
one-counter automata. Our main result is that energy games are decidable in the
case where the game graph is induced by a one-counter automaton and the energy
is one-dimensional. On the other hand, every further generalization is
undecidable: Energy games on one-counter automata with a 2-dimensional energy
are undecidable, and energy games on pushdown automata are undecidable even if
the energy is one-dimensional. Furthermore, we show that energy games and
simulation games are inter-reducible, and thus we additionally obtain several
new (un)decidability results for the problem of checking simulation preorder
between pushdown automata and vector addition systems.
",patrick totzke,,2014.0,,"Full version (including proofs) of material presented at CSL-LICS
2014 (Vienna, Austria)",Abdulla2014,True,,arXiv,Not available,Infinite-State Energy Games,d8a0b7f60f4c2ce85ccf3e0b1dc2480c,http://arxiv.org/abs/1405.0628v1
16770," Cyber literacy merits serious research attention because it addresses a
confluence of specialization and generalization; cybersecurity is often
conceived of as approachable only by a technological intelligentsia, yet its
interdependent nature demands education for a broad population. Therefore,
educational tools should lead participants to discover technical knowledge in
an accessible and attractive framework. In this paper, we present Protection
and Deception (P&G), a novel two-player board game. P&G has three main
contributions. First, it builds cyber literacy by giving participants
""hands-on"" experience with game pieces that have the capabilities of
cyber-attacks such as worms, masquerading attacks/spoofs, replay attacks, and
Trojans. Second, P&G teaches the important game-theoretic concepts of
asymmetric information and resource allocation implicitly and non-obtrusively
through its game play. Finally, it strives for the important objective of
security education for underrepresented minorities and people without explicit
technical experience. We tested P&G at a community center in Manhattan with
middle- and high school students, and observed enjoyment and increased cyber
literacy along with suggestions for improvement of the game. Together with
these results, our paper also presents images of the attractive board design
and 3D printed game pieces, together with a Monte-Carlo analysis that we used
to ensure a balanced gaming experience.
",saboor zahir,,2015.0,,arXiv,Zahir2015,True,,arXiv,Not available,"Protection and Deception: Discovering Game Theory and Cyber Literacy
through a Novel Board Game Experience",17309a8c6f23a2357eccd8291570904e,http://arxiv.org/abs/1505.05570v1
16771," Cyber literacy merits serious research attention because it addresses a
confluence of specialization and generalization; cybersecurity is often
conceived of as approachable only by a technological intelligentsia, yet its
interdependent nature demands education for a broad population. Therefore,
educational tools should lead participants to discover technical knowledge in
an accessible and attractive framework. In this paper, we present Protection
and Deception (P&G), a novel two-player board game. P&G has three main
contributions. First, it builds cyber literacy by giving participants
""hands-on"" experience with game pieces that have the capabilities of
cyber-attacks such as worms, masquerading attacks/spoofs, replay attacks, and
Trojans. Second, P&G teaches the important game-theoretic concepts of
asymmetric information and resource allocation implicitly and non-obtrusively
through its game play. Finally, it strives for the important objective of
security education for underrepresented minorities and people without explicit
technical experience. We tested P&G at a community center in Manhattan with
middle- and high school students, and observed enjoyment and increased cyber
literacy along with suggestions for improvement of the game. Together with
these results, our paper also presents images of the attractive board design
and 3D printed game pieces, together with a Monte-Carlo analysis that we used
to ensure a balanced gaming experience.
",john pak,,2015.0,,arXiv,Zahir2015,True,,arXiv,Not available,"Protection and Deception: Discovering Game Theory and Cyber Literacy
through a Novel Board Game Experience",17309a8c6f23a2357eccd8291570904e,http://arxiv.org/abs/1505.05570v1
16772," Cyber literacy merits serious research attention because it addresses a
confluence of specialization and generalization; cybersecurity is often
conceived of as approachable only by a technological intelligentsia, yet its
interdependent nature demands education for a broad population. Therefore,
educational tools should lead participants to discover technical knowledge in
an accessible and attractive framework. In this paper, we present Protection
and Deception (P&G), a novel two-player board game. P&G has three main
contributions. First, it builds cyber literacy by giving participants
""hands-on"" experience with game pieces that have the capabilities of
cyber-attacks such as worms, masquerading attacks/spoofs, replay attacks, and
Trojans. Second, P&G teaches the important game-theoretic concepts of
asymmetric information and resource allocation implicitly and non-obtrusively
through its game play. Finally, it strives for the important objective of
security education for underrepresented minorities and people without explicit
technical experience. We tested P&G at a community center in Manhattan with
middle- and high school students, and observed enjoyment and increased cyber
literacy along with suggestions for improvement of the game. Together with
these results, our paper also presents images of the attractive board design
and 3D printed game pieces, together with a Monte-Carlo analysis that we used
to ensure a balanced gaming experience.
",jatinder singh,,2015.0,,arXiv,Zahir2015,True,,arXiv,Not available,"Protection and Deception: Discovering Game Theory and Cyber Literacy
through a Novel Board Game Experience",17309a8c6f23a2357eccd8291570904e,http://arxiv.org/abs/1505.05570v1
16773," Celebrity games, a new model of network creation games, are introduced. The
specific features of this model are that players have different celebrity
weights and that a critical distance is taken into consideration. The aim of
any player is to be close (at distance less than critical) to the others,
mainly to those with high celebrity weights. The cost of each player depends on
the cost of establishing direct links to other players and on the sum of the
weights of those players at a distance greater than the critical distance. We
show that celebrity games always have pure Nash equilibria and we characterize
the family of subgames having connected Nash equilibria, the so-called star
celebrity games. We provide exact bounds for the PoA of celebrity games.
The PoA can be tightened when restricted to particular classes of Nash
equilibria graphs, in particular for trees.
",maria serna,,2015.0,,arXiv,Àlvarez2015,True,,arXiv,Not available,Stars and Celebrities: A Network Creation Game,e26aa61ae9943c4649b57de2ea95cdcf,http://arxiv.org/abs/1505.03718v3
16774," Cyber literacy merits serious research attention because it addresses a
confluence of specialization and generalization; cybersecurity is often
conceived of as approachable only by a technological intelligentsia, yet its
interdependent nature demands education for a broad population. Therefore,
educational tools should lead participants to discover technical knowledge in
an accessible and attractive framework. In this paper, we present Protection
and Deception (P&G), a novel two-player board game. P&G has three main
contributions. First, it builds cyber literacy by giving participants
""hands-on"" experience with game pieces that have the capabilities of
cyber-attacks such as worms, masquerading attacks/spoofs, replay attacks, and
Trojans. Second, P&G teaches the important game-theoretic concepts of
asymmetric information and resource allocation implicitly and non-obtrusively
through its game play. Finally, it strives for the important objective of
security education for underrepresented minorities and people without explicit
technical experience. We tested P&G at a community center in Manhattan with
middle- and high school students, and observed enjoyment and increased cyber
literacy along with suggestions for improvement of the game. Together with
these results, our paper also presents images of the attractive board design
and 3D printed game pieces, together with a Monte-Carlo analysis that we used
to ensure a balanced gaming experience.
",jeffrey pawlick,,2015.0,,arXiv,Zahir2015,True,,arXiv,Not available,"Protection and Deception: Discovering Game Theory and Cyber Literacy
through a Novel Board Game Experience",17309a8c6f23a2357eccd8291570904e,http://arxiv.org/abs/1505.05570v1
16775," Cyber literacy merits serious research attention because it addresses a
confluence of specialization and generalization; cybersecurity is often
conceived of as approachable only by a technological intelligentsia, yet its
interdependent nature demands education for a broad population. Therefore,
educational tools should lead participants to discover technical knowledge in
an accessible and attractive framework. In this paper, we present Protection
and Deception (P&G), a novel two-player board game. P&G has three main
contributions. First, it builds cyber literacy by giving participants
""hands-on"" experience with game pieces that have the capabilities of
cyber-attacks such as worms, masquerading attacks/spoofs, replay attacks, and
Trojans. Second, P&G teaches the important game-theoretic concepts of
asymmetric information and resource allocation implicitly and non-obtrusively
through its game play. Finally, it strives for the important objective of
security education for underrepresented minorities and people without explicit
technical experience. We tested P&G at a community center in Manhattan with
middle- and high school students, and observed enjoyment and increased cyber
literacy along with suggestions for improvement of the game. Together with
these results, our paper also presents images of the attractive board design
and 3D printed game pieces, together with a Monte-Carlo analysis that we used
to ensure a balanced gaming experience.
",quanyan zhu,,2015.0,,arXiv,Zahir2015,True,,arXiv,Not available,"Protection and Deception: Discovering Game Theory and Cyber Literacy
through a Novel Board Game Experience",17309a8c6f23a2357eccd8291570904e,http://arxiv.org/abs/1505.05570v1
16776," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them failed to consider the two techniques jointly,
and the selfishness of the cloudlet and access point (AP) is ignored. Inspired by
the group-buying mechanism, this paper proposes three-stage auction schemes by
combining cloudlet placement and resource assignment, to improve the social
welfare subject to the economic properties. We first divide all MUs into some
small groups according to the associated APs. Then the MUs in the same group can
trade with cloudlets in a group-buying way through the APs. Finally, the MUs
pay for the cloudlets if they are the winners in the auction scheme. We prove
that our auction schemes can work in polynomial time. We also provide the
proofs for economic properties in theory. For the purpose of performance
comparison, we compare the proposed schemes with HAF, which is a centralized
cloudlet placement scheme without auction. Numerical results confirm the
correctness and efficiency of the proposed schemes.
",gangqiang zhou,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16777," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them failed to consider the two techniques jointly,
and the selfishness of the cloudlet and access point (AP) is ignored. Inspired by
the group-buying mechanism, this paper proposes three-stage auction schemes by
combining cloudlet placement and resource assignment, to improve the social
welfare subject to the economic properties. We first divide all MUs into some
small groups according to the associated APs. Then the MUs in the same group can
trade with cloudlets in a group-buying way through the APs. Finally, the MUs
pay for the cloudlets if they are the winners in the auction scheme. We prove
that our auction schemes can work in polynomial time. We also provide the
proofs for economic properties in theory. For the purpose of performance
comparison, we compare the proposed schemes with HAF, which is a centralized
cloudlet placement scheme without auction. Numerical results confirm the
correctness and efficiency of the proposed schemes.
",jigang wu,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16778," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them failed to consider the two techniques jointly,
and the selfishness of the cloudlet and access point (AP) is ignored. Inspired by
the group-buying mechanism, this paper proposes three-stage auction schemes by
combining cloudlet placement and resource assignment, to improve the social
welfare subject to the economic properties. We first divide all MUs into some
small groups according to the associated APs. Then the MUs in the same group can
trade with cloudlets in a group-buying way through the APs. Finally, the MUs
pay for the cloudlets if they are the winners in the auction scheme. We prove
that our auction schemes can work in polynomial time. We also provide the
proofs for economic properties in theory. For the purpose of performance
comparison, we compare the proposed schemes with HAF, which is a centralized
cloudlet placement scheme without auction. Numerical results confirm the
correctness and efficiency of the proposed schemes.
",long chen,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16779," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them failed to consider the two techniques jointly,
and the selfishness of the cloudlet and access point (AP) is ignored. Inspired by
the group-buying mechanism, this paper proposes three-stage auction schemes by
combining cloudlet placement and resource assignment, to improve the social
welfare subject to the economic properties. We first divide all MUs into some
small groups according to the associated APs. Then the MUs in the same group can
trade with cloudlets in a group-buying way through the APs. Finally, the MUs
pay for the cloudlets if they are the winners in the auction scheme. We prove
that our auction schemes can work in polynomial time. We also provide the
proofs for economic properties in theory. For the purpose of performance
comparison, we compare the proposed schemes with HAF, which is a centralized
cloudlet placement scheme without auction. Numerical results confirm the
correctness and efficiency of the proposed schemes.
",guiyuan jiang,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16780," Cloudlet deployment and resource allocation for mobile users (MUs) have been
extensively studied in existing works to address computation resource scarcity.
However, most of them failed to consider the two techniques jointly,
and the selfishness of the cloudlet and access point (AP) is ignored. Inspired by
the group-buying mechanism, this paper proposes three-stage auction schemes by
combining cloudlet placement and resource assignment, to improve the social
welfare subject to the economic properties. We first divide all MUs into some
small groups according to the associated APs. Then the MUs in the same group can
trade with cloudlets in a group-buying way through the APs. Finally, the MUs
pay for the cloudlets if they are the winners in the auction scheme. We prove
that our auction schemes can work in polynomial time. We also provide the
proofs for economic properties in theory. For the purpose of performance
comparison, we compare the proposed schemes with HAF, which is a centralized
cloudlet placement scheme without auction. Numerical results confirm the
correctness and efficiency of the proposed schemes.
",siew-kei lam,,2018.0,,arXiv,Zhou2018,True,,arXiv,Not available,"Efficient Three-stage Auction Schemes for Cloudlets Deployment in
Wireless Access Network",3be0f0a6b751b671460b7e0665bc29ef,http://arxiv.org/abs/1804.01512v1
16781," We develop a novel optimization model to maximize the profit of a Demand-Side
Platform (DSP) while ensuring that the budget utilization preferences of the
DSP's advertiser clients are adequately met. Our model is highly flexible and
can be applied in a Real-Time Bidding (RTB) environment with arbitrary auction
types, e.g., both first and second price auctions. Our proposed formulation
leads to a non-convex optimization problem due to the joint optimization over
both impression allocation and bid price decisions. Using Fenchel duality
theory, we construct a dual problem that is convex and can be solved
efficiently to obtain feasible bidding prices and allocation variables that can
be deployed in an RTB setting. With a few minimal additional assumptions on the
properties of the auctions, we demonstrate theoretically that our
computationally efficient procedure based on convex optimization principles is
guaranteed to deliver a globally optimal solution. We conduct experiments using
data from a real DSP to validate our theoretical findings and to demonstrate
that our method successfully trades off between DSP profitability and budget
utilization in a simulated online environment.
",alfonso lobos,,2018.0,,arXiv,Lobos2018,True,,arXiv,Not available,"Optimal Bidding, Allocation and Budget Spending for a Demand Side
Platform Under Many Auction Types",337486461d7ca1a9c4f51c638cb6f80f,http://arxiv.org/abs/1805.11645v1
16782," We develop a novel optimization model to maximize the profit of a Demand-Side
Platform (DSP) while ensuring that the budget utilization preferences of the
DSP's advertiser clients are adequately met. Our model is highly flexible and
can be applied in a Real-Time Bidding (RTB) environment with arbitrary auction
types, e.g., both first and second price auctions. Our proposed formulation
leads to a non-convex optimization problem due to the joint optimization over
both impression allocation and bid price decisions. Using Fenchel duality
theory, we construct a dual problem that is convex and can be solved
efficiently to obtain feasible bidding prices and allocation variables that can
be deployed in an RTB setting. With a few minimal additional assumptions on the
properties of the auctions, we demonstrate theoretically that our
computationally efficient procedure based on convex optimization principles is
guaranteed to deliver a globally optimal solution. We conduct experiments using
data from a real DSP to validate our theoretical findings and to demonstrate
that our method successfully trades off between DSP profitability and budget
utilization in a simulated online environment.
",paul grigas,,2018.0,,arXiv,Lobos2018,True,,arXiv,Not available,"Optimal Bidding, Allocation and Budget Spending for a Demand Side
Platform Under Many Auction Types",337486461d7ca1a9c4f51c638cb6f80f,http://arxiv.org/abs/1805.11645v1
16783," We develop a novel optimization model to maximize the profit of a Demand-Side
Platform (DSP) while ensuring that the budget utilization preferences of the
DSP's advertiser clients are adequately met. Our model is highly flexible and
can be applied in a Real-Time Bidding (RTB) environment with arbitrary auction
types, e.g., both first and second price auctions. Our proposed formulation
leads to a non-convex optimization problem due to the joint optimization over
both impression allocation and bid price decisions. Using Fenchel duality
theory, we construct a dual problem that is convex and can be solved
efficiently to obtain feasible bidding prices and allocation variables that can
be deployed in an RTB setting. With a few minimal additional assumptions on the
properties of the auctions, we demonstrate theoretically that our
computationally efficient procedure based on convex optimization principles is
guaranteed to deliver a globally optimal solution. We conduct experiments using
data from a real DSP to validate our theoretical findings and to demonstrate
that our method successfully trades off between DSP profitability and budget
utilization in a simulated online environment.
",zheng wen,,2018.0,,arXiv,Lobos2018,True,,arXiv,Not available,"Optimal Bidding, Allocation and Budget Spending for a Demand Side
Platform Under Many Auction Types",337486461d7ca1a9c4f51c638cb6f80f,http://arxiv.org/abs/1805.11645v1
16784," We present a new form of a Parrondo game using discrete-time quantum walk on
a line. The two players A and B, with different quantum coin operators, each
individually losing the game, can develop a strategy to emerge as joint winners
by using their coins alternatively, or in combination for each step of the
quantum walk evolution. We also present a strategy for a player A (B) to have a
winning probability greater than player B (A). The significance of the game
strategy in information theory and physical applications is also discussed.
",c. chandrashekar,,2010.0,10.1016/j.physleta.2011.02.071,"Physics Letters A 375 (2011), pp. 1553-1558",Chandrashekar2010,True,,arXiv,Not available,Parrondo's game using a discrete-time quantum walk,ee29402383b7e8d780d57f3d0e386dae,http://arxiv.org/abs/1008.5121v2
16785," Subgame perfect equilibria are specific Nash equilibria in perfect
information games in extensive form. They are important because they relate to
the rationality of the players. They always exist in infinite games with
continuous real-valued payoffs, but may fail to exist even in simple games with
slightly discontinuous payoffs. This article considers only games whose outcome
functions are measurable in the Hausdorff difference hierarchy of the open sets
(\textit{i.e.} $\Delta^0_2$ when in the Baire space), and it characterizes the
families of linear preferences such that every game using these preferences has
a subgame perfect equilibrium: the preferences without infinite ascending
chains (of course), and such that for all players $a$ and $b$ and outcomes
$x,y,z$ we have $\neg(z <_a y <_a x \,\wedge\, x <_b z <_b y)$. Moreover at
each node of the game, the equilibrium constructed for the proof is
Pareto-optimal among all the outcomes occurring in the subgame. Additional
results for non-linear preferences are presented.
",stephane roux,,2015.0,,arXiv,Roux2015,True,,arXiv,Not available,"Infinite subgame perfect equilibrium in the Hausdorff difference
hierarchy",136741611066cee7b1c5532417026c9d,http://arxiv.org/abs/1505.06320v2
16786," We develop a novel optimization model to maximize the profit of a Demand-Side
Platform (DSP) while ensuring that the budget utilization preferences of the
DSP's advertiser clients are adequately met. Our model is highly flexible and
can be applied in a Real-Time Bidding environment (RTB) with arbitrary auction
types, e.g., both first and second price auctions. Our proposed formulation
leads to a non-convex optimization problem due to the joint optimization over
both impression allocation and bid price decisions. Using Fenchel duality
theory, we construct a dual problem that is convex and can be solved
efficiently to obtain feasible bidding prices and allocation variables that can
be deployed in an RTB setting. With minimal additional assumptions on the
properties of the auctions, we demonstrate theoretically that our
computationally efficient procedure based on convex optimization principles is
guaranteed to deliver a globally optimal solution. We conduct experiments using
data from a real DSP to validate our theoretical findings and to demonstrate
that our method successfully trades off DSP profitability against budget
utilization in a simulated online environment.
",kuang-chih lee,,2018.0,,arXiv,Lobos2018,True,,arXiv,Not available,"Optimal Bidding, Allocation and Budget Spending for a Demand Side
Platform Under Many Auction Types",337486461d7ca1a9c4f51c638cb6f80f,http://arxiv.org/abs/1805.11645v1
16787," We investigate \emph{bi-valued} auctions in the digital good setting and
construct an explicit polynomial time deterministic auction. We prove an
unconditional tight lower bound which holds even for random superpolynomial
auctions. The analysis of the construction adopts the finer lens of
\emph{general competitiveness}, which considers additive losses on top of
multiplicative ones. The result implies that general competitiveness is the
right notion to use in this setting, as this optimal auction is uncompetitive
with respect to competitive measures which do not consider additive losses.
",oren ben-zwi,,2011.0,,arXiv,Ben-Zwi2011,True,,arXiv,Not available,Optimal Bi-Valued Auctions,db71c80df15fd91bc32930ed2524c0bd,http://arxiv.org/abs/1106.4677v1
16788," We investigate \emph{bi-valued} auctions in the digital good setting and
construct an explicit polynomial time deterministic auction. We prove an
unconditional tight lower bound which holds even for random superpolynomial
auctions. The analysis of the construction adopts the finer lens of
\emph{general competitiveness}, which considers additive losses on top of
multiplicative ones. The result implies that general competitiveness is the
right notion to use in this setting, as this optimal auction is uncompetitive
with respect to competitive measures which do not consider additive losses.
",ilan newman,,2011.0,,arXiv,Ben-Zwi2011,True,,arXiv,Not available,Optimal Bi-Valued Auctions,db71c80df15fd91bc32930ed2524c0bd,http://arxiv.org/abs/1106.4677v1
16789," Online auctions are fast gaining popularity in today's electronic commerce.
Relative to offline auctions, there is a greater degree of multiple bidding and
late bidding in online auctions, an empirical finding of recent research.
These two behaviors (multiple bidding and late bidding) are of ``strategic''
importance to online auctions and hence important to investigate. In this
article we empirically measure the distribution of bid timings and the extent
of multiple bidding in a large set of online auctions, using bidder experience
as a mediating variable. We use data from the popular auction site
\url{www.eBay.com} to investigate more than 10,000 auctions from 15 consumer
product categories. We estimate the distribution of late bidding and multiple
bidding, which allows us to place these product categories along a continuum of
these metrics (the extent of late bidding and the extent of multiple bidding).
Interestingly, the results of the analysis distinguish most of the product
categories from one another with respect to these metrics, implying that
product categories, after controlling for bidder experience, differ in the
extent of multiple bidding and late bidding observed in them. We also find a
nonmonotonic impact of bidder experience on the timing of bid placements.
Experienced bidders are ``more'' active either toward the close of the auction
or toward its start. The impact of experience on the extent of multiple
bidding, though, is monotonic across the auction interval; more experienced
bidders tend to indulge ``less'' in multiple bidding.
",sharad borle,,2006.0,10.1214/088342306000000123,"Statistical Science 2006, Vol. 21, No. 2, 194-205",Borle2006,True,,arXiv,Not available,"The Timing of Bid Placement and Extent of Multiple Bidding: An Empirical
Investigation Using eBay Online Auctions",65ad48b34aa3c7b9fd170c7dbe7389a6,http://arxiv.org/abs/math/0609194v1
16790," Online auctions are fast gaining popularity in today's electronic commerce.
Relative to offline auctions, there is a greater degree of multiple bidding and
late bidding in online auctions, an empirical finding of recent research.
These two behaviors (multiple bidding and late bidding) are of ``strategic''
importance to online auctions and hence important to investigate. In this
article we empirically measure the distribution of bid timings and the extent
of multiple bidding in a large set of online auctions, using bidder experience
as a mediating variable. We use data from the popular auction site
\url{www.eBay.com} to investigate more than 10,000 auctions from 15 consumer
product categories. We estimate the distribution of late bidding and multiple
bidding, which allows us to place these product categories along a continuum of
these metrics (the extent of late bidding and the extent of multiple bidding).
Interestingly, the results of the analysis distinguish most of the product
categories from one another with respect to these metrics, implying that
product categories, after controlling for bidder experience, differ in the
extent of multiple bidding and late bidding observed in them. We also find a
nonmonotonic impact of bidder experience on the timing of bid placements.
Experienced bidders are ``more'' active either toward the close of the auction
or toward its start. The impact of experience on the extent of multiple
bidding, though, is monotonic across the auction interval; more experienced
bidders tend to indulge ``less'' in multiple bidding.
",peter boatwright,,2006.0,10.1214/088342306000000123,"Statistical Science 2006, Vol. 21, No. 2, 194-205",Borle2006,True,,arXiv,Not available,"The Timing of Bid Placement and Extent of Multiple Bidding: An Empirical
Investigation Using eBay Online Auctions",65ad48b34aa3c7b9fd170c7dbe7389a6,http://arxiv.org/abs/math/0609194v1
16791," Online auctions are fast gaining popularity in today's electronic commerce.
Relative to offline auctions, there is a greater degree of multiple bidding and
late bidding in online auctions, an empirical finding of recent research.
These two behaviors (multiple bidding and late bidding) are of ``strategic''
importance to online auctions and hence important to investigate. In this
article we empirically measure the distribution of bid timings and the extent
of multiple bidding in a large set of online auctions, using bidder experience
as a mediating variable. We use data from the popular auction site
\url{www.eBay.com} to investigate more than 10,000 auctions from 15 consumer
product categories. We estimate the distribution of late bidding and multiple
bidding, which allows us to place these product categories along a continuum of
these metrics (the extent of late bidding and the extent of multiple bidding).
Interestingly, the results of the analysis distinguish most of the product
categories from one another with respect to these metrics, implying that
product categories, after controlling for bidder experience, differ in the
extent of multiple bidding and late bidding observed in them. We also find a
nonmonotonic impact of bidder experience on the timing of bid placements.
Experienced bidders are ``more'' active either toward the close of the auction
or toward its start. The impact of experience on the extent of multiple
bidding, though, is monotonic across the auction interval; more experienced
bidders tend to indulge ``less'' in multiple bidding.
",joseph kadane,,2006.0,10.1214/088342306000000123,"Statistical Science 2006, Vol. 21, No. 2, 194-205",Borle2006,True,,arXiv,Not available,"The Timing of Bid Placement and Extent of Multiple Bidding: An Empirical
Investigation Using eBay Online Auctions",65ad48b34aa3c7b9fd170c7dbe7389a6,http://arxiv.org/abs/math/0609194v1
16792," In this article we consider combinatorial markets with valuations only for
singletons and pairs of buy/sell-orders for swapping two items in equal
quantity. We provide an algorithm that permits polynomial time market-clearing
and -pricing. The results are presented in the context of our main application:
the futures opening auction problem.
Futures contracts are an important tool to mitigate market risk and
counterparty credit risk. In futures markets these contracts can be traded with
varying expiration dates and underlyings. A common hedging strategy is to roll
positions forward into the next expiration date; however, this strategy comes
with significant operational risk. To address this risk, exchanges started to
offer so-called futures contract combinations, which allow traders to swap
two futures contracts with different expiration dates or two futures contracts
with different underlyings. In theory, the price is in
both cases the difference of the two involved futures contracts. However, in
particular in the opening auctions price inefficiencies often occur due to
suboptimal clearing, leading to potential arbitrage opportunities.
We present a minimum cost flow formulation of the futures opening auction
problem that guarantees consistent prices. The core ideas are to model orders
as arcs in a network, to enforce the equilibrium conditions with the help of
two hierarchical objectives, and to combine these objectives into a single
weighted objective while preserving the price information of dual optimal
solutions. The resulting optimization problem can be solved in polynomial time
and computational tests establish an empirical performance suitable for
production environments.
",johannes muller,,2014.0,10.1007/s00186-016-0555-z,"Mathematical Methods of Operations Research, April 2017, Volume
85, Issue 2, pp 155-177",Müller2014,True,,arXiv,Not available,"Pricing and clearing combinatorial markets with singleton and swap
orders: Efficient algorithms for the futures opening auction problem",1d79bd6e44e20ec399ef5d1ac3d5c5c2,http://arxiv.org/abs/1404.6546v3
16793," In this article we consider combinatorial markets with valuations only for
singletons and pairs of buy/sell-orders for swapping two items in equal
quantity. We provide an algorithm that permits polynomial time market-clearing
and -pricing. The results are presented in the context of our main application:
the futures opening auction problem.
Futures contracts are an important tool to mitigate market risk and
counterparty credit risk. In futures markets these contracts can be traded with
varying expiration dates and underlyings. A common hedging strategy is to roll
positions forward into the next expiration date; however, this strategy comes
with significant operational risk. To address this risk, exchanges started to
offer so-called futures contract combinations, which allow traders to swap
two futures contracts with different expiration dates or two futures contracts
with different underlyings. In theory, the price is in
both cases the difference of the two involved futures contracts. However, in
particular in the opening auctions price inefficiencies often occur due to
suboptimal clearing, leading to potential arbitrage opportunities.
We present a minimum cost flow formulation of the futures opening auction
problem that guarantees consistent prices. The core ideas are to model orders
as arcs in a network, to enforce the equilibrium conditions with the help of
two hierarchical objectives, and to combine these objectives into a single
weighted objective while preserving the price information of dual optimal
solutions. The resulting optimization problem can be solved in polynomial time
and computational tests establish an empirical performance suitable for
production environments.
",sebastian pokutta,,2014.0,10.1007/s00186-016-0555-z,"Mathematical Methods of Operations Research, April 2017, Volume
85, Issue 2, pp 155-177",Müller2014,True,,arXiv,Not available,"Pricing and clearing combinatorial markets with singleton and swap
orders: Efficient algorithms for the futures opening auction problem",1d79bd6e44e20ec399ef5d1ac3d5c5c2,http://arxiv.org/abs/1404.6546v3
16794," In this article we consider combinatorial markets with valuations only for
singletons and pairs of buy/sell-orders for swapping two items in equal
quantity. We provide an algorithm that permits polynomial time market-clearing
and -pricing. The results are presented in the context of our main application:
the futures opening auction problem.
Futures contracts are an important tool to mitigate market risk and
counterparty credit risk. In futures markets these contracts can be traded with
varying expiration dates and underlyings. A common hedging strategy is to roll
positions forward into the next expiration date; however, this strategy comes
with significant operational risk. To address this risk, exchanges started to
offer so-called futures contract combinations, which allow traders to swap
two futures contracts with different expiration dates or two futures contracts
with different underlyings. In theory, the price is in
both cases the difference of the two involved futures contracts. However, in
particular in the opening auctions price inefficiencies often occur due to
suboptimal clearing, leading to potential arbitrage opportunities.
We present a minimum cost flow formulation of the futures opening auction
problem that guarantees consistent prices. The core ideas are to model orders
as arcs in a network, to enforce the equilibrium conditions with the help of
two hierarchical objectives, and to combine these objectives into a single
weighted objective while preserving the price information of dual optimal
solutions. The resulting optimization problem can be solved in polynomial time
and computational tests establish an empirical performance suitable for
production environments.
",alexander martin,,2014.0,10.1007/s00186-016-0555-z,"Mathematical Methods of Operations Research, April 2017, Volume
85, Issue 2, pp 155-177",Müller2014,True,,arXiv,Not available,"Pricing and clearing combinatorial markets with singleton and swap
orders: Efficient algorithms for the futures opening auction problem",1d79bd6e44e20ec399ef5d1ac3d5c5c2,http://arxiv.org/abs/1404.6546v3
16795," In this article we consider combinatorial markets with valuations only for
singletons and pairs of buy/sell-orders for swapping two items in equal
quantity. We provide an algorithm that permits polynomial time market-clearing
and -pricing. The results are presented in the context of our main application:
the futures opening auction problem.
Futures contracts are an important tool to mitigate market risk and
counterparty credit risk. In futures markets these contracts can be traded with
varying expiration dates and underlyings. A common hedging strategy is to roll
positions forward into the next expiration date; however, this strategy comes
with significant operational risk. To address this risk, exchanges started to
offer so-called futures contract combinations, which allow traders to swap
two futures contracts with different expiration dates or two futures contracts
with different underlyings. In theory, the price is in
both cases the difference of the two involved futures contracts. However, in
particular in the opening auctions price inefficiencies often occur due to
suboptimal clearing, leading to potential arbitrage opportunities.
We present a minimum cost flow formulation of the futures opening auction
problem that guarantees consistent prices. The core ideas are to model orders
as arcs in a network, to enforce the equilibrium conditions with the help of
two hierarchical objectives, and to combine these objectives into a single
weighted objective while preserving the price information of dual optimal
solutions. The resulting optimization problem can be solved in polynomial time
and computational tests establish an empirical performance suitable for
production environments.
",susanne pape,,2014.0,10.1007/s00186-016-0555-z,"Mathematical Methods of Operations Research, April 2017, Volume
85, Issue 2, pp 155-177",Müller2014,True,,arXiv,Not available,"Pricing and clearing combinatorial markets with singleton and swap
orders: Efficient algorithms for the futures opening auction problem",1d79bd6e44e20ec399ef5d1ac3d5c5c2,http://arxiv.org/abs/1404.6546v3
16796," The interaction of competing agents is described by classical game theory. It
is now well known that this can be extended to the quantum domain, where agents
obey the rules of quantum mechanics. This is of emerging interest for exploring
quantum foundations, quantum protocols, quantum auctions, quantum cryptography,
and the dynamics of quantum cryptocurrency, for example. In this paper, we
investigate two-player games in which a strategy pair can exist as a Nash
equilibrium when the games obey the rules of quantum mechanics. Using a
generalized Einstein-Podolsky-Rosen (EPR) setting for two-player quantum games,
and considering a particular strategy pair, we identify sets of games for which
the pair can exist as a Nash equilibrium only when Bell's inequality is
violated. We thus determine specific games for which the Nash inequality
becomes equivalent to Bell's inequality for the considered strategy pair.
",azhar iqbal,,2015.0,,"Physics Letters A, Vol. 382, Issue 40, pp 2908-2913 (2018)",Iqbal2015,True,,arXiv,Not available,"The equivalence of Bell's inequality and the Nash inequality in a
quantum game-theoretic setting",27c0cd284e04078d495498e1b9b21cbc,http://arxiv.org/abs/1507.07341v6
16797," In this article we consider combinatorial markets with valuations only for
singletons and pairs of buy/sell-orders for swapping two items in equal
quantity. We provide an algorithm that permits polynomial time market-clearing
and -pricing. The results are presented in the context of our main application:
the futures opening auction problem.
Futures contracts are an important tool to mitigate market risk and
counterparty credit risk. In futures markets these contracts can be traded with
varying expiration dates and underlyings. A common hedging strategy is to roll
positions forward into the next expiration date; however, this strategy comes
with significant operational risk. To address this risk, exchanges started to
offer so-called futures contract combinations, which allow traders to swap
two futures contracts with different expiration dates or two futures contracts
with different underlyings. In theory, the price is in
both cases the difference of the two involved futures contracts. However, in
particular in the opening auctions price inefficiencies often occur due to
suboptimal clearing, leading to potential arbitrage opportunities.
We present a minimum cost flow formulation of the futures opening auction
problem that guarantees consistent prices. The core ideas are to model orders
as arcs in a network, to enforce the equilibrium conditions with the help of
two hierarchical objectives, and to combine these objectives into a single
weighted objective while preserving the price information of dual optimal
solutions. The resulting optimization problem can be solved in polynomial time
and computational tests establish an empirical performance suitable for
production environments.
",andrea peter,,2014.0,10.1007/s00186-016-0555-z,"Mathematical Methods of Operations Research, April 2017, Volume
85, Issue 2, pp 155-177",Müller2014,True,,arXiv,Not available,"Pricing and clearing combinatorial markets with singleton and swap
orders: Efficient algorithms for the futures opening auction problem",1d79bd6e44e20ec399ef5d1ac3d5c5c2,http://arxiv.org/abs/1404.6546v3
16798," In this article we consider combinatorial markets with valuations only for
singletons and pairs of buy/sell-orders for swapping two items in equal
quantity. We provide an algorithm that permits polynomial time market-clearing
and -pricing. The results are presented in the context of our main application:
the futures opening auction problem.
Futures contracts are an important tool to mitigate market risk and
counterparty credit risk. In futures markets these contracts can be traded with
varying expiration dates and underlyings. A common hedging strategy is to roll
positions forward into the next expiration date; however, this strategy comes
with significant operational risk. To address this risk, exchanges started to
offer so-called futures contract combinations, which allow traders to swap
two futures contracts with different expiration dates or two futures contracts
with different underlyings. In theory, the price is in
both cases the difference of the two involved futures contracts. However, in
particular in the opening auctions price inefficiencies often occur due to
suboptimal clearing, leading to potential arbitrage opportunities.
We present a minimum cost flow formulation of the futures opening auction
problem that guarantees consistent prices. The core ideas are to model orders
as arcs in a network, to enforce the equilibrium conditions with the help of
two hierarchical objectives, and to combine these objectives into a single
weighted objective while preserving the price information of dual optimal
solutions. The resulting optimization problem can be solved in polynomial time
and computational tests establish an empirical performance suitable for
production environments.
",thomas winter,,2014.0,10.1007/s00186-016-0555-z,"Mathematical Methods of Operations Research, April 2017, Volume
85, Issue 2, pp 155-177",Müller2014,True,,arXiv,Not available,"Pricing and clearing combinatorial markets with singleton and swap
orders: Efficient algorithms for the futures opening auction problem",1d79bd6e44e20ec399ef5d1ac3d5c5c2,http://arxiv.org/abs/1404.6546v3
16799," Combinatorial auctions are formulated as frustrated lattice gases on sparse
random graphs, allowing the determination of the optimal revenue by methods of
statistical physics. Transitions between computationally easy and hard regimes
are found and interpreted in terms of the geometric structure of the space of
solutions. We introduce an iterative algorithm to solve intermediate and large
instances, and discuss competing states of optimal revenue and maximal number
of satisfied bidders. The algorithm can be generalized to the hard phase and to
more sophisticated auction protocols.
",tobias galla,,2006.0,10.1103/PhysRevLett.97.128701,"Phys. Rev. Lett. 97, 128701 (2006)",Galla2006,True,,arXiv,Not available,Statistical mechanics of combinatorial auctions,4f7bd033fa047b2ea97dd2a6f3792a55,http://arxiv.org/abs/cond-mat/0605623v2
16800," Combinatorial auctions are formulated as frustrated lattice gases on sparse
random graphs, allowing the determination of the optimal revenue by methods of
statistical physics. Transitions between computationally easy and hard regimes
are found and interpreted in terms of the geometric structure of the space of
solutions. We introduce an iterative algorithm to solve intermediate and large
instances, and discuss competing states of optimal revenue and maximal number
of satisfied bidders. The algorithm can be generalized to the hard phase and to
more sophisticated auction protocols.
",michele leone,,2006.0,10.1103/PhysRevLett.97.128701,"Phys. Rev. Lett. 97, 128701 (2006)",Galla2006,True,,arXiv,Not available,Statistical mechanics of combinatorial auctions,4f7bd033fa047b2ea97dd2a6f3792a55,http://arxiv.org/abs/cond-mat/0605623v2
16801," Combinatorial auctions are formulated as frustrated lattice gases on sparse
random graphs, allowing the determination of the optimal revenue by methods of
statistical physics. Transitions between computationally easy and hard regimes
are found and interpreted in terms of the geometric structure of the space of
solutions. We introduce an iterative algorithm to solve intermediate and large
instances, and discuss competing states of optimal revenue and maximal number
of satisfied bidders. The algorithm can be generalized to the hard phase and to
more sophisticated auction protocols.
",matteo marsili,,2006.0,10.1103/PhysRevLett.97.128701,"Phys. Rev. Lett. 97, 128701 (2006)",Galla2006,True,,arXiv,Not available,Statistical mechanics of combinatorial auctions,4f7bd033fa047b2ea97dd2a6f3792a55,http://arxiv.org/abs/cond-mat/0605623v2
16802," Combinatorial auctions are formulated as frustrated lattice gases on sparse
random graphs, allowing the determination of the optimal revenue by methods of
statistical physics. Transitions between computationally easy and hard regimes
are found and interpreted in terms of the geometric structure of the space of
solutions. We introduce an iterative algorithm to solve intermediate and large
instances, and discuss competing states of optimal revenue and maximal number
of satisfied bidders. The algorithm can be generalized to the hard phase and to
more sophisticated auction protocols.
",mauro sellitto,,2006.0,10.1103/PhysRevLett.97.128701,"Phys. Rev. Lett. 97, 128701 (2006)",Galla2006,True,,arXiv,Not available,Statistical mechanics of combinatorial auctions,4f7bd033fa047b2ea97dd2a6f3792a55,http://arxiv.org/abs/cond-mat/0605623v2
16803," Combinatorial auctions are formulated as frustrated lattice gases on sparse
random graphs, allowing the determination of the optimal revenue by methods of
statistical physics. Transitions between computationally easy and hard regimes
are found and interpreted in terms of the geometric structure of the space of
solutions. We introduce an iterative algorithm to solve intermediate and large
instances, and discuss competing states of optimal revenue and maximal number
of satisfied bidders. The algorithm can be generalized to the hard phase and to
more sophisticated auction protocols.
",martin weigt,,2006.0,10.1103/PhysRevLett.97.128701,"Phys. Rev. Lett. 97, 128701 (2006)",Galla2006,True,,arXiv,Not available,Statistical mechanics of combinatorial auctions,4f7bd033fa047b2ea97dd2a6f3792a55,http://arxiv.org/abs/cond-mat/0605623v2
16804," Combinatorial auctions are formulated as frustrated lattice gases on sparse
random graphs, allowing the determination of the optimal revenue by methods of
statistical physics. Transitions between computationally easy and hard regimes
are found and interpreted in terms of the geometric structure of the space of
solutions. We introduce an iterative algorithm to solve intermediate and large
instances, and discuss competing states of optimal revenue and maximal number
of satisfied bidders. The algorithm can be generalized to the hard phase and to
more sophisticated auction protocols.
",riccardo zecchina,,2006.0,10.1103/PhysRevLett.97.128701,"Phys. Rev. Lett. 97, 128701 (2006)",Galla2006,True,,arXiv,Not available,Statistical mechanics of combinatorial auctions,4f7bd033fa047b2ea97dd2a6f3792a55,http://arxiv.org/abs/cond-mat/0605623v2
16805," We characterize the statistical properties of a large number of online
auctions run on eBay. Both stationary and dynamic properties, like
distributions of prices, number of bids etc., as well as relations between
these quantities are studied. The analysis of the data reveals surprisingly
simple distributions and relations, typically of power-law form. Based on these
findings we introduce a simple method to identify suspicious auctions that
could be influenced by a form of fraud known as shill bidding. Furthermore, the
influence of bidding strategies is discussed. The results indicate that the
observed behavior is related to a mixture of agents using a variety of
strategies.
",alireza namazi,,2006.0,10.1142/S012918310600993X,arXiv,Namazi2006,True,,arXiv,Not available,Statistical properties of online auctions,d1b08ad7d1d5fee6440bdadb247d3b02,http://arxiv.org/abs/physics/0608232v1
16806," We characterize the statistical properties of a large number of online
auctions run on eBay. Both stationary and dynamic properties, like
distributions of prices, number of bids etc., as well as relations between
these quantities are studied. The analysis of the data reveals surprisingly
simple distributions and relations, typically of power-law form. Based on these
findings we introduce a simple method to identify suspicious auctions that
could be influenced by a form of fraud known as shill bidding. Furthermore, the
influence of bidding strategies is discussed. The results indicate that the
observed behavior is related to a mixture of agents using a variety of
strategies.
",andreas schadschneider,,2006.0,10.1142/S012918310600993X,arXiv,Namazi2006,True,,arXiv,Not available,Statistical properties of online auctions,d1b08ad7d1d5fee6440bdadb247d3b02,http://arxiv.org/abs/physics/0608232v1
16807," The interaction of competing agents is described by classical game theory. It
is now well known that this can be extended to the quantum domain, where agents
obey the rules of quantum mechanics. This is of emerging interest for exploring
quantum foundations, quantum protocols, quantum auctions, quantum cryptography,
and the dynamics of quantum cryptocurrency, for example. In this paper, we
investigate two-player games in which a strategy pair can exist as a Nash
equilibrium when the games obey the rules of quantum mechanics. Using a
generalized Einstein-Podolsky-Rosen (EPR) setting for two-player quantum games,
and considering a particular strategy pair, we identify sets of games for which
the pair can exist as a Nash equilibrium only when Bell's inequality is
violated. We thus determine specific games for which the Nash inequality
becomes equivalent to Bell's inequality for the considered strategy pair.
",james chappell,,2015.0,,"Physics Letters A, Vol. 382, Issue 40, pp 2908-2913 (2018)",Iqbal2015,True,,arXiv,Not available,"The equivalence of Bell's inequality and the Nash inequality in a
quantum game-theoretic setting",27c0cd284e04078d495498e1b9b21cbc,http://arxiv.org/abs/1507.07341v6
16808," Creating a loyal customer base is one of the most important, and at the same
time, most difficult tasks a company faces. Creating loyalty online (e-loyalty)
is especially difficult since customers can ``switch'' to a competitor with the
click of a mouse. In this paper we investigate e-loyalty in online auctions.
Using a unique data set of over 30,000 auctions from one of the main
consumer-to-consumer online auction houses, we propose a novel measure of
e-loyalty via the associated network of transactions between bidders and
sellers. Using a bipartite network of bidder and seller nodes, two nodes are
linked when a bidder purchases from a seller and the number of repeat-purchases
determines the strength of that link. We employ ideas from functional principal
component analysis to derive, from this network, the loyalty distribution which
measures the perceived loyalty of every individual seller, and associated
loyalty scores which summarize this distribution in a parsimonious way. We then
investigate the effect of loyalty on the outcome of an auction. In doing so, we
are confronted with several statistical challenges in that standard statistical
models lead to a misrepresentation of the data and a violation of the model
assumptions. The reason is that loyalty networks result in an extreme
clustering of the data, with few high-volume sellers accounting for most of the
individual transactions. We investigate several remedies to the clustering
problem and conclude that loyalty networks consist of very distinct segments
that can best be understood individually.
",wolfgang jank,,2010.0,10.1214/09-AOAS310,"Annals of Applied Statistics 2010, Vol. 4, No. 1, 151-178",Jank2010,True,,arXiv,Not available,E-loyalty networks in online auctions,625f0874b1fb6469b07d3439afb23a04,http://arxiv.org/abs/1010.1636v1
16809," Creating a loyal customer base is one of the most important, and at the same
time, most difficult tasks a company faces. Creating loyalty online (e-loyalty)
is especially difficult since customers can ``switch'' to a competitor with the
click of a mouse. In this paper we investigate e-loyalty in online auctions.
Using a unique data set of over 30,000 auctions from one of the main
consumer-to-consumer online auction houses, we propose a novel measure of
e-loyalty via the associated network of transactions between bidders and
sellers. Using a bipartite network of bidder and seller nodes, two nodes are
linked when a bidder purchases from a seller and the number of repeat-purchases
determines the strength of that link. We employ ideas from functional principal
component analysis to derive, from this network, the loyalty distribution which
measures the perceived loyalty of every individual seller, and associated
loyalty scores which summarize this distribution in a parsimonious way. We then
investigate the effect of loyalty on the outcome of an auction. In doing so, we
are confronted with several statistical challenges in that standard statistical
models lead to a misrepresentation of the data and a violation of the model
assumptions. The reason is that loyalty networks result in an extreme
clustering of the data, with few high-volume sellers accounting for most of the
individual transactions. We investigate several remedies to the clustering
problem and conclude that loyalty networks consist of very distinct segments
that can best be understood individually.
",inbal yahav,,2010.0,10.1214/09-AOAS310,"Annals of Applied Statistics 2010, Vol. 4, No. 1, 151-178",Jank2010,True,,arXiv,Not available,E-loyalty networks in online auctions,625f0874b1fb6469b07d3439afb23a04,http://arxiv.org/abs/1010.1636v1
16814," Internet search results are a growing and highly profitable advertising
platform. Search providers auction advertising slots to advertisers on their
search result pages. Due to the high volume of searches and the users' low
tolerance for search result latency, it is imperative to resolve these auctions
fast. Current approaches restrict the expressiveness of bids in order to
achieve fast winner determination, which is the problem of allocating slots to
advertisers so as to maximize the expected revenue given the advertisers' bids.
The goal of our work is to permit more expressive bidding, thus allowing
advertisers to achieve complex advertising goals, while still providing fast
and scalable techniques for winner determination.
",david martin,,2008.0,10.1109/ICDE.2008.4497432,"David J. Martin, Johannes Gehrke, and Joseph Y. Halpern. Toward
Expressive and Scalable Sponsored Search Auctions. In Proceedings of the 24th
IEEE International Conference on Data Engineering, pages 237--246. April 2008",Martin2008,True,,arXiv,Not available,Toward Expressive and Scalable Sponsored Search Auctions,53d3266e5b4d56b9a811f5ec95f8067f,http://arxiv.org/abs/0809.0116v1
16815," Internet search results are a growing and highly profitable advertising
platform. Search providers auction advertising slots to advertisers on their
search result pages. Due to the high volume of searches and the users' low
tolerance for search result latency, it is imperative to resolve these auctions
fast. Current approaches restrict the expressiveness of bids in order to
achieve fast winner determination, which is the problem of allocating slots to
advertisers so as to maximize the expected revenue given the advertisers' bids.
The goal of our work is to permit more expressive bidding, thus allowing
advertisers to achieve complex advertising goals, while still providing fast
and scalable techniques for winner determination.
",johannes gehrke,,2008.0,10.1109/ICDE.2008.4497432,"David J. Martin, Johannes Gehrke, and Joseph Y. Halpern. Toward
Expressive and Scalable Sponsored Search Auctions. In Proceedings of the 24th
IEEE International Conference on Data Engineering, pages 237--246. April 2008",Martin2008,True,,arXiv,Not available,Toward Expressive and Scalable Sponsored Search Auctions,53d3266e5b4d56b9a811f5ec95f8067f,http://arxiv.org/abs/0809.0116v1
16816," Internet search results are a growing and highly profitable advertising
platform. Search providers auction advertising slots to advertisers on their
search result pages. Due to the high volume of searches and the users' low
tolerance for search result latency, it is imperative to resolve these auctions
fast. Current approaches restrict the expressiveness of bids in order to
achieve fast winner determination, which is the problem of allocating slots to
advertisers so as to maximize the expected revenue given the advertisers' bids.
The goal of our work is to permit more expressive bidding, thus allowing
advertisers to achieve complex advertising goals, while still providing fast
and scalable techniques for winner determination.
",joseph halpern,,2008.0,10.1109/ICDE.2008.4497432,"David J. Martin, Johannes Gehrke, and Joseph Y. Halpern. Toward
Expressive and Scalable Sponsored Search Auctions. In Proceedings of the 24th
IEEE International Conference on Data Engineering, pages 237--246. April 2008",Martin2008,True,,arXiv,Not available,Toward Expressive and Scalable Sponsored Search Auctions,53d3266e5b4d56b9a811f5ec95f8067f,http://arxiv.org/abs/0809.0116v1
16817," In this note, we show that any distributive lattice is isomorphic to the set
of reachable configurations of an Edge Firing Game. Together with the result of
James Propp, saying that the set of reachable configurations of any Edge Firing
Game is always a distributive lattice, this shows that the two concepts are
equivalent.
",matthieu latapy,,2001.0,,arXiv,Latapy2001,True,,arXiv,Not available,Coding Distributive Lattices with Edge Firing Games,f5fc378124853fb68ef9f3952d70298e,http://arxiv.org/abs/math/0110214v1
16818," The interaction of competing agents is described by classical game theory. It
is now well known that this can be extended to the quantum domain, where agents
obey the rules of quantum mechanics. This is of emerging interest for exploring
quantum foundations, quantum protocols, quantum auctions, quantum cryptography,
and the dynamics of quantum cryptocurrency, for example. In this paper, we
investigate two-player games in which a strategy pair can exist as a Nash
equilibrium when the games obey the rules of quantum mechanics. Using a
generalized Einstein-Podolsky-Rosen (EPR) setting for two-player quantum games,
and considering a particular strategy pair, we identify sets of games for which
the pair can exist as a Nash equilibrium only when Bell's inequality is
violated. We thus determine specific games for which the Nash inequality
becomes equivalent to Bell's inequality for the considered strategy pair.
",derek abbott,,2015.0,,"Physics Letters A, Vol. 382, Issue 40, pp 2908-2913 (2018)",Iqbal2015,True,,arXiv,Not available,"The equivalence of Bell's inequality and the Nash inequality in a
quantum game-theoretic setting",27c0cd284e04078d495498e1b9b21cbc,http://arxiv.org/abs/1507.07341v6
16819," In this note, we show that any distributive lattice is isomorphic to the set
of reachable configurations of an Edge Firing Game. Together with the result of
James Propp, saying that the set of reachable configurations of any Edge Firing
Game is always a distributive lattice, this shows that the two concepts are
equivalent.
",clemence magnien,,2001.0,,arXiv,Latapy2001,True,,arXiv,Not available,Coding Distributive Lattices with Edge Firing Games,f5fc378124853fb68ef9f3952d70298e,http://arxiv.org/abs/math/0110214v1
16820," This essay gives a self-contained introduction to quantum game theory, and is
primarily oriented to economists with little or no acquaintance with quantum
mechanics. It assumes little more than a basic knowledge of vector algebra.
Quantum mechanical notation and results are introduced as needed. It is also
shown that some fundamental problems of quantum mechanics can be formulated as
games.
",j. grabbe,,2005.0,,arXiv,Grabbe2005,True,,arXiv,Not available,An Introduction to Quantum Game Theory,6ac060522072be5e529a77a0f681132f,http://arxiv.org/abs/quant-ph/0506219v1
16821," Entanglement is a digraph complexity measure that originates in fixed-point
theory. Its purpose is to count the nested depth of cycles in digraphs.
In this paper we prove that the class of undirected graphs of entanglement at
most $k$, for arbitrary fixed $k \in \mathbb{N}$, is closed under taking
minors. Our proof relies on the game theoretic characterization of entanglement
in terms of Robber and Cops games.
",walid belkhir,,2009.0,,arXiv,Belkhir2009,True,,arXiv,Not available,Closure Under Minors of Undirected Entanglement,49721a6d645bbda26c55b66aa7c8f0ff,http://arxiv.org/abs/0904.1703v1
16822," We review and develop a selection of models of systems with competition and
cooperation, with origins in economics, where deep insights can be obtained by
the mathematical methods of game theory. Some of these models were touched upon
in authors' book 'Understanding Game Theory', World Scientific 2010, where also
the necessary background on games can be found.
",vassili kolokoltsov,,2012.0,,arXiv,Kolokoltsov2012,True,,arXiv,Not available,On some models of many agent systems with competition and cooperation,36e2d994a2a77e1fc1cde6e3503b2838,http://arxiv.org/abs/1201.1745v1
16823," We review and develop a selection of models of systems with competition and
cooperation, with origins in economics, where deep insights can be obtained by
the mathematical methods of game theory. Some of these models were touched upon
in authors' book 'Understanding Game Theory', World Scientific 2010, where also
the necessary background on games can be found.
",oleg malafeyev,,2012.0,,arXiv,Kolokoltsov2012,True,,arXiv,Not available,On some models of many agent systems with competition and cooperation,36e2d994a2a77e1fc1cde6e3503b2838,http://arxiv.org/abs/1201.1745v1
16824," This report presents the results of applying different compression algorithms
to the network protocol of an online game. The algorithm implementations
compared are zlib, liblzma and my own implementation based on LZ77 and a
variation of adaptive Huffman coding. The comparison data was collected from
the game TomeNET. The results show that adaptive coding is especially useful
for compressing large amounts of very small packets.
",mikael hirki,,2012.0,,arXiv,Hirki2012,True,,arXiv,Not available,Applying Compression to a Game's Network Protocol,2087dee55351a178d658b8940be896ca,http://arxiv.org/abs/1206.2362v1
16825," We investigate the degree of discontinuity of several solution concepts from
non-cooperative game theory. While the consideration of Nash equilibria forms
the core of our work, pure and correlated equilibria are also dealt with.
Formally, we restrict the treatment to two player games, but results and proofs
extend to the n-player case. As a side result, the degree of discontinuity of
solving systems of linear inequalities is settled.
",arno pauly,,2009.0,,arXiv,Pauly2009,True,,arXiv,Not available,How discontinuous is Computing Nash Equilibria?,05a723f98e662977051f1ffcb6e60198,http://arxiv.org/abs/0907.1482v1
16826," We show that the long time average of solutions of first order mean field
game systems in finite horizon is governed by an ergodic system of mean field
game type. The well-posedness of this latter system and the uniqueness of the
ergodic constant rely on weak KAM theory.
",pierre cardaliaguet,,2013.0,,arXiv,Cardaliaguet2013,True,,arXiv,Not available,Long time average of first order mean field games and weak KAM theory,a9c8f9119f8c66fa1bb9597b6aa84fbe,http://arxiv.org/abs/1305.7012v1
16827," Recently Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li and
Frank Stephan proposed a quasi-polynomial time algorithm for parity games. This
paper proposes a short proof of correctness of their algorithm.
",hugo gimbert,,2017.0,,arXiv,Gimbert2017,True,,arXiv,Not available,"A short proof of correctness of the quasi-polynomial time algorithm for
parity games",bbd4abc8c9ce4dfbcc387d0c17545970,http://arxiv.org/abs/1702.01953v4
16828," Recently Cristian S. Calude, Sanjay Jain, Bakhadyr Khoussainov, Wei Li and
Frank Stephan proposed a quasi-polynomial time algorithm for parity games. This
paper proposes a short proof of correctness of their algorithm.
",rasmus ibsen-jensen,,2017.0,,arXiv,Gimbert2017,True,,arXiv,Not available,"A short proof of correctness of the quasi-polynomial time algorithm for
parity games",bbd4abc8c9ce4dfbcc387d0c17545970,http://arxiv.org/abs/1702.01953v4
16829," In evolutionary game theory, an important measure of a mutant trait
(strategy) is its ability to invade and take over an otherwise-monomorphic
population. Typically, one quantifies the success of a mutant strategy via the
probability that a randomly occurring mutant will fixate in the population.
However, in a structured population, this fixation probability may depend on
where the mutant arises. Moreover, the fixation probability is just one
quantity by which one can measure the success of a mutant; fixation time, for
instance, is another. We define a notion of homogeneity for evolutionary games
that captures what it means for two single-mutant states, i.e. two
configurations of a single mutant in an otherwise-monomorphic population, to be
""evolutionarily equivalent"" in the sense that all measures of evolutionary
success are the same for both configurations. Using asymmetric games, we argue
that the term ""homogeneous"" should apply to the evolutionary process as a whole
rather than to just the population structure. For evolutionary matrix games in
graph-structured populations, we give precise conditions under which the
resulting process is homogeneous. Finally, we show that asymmetric matrix games
can be reduced to symmetric games if the population structure possesses a
sufficient degree of symmetry.
",alex mcavoy,,2015.0,10.1098/rsif.2015.0420,"Journal of the Royal Society Interface vol. 12 no. 111, 20150420
(2015)",McAvoy2015,True,,arXiv,Not available,Structural symmetry in evolutionary games,0d99dac44034ab9a6924e209afbdade9,http://arxiv.org/abs/1509.03777v1
16830," Combinatorial games lead to several interesting, clean problems in algorithms
and complexity theory, many of which remain open. The purpose of this paper is
to provide an overview of the area to encourage further research. In
particular, we begin with general background in Combinatorial Game Theory,
which analyzes ideal play in perfect-information games, and Constraint Logic,
which provides a framework for showing hardness. Then we survey results about
the complexity of determining ideal play in these games, and the related
problems of solving puzzles, in terms of both polynomial-time algorithms and
computational intractability results. Our review of background and survey of
algorithmic results are by no means complete, but should serve as a useful
primer.
",erik demaine,,2001.0,,arXiv,Demaine2001,True,,arXiv,Not available,Playing Games with Algorithms: Algorithmic Combinatorial Game Theory,54f78424c0a04e0e865a18ee82fe8b85,http://arxiv.org/abs/cs/0106019v2
16831," Combinatorial games lead to several interesting, clean problems in algorithms
and complexity theory, many of which remain open. The purpose of this paper is
to provide an overview of the area to encourage further research. In
particular, we begin with general background in Combinatorial Game Theory,
which analyzes ideal play in perfect-information games, and Constraint Logic,
which provides a framework for showing hardness. Then we survey results about
the complexity of determining ideal play in these games, and the related
problems of solving puzzles, in terms of both polynomial-time algorithms and
computational intractability results. Our review of background and survey of
algorithmic results are by no means complete, but should serve as a useful
primer.
",robert hearn,,2001.0,,arXiv,Demaine2001,True,,arXiv,Not available,Playing Games with Algorithms: Algorithmic Combinatorial Game Theory,54f78424c0a04e0e865a18ee82fe8b85,http://arxiv.org/abs/cs/0106019v2
16832," In recent years methods have been proposed to extend classical game theory
into the quantum domain. This paper explores further extensions of these ideas
that may have a substantial potential for further research. Upon reformulating
quantum game theory as a theory of classical games played by ""quantum players"",
I take a constructive approach. The roles of the players and the arbiter are
investigated for clues on the nature of the quantum game space.
Upon examination of the role of the arbiter, a possible non-commutative
nature of pay-off operators can be deduced. I investigate a sub-class of games
in which the pay-off operators satisfy non-trivial commutation relations.
Non-abelian pay-off operators can be used to generate whole families of quantum
games.
",f. witte,,2002.0,,arXiv,Witte2002,True,,arXiv,Not available,On Pay-off induced Quantum Games,4cf70b723b1968c30eca8ab0b454835a,http://arxiv.org/abs/quant-ph/0208171v1
16833," In evolutionary game theory, an important measure of a mutant trait
(strategy) is its ability to invade and take over an otherwise-monomorphic
population. Typically, one quantifies the success of a mutant strategy via the
probability that a randomly occurring mutant will fixate in the population.
However, in a structured population, this fixation probability may depend on
where the mutant arises. Moreover, the fixation probability is just one
quantity by which one can measure the success of a mutant; fixation time, for
instance, is another. We define a notion of homogeneity for evolutionary games
that captures what it means for two single-mutant states, i.e. two
configurations of a single mutant in an otherwise-monomorphic population, to be
""evolutionarily equivalent"" in the sense that all measures of evolutionary
success are the same for both configurations. Using asymmetric games, we argue
that the term ""homogeneous"" should apply to the evolutionary process as a whole
rather than to just the population structure. For evolutionary matrix games in
graph-structured populations, we give precise conditions under which the
resulting process is homogeneous. Finally, we show that asymmetric matrix games
can be reduced to symmetric games if the population structure possesses a
sufficient degree of symmetry.
",christoph hauert,,2015.0,10.1098/rsif.2015.0420,"Journal of the Royal Society Interface vol. 12 no. 111, 20150420
(2015)",McAvoy2015,True,,arXiv,Not available,Structural symmetry in evolutionary games,0d99dac44034ab9a6924e209afbdade9,http://arxiv.org/abs/1509.03777v1
16834," Crowdscience games may hold unique potentials as learning opportunities
compared to games made for fun or education. They are part of an actual science
problem solving process: By playing, players help scientists, and thereby
interact with real continuous research processes. This mixes the two worlds of
play and science in new ways. During usability testing we discovered that users
of the crowdscience game Quantum Dreams tended to answer questions in game
terms, even when directed explicitly to give science explanations. We then
examined these competing frames of understanding through a mixed correlational
and grounded theory analysis. This essay presents the core ideas of
crowdscience games as learning opportunities, and reports how a group of
players used ""game"", ""science"" and ""conceptual"" frames to interpret their
experience. Our results suggest that oscillating between the frames instead of
sticking to just one led to the largest number of correct science
interpretations, as players could participate legitimately and autonomously at
multiple levels of understanding.
",andreas lieberoth,,2015.0,,"Well Played 4(2), 30 (2015)",Lieberoth2015,True,,arXiv,Not available,Play or science?: a study of learning and framing in crowdscience games,12098faed55fc0d22650c38e29627272,http://arxiv.org/abs/1510.06841v1
16835," Crowdscience games may hold unique potentials as learning opportunities
compared to games made for fun or education. They are part of an actual science
problem solving process: By playing, players help scientists, and thereby
interact with real continuous research processes. This mixes the two worlds of
play and science in new ways. During usability testing we discovered that users
of the crowdscience game Quantum Dreams tended to answer questions in game
terms, even when directed explicitly to give science explanations. We then
examined these competing frames of understanding through a mixed correlational
and grounded theory analysis. This essay presents the core ideas of
crowdscience games as learning opportunities, and reports how a group of
players used ""game"", ""science"" and ""conceptual"" frames to interpret their
experience. Our results suggest that oscillating between the frames instead of
sticking to just one led to the largest number of correct science
interpretations, as players could participate legitimately and autonomously at
multiple levels of understanding.
",mads pedersen,,2015.0,,"Well Played 4(2), 30 (2015)",Lieberoth2015,True,,arXiv,Not available,Play or science?: a study of learning and framing in crowdscience games,12098faed55fc0d22650c38e29627272,http://arxiv.org/abs/1510.06841v1
16836," Crowdscience games may hold unique potentials as learning opportunities
compared to games made for fun or education. They are part of an actual science
problem solving process: By playing, players help scientists, and thereby
interact with real continuous research processes. This mixes the two worlds of
play and science in new ways. During usability testing we discovered that users
of the crowdscience game Quantum Dreams tended to answer questions in game
terms, even when directed explicitly to give science explanations. We then
examined these competing frames of understanding through a mixed correlational
and grounded theory analysis. This essay presents the core ideas of
crowdscience games as learning opportunities, and reports how a group of
players used ""game"", ""science"" and ""conceptual"" frames to interpret their
experience. Our results suggest that oscillating between the frames instead of
sticking to just one led to the largest number of correct science
interpretations, as players could participate legitimately and autonomously at
multiple levels of understanding.
",jacob sherson,,2015.0,,"Well Played 4(2), 30 (2015)",Lieberoth2015,True,,arXiv,Not available,Play or science?: a study of learning and framing in crowdscience games,12098faed55fc0d22650c38e29627272,http://arxiv.org/abs/1510.06841v1
16837," Electric boolean games are compact representations of games where the players
have qualitative objectives described by LTL formulae and have limited
resources. We study the complexity of several decision problems related to the
analysis of rationality in electric boolean games with LTL objectives. In
particular, we report that the problem of deciding whether a profile is a Nash
equilibrium in an iterated electric boolean game is no harder than in iterated
boolean games without resource bounds. We show that it is a PSPACE-complete
problem. As a corollary, we obtain that both rational elimination and rational
construction of Nash equilibria by a supervising authority are PSPACE-complete
problems.
",youssouf oualhadj,,2016.0,10.4204/EPTCS.218.4,"EPTCS 218, 2016, pp. 41-51",Oualhadj2016,True,,arXiv,Not available,Rational Verification in Iterated Electric Boolean Games,591b9da0055eb5826266122024bd8d87,http://arxiv.org/abs/1604.03773v2
16838," We present a new form of a Parrondo game using discrete-time quantum walk on
a line. The two players A and B with different quantum coins operators,
individually losing the game can develop a strategy to emerge as joint winners
by using their coins alternatively, or in combination for each step of the
quantum walk evolution. We also present a strategy for a player A (B) to have a
winning probability higher than player B (A). The significance of the game strategy
for information theory and physical applications is also discussed.
",subhashish banerjee,,2010.0,10.1016/j.physleta.2011.02.071,"Physics Letters A 375 (2011), pp. 1553-1558",Chandrashekar2010,True,,arXiv,Not available,Parrondo's game using a discrete-time quantum walk,ee29402383b7e8d780d57f3d0e386dae,http://arxiv.org/abs/1008.5121v2
16839," Electric boolean games are compact representations of games where the players
have qualitative objectives described by LTL formulae and have limited
resources. We study the complexity of several decision problems related to the
analysis of rationality in electric boolean games with LTL objectives. In
particular, we report that the problem of deciding whether a profile is a Nash
equilibrium in an iterated electric boolean game is no harder than in iterated
boolean games without resource bounds. We show that it is a PSPACE-complete
problem. As a corollary, we obtain that both rational elimination and rational
construction of Nash equilibria by a supervising authority are PSPACE-complete
problems.
",nicolas troquard,,2016.0,10.4204/EPTCS.218.4,"EPTCS 218, 2016, pp. 41-51",Oualhadj2016,True,,arXiv,Not available,Rational Verification in Iterated Electric Boolean Games,591b9da0055eb5826266122024bd8d87,http://arxiv.org/abs/1604.03773v2
16840," In this paper we study how to play (stochastic) games optimally using little
space. We focus on repeated games with absorbing states, a type of two-player,
zero-sum concurrent mean-payoff games. The prototypical example of these games
is the well-known Big Match of Gillette (1957). These games may not allow
optimal strategies but they always have {\epsilon}-optimal strategies. In this
paper we design {\epsilon}-optimal strategies for Player 1 in these games that
use only O(log log T) space. Furthermore, we construct strategies for Player 1
that use space s(T), for an arbitrarily small unbounded non-decreasing function
s, and which guarantee an {\epsilon}-optimal value for Player 1 in the limit
superior sense. The previously known strategies use space {\Omega}(log T) and it
was known that no strategy can use constant space if it is {\epsilon}-optimal
even in the limit superior sense. We also give a complementary lower bound.
Furthermore, we also show that no Markov strategy, even extended with finite
memory, can ensure value greater than 0 in the Big Match, answering a question
posed by Abraham Neyman.
",kristoffer hansen,,2016.0,,arXiv,Hansen2016,True,,arXiv,Not available,The Big Match in Small Space,d6f1e1854217fe432ce0645daeed41f2,http://arxiv.org/abs/1604.07634v1
16841," In this paper we study how to play (stochastic) games optimally using little
space. We focus on repeated games with absorbing states, a type of two-player,
zero-sum concurrent mean-payoff games. The prototypical example of these games
is the well-known Big Match of Gillette (1957). These games may not allow
optimal strategies but they always have {\epsilon}-optimal strategies. In this
paper we design {\epsilon}-optimal strategies for Player 1 in these games that
use only O(log log T) space. Furthermore, we construct strategies for Player 1
that use space s(T), for an arbitrarily small unbounded non-decreasing function
s, and which guarantee an {\epsilon}-optimal value for Player 1 in the limit
superior sense. The previously known strategies use space {\Omega}(log T) and it
was known that no strategy can use constant space if it is {\epsilon}-optimal
even in the limit superior sense. We also give a complementary lower bound.
Furthermore, we also show that no Markov strategy, even extended with finite
memory, can ensure value greater than 0 in the Big Match, answering a question
posed by Abraham Neyman.
",rasmus ibsen-jensen,,2016.0,,arXiv,Hansen2016,True,,arXiv,Not available,The Big Match in Small Space,d6f1e1854217fe432ce0645daeed41f2,http://arxiv.org/abs/1604.07634v1
16842," In this paper we study how to play (stochastic) games optimally using little
space. We focus on repeated games with absorbing states, a type of two-player,
zero-sum concurrent mean-payoff games. The prototypical example of these games
is the well-known Big Match of Gillette (1957). These games may not allow
optimal strategies but they always have {\epsilon}-optimal strategies. In this
paper we design {\epsilon}-optimal strategies for Player 1 in these games that
use only O(log log T) space. Furthermore, we construct strategies for Player 1
that use space s(T), for an arbitrarily small unbounded non-decreasing function
s, and which guarantee an {\epsilon}-optimal value for Player 1 in the limit
superior sense. The previously known strategies use space {\Omega}(log T) and it
was known that no strategy can use constant space if it is {\epsilon}-optimal
even in the limit superior sense. We also give a complementary lower bound.
Furthermore, we also show that no Markov strategy, even extended with finite
memory, can ensure value greater than 0 in the Big Match, answering a question
posed by Abraham Neyman.
",michal koucky,,2016.0,,arXiv,Hansen2016,True,,arXiv,Not available,The Big Match in Small Space,d6f1e1854217fe432ce0645daeed41f2,http://arxiv.org/abs/1604.07634v1
16843," Mobile Crowd Sensing (MCS) is a new paradigm of sensing, which can achieve a
flexible and scalable sensing coverage with a low deployment cost, by employing
mobile users/devices to perform sensing tasks. In this work, we propose a novel
MCS framework with data reuse, where multiple tasks with common data
requirement can share (reuse) the common data with each other through an MCS
platform. We study the optimal assignment of mobile users and tasks (with data
reuse) systematically, under both information symmetry and asymmetry, depending
on whether the user cost and the task valuation are public information. In the
former case, we formulate the assignment problem as a generalized Knapsack
problem and solve the problem by using classic algorithms. In the latter case,
we propose a truthful and optimal double auction mechanism, built upon the
above Knapsack assignment problem, to elicit the private information of both
users and tasks and meanwhile achieve the same optimal assignment as under
information symmetry. Simulation results show that by allowing data reuse among
tasks, the social welfare can be increased by up to 100~380%, compared with those
without data reuse. We further show that the proposed double auction is not
budget balanced for the auctioneer, mainly due to the data reuse among tasks. To
this end, we further introduce a reserve price into the double auction (for
each data item) to achieve a desired tradeoff between the budget balance and
the social efficiency.
",xiaoru zhang,,2017.0,,arXiv,Zhang2017,True,,arXiv,Not available,A Double Auction Mechanism for Mobile Crowd Sensing with Data Reuse,d41ec63bee2052455a38fa1d5e34f22d,http://arxiv.org/abs/1708.08274v1
16844," Mobile Crowd Sensing (MCS) is a new paradigm of sensing, which can achieve a
flexible and scalable sensing coverage with a low deployment cost, by employing
mobile users/devices to perform sensing tasks. In this work, we propose a novel
MCS framework with data reuse, where multiple tasks with a common data
requirement can share (reuse) the common data with each other through an MCS
platform. We study the optimal assignment of mobile users and tasks (with data
reuse) systematically, under both information symmetry and asymmetry, depending
on whether the user cost and the task valuation are public information. In the
former case, we formulate the assignment problem as a generalized Knapsack
problem and solve the problem by using classic algorithms. In the latter case,
we propose a truthful and optimal double auction mechanism, built upon the
above Knapsack assignment problem, to elicit the private information of both
users and tasks and meanwhile achieve the same optimal assignment as under
information symmetry. Simulation results show that, by allowing data reuse among
tasks, the social welfare can be increased by 100~380%, compared with the case
without data reuse. We further show that the proposed double auction is not
budget balanced for the auctioneer, mainly due to the data reuse among tasks. To
this end, we further introduce a reserve price into the double auction (for
each data item) to achieve a desired tradeoff between the budget balance and
the social efficiency.
",lin gao,,2017.0,,arXiv,Zhang2017,True,,arXiv,Not available,A Double Auction Mechanism for Mobile Crowd Sensing with Data Reuse,d41ec63bee2052455a38fa1d5e34f22d,http://arxiv.org/abs/1708.08274v1
16845," Mobile Crowd Sensing (MCS) is a new paradigm of sensing, which can achieve a
flexible and scalable sensing coverage with a low deployment cost, by employing
mobile users/devices to perform sensing tasks. In this work, we propose a novel
MCS framework with data reuse, where multiple tasks with a common data
requirement can share (reuse) the common data with each other through an MCS
platform. We study the optimal assignment of mobile users and tasks (with data
reuse) systematically, under both information symmetry and asymmetry, depending
on whether the user cost and the task valuation are public information. In the
former case, we formulate the assignment problem as a generalized Knapsack
problem and solve the problem by using classic algorithms. In the latter case,
we propose a truthful and optimal double auction mechanism, built upon the
above Knapsack assignment problem, to elicit the private information of both
users and tasks and meanwhile achieve the same optimal assignment as under
information symmetry. Simulation results show that, by allowing data reuse among
tasks, the social welfare can be increased by 100~380%, compared with the case
without data reuse. We further show that the proposed double auction is not
budget balanced for the auctioneer, mainly due to the data reuse among tasks. To
this end, we further introduce a reserve price into the double auction (for
each data item) to achieve a desired tradeoff between the budget balance and
the social efficiency.
",bin cao,,2017.0,,arXiv,Zhang2017,True,,arXiv,Not available,A Double Auction Mechanism for Mobile Crowd Sensing with Data Reuse,d41ec63bee2052455a38fa1d5e34f22d,http://arxiv.org/abs/1708.08274v1
16846," Mobile Crowd Sensing (MCS) is a new paradigm of sensing, which can achieve a
flexible and scalable sensing coverage with a low deployment cost, by employing
mobile users/devices to perform sensing tasks. In this work, we propose a novel
MCS framework with data reuse, where multiple tasks with a common data
requirement can share (reuse) the common data with each other through an MCS
platform. We study the optimal assignment of mobile users and tasks (with data
reuse) systematically, under both information symmetry and asymmetry, depending
on whether the user cost and the task valuation are public information. In the
former case, we formulate the assignment problem as a generalized Knapsack
problem and solve the problem by using classic algorithms. In the latter case,
we propose a truthful and optimal double auction mechanism, built upon the
above Knapsack assignment problem, to elicit the private information of both
users and tasks and meanwhile achieve the same optimal assignment as under
information symmetry. Simulation results show that, by allowing data reuse among
tasks, the social welfare can be increased by 100~380%, compared with the case
without data reuse. We further show that the proposed double auction is not
budget balanced for the auctioneer, mainly due to the data reuse among tasks. To
this end, we further introduce a reserve price into the double auction (for
each data item) to achieve a desired tradeoff between the budget balance and
the social efficiency.
",zhang li,,2017.0,,arXiv,Zhang2017,True,,arXiv,Not available,A Double Auction Mechanism for Mobile Crowd Sensing with Data Reuse,d41ec63bee2052455a38fa1d5e34f22d,http://arxiv.org/abs/1708.08274v1
16847," Mobile Crowd Sensing (MCS) is a new paradigm of sensing, which can achieve a
flexible and scalable sensing coverage with a low deployment cost, by employing
mobile users/devices to perform sensing tasks. In this work, we propose a novel
MCS framework with data reuse, where multiple tasks with a common data
requirement can share (reuse) the common data with each other through an MCS
platform. We study the optimal assignment of mobile users and tasks (with data
reuse) systematically, under both information symmetry and asymmetry, depending
on whether the user cost and the task valuation are public information. In the
former case, we formulate the assignment problem as a generalized Knapsack
problem and solve the problem by using classic algorithms. In the latter case,
we propose a truthful and optimal double auction mechanism, built upon the
above Knapsack assignment problem, to elicit the private information of both
users and tasks and meanwhile achieve the same optimal assignment as under
information symmetry. Simulation results show that, by allowing data reuse among
tasks, the social welfare can be increased by 100~380%, compared with the case
without data reuse. We further show that the proposed double auction is not
budget balanced for the auctioneer, mainly due to the data reuse among tasks. To
this end, we further introduce a reserve price into the double auction (for
each data item) to achieve a desired tradeoff between the budget balance and
the social efficiency.
",mengjing wang,,2017.0,,arXiv,Zhang2017,True,,arXiv,Not available,A Double Auction Mechanism for Mobile Crowd Sensing with Data Reuse,d41ec63bee2052455a38fa1d5e34f22d,http://arxiv.org/abs/1708.08274v1
16848," Blockchain has recently been applied in many applications such as bitcoin,
smart grid, and Internet of Things (IoT) as a public ledger of transactions.
However, the use of blockchain in mobile environments is still limited because
the mining process consumes too much computing and energy resources on mobile
devices. Edge computing offered by the Edge Computing Service Provider can be
adopted as a viable solution for offloading the mining tasks from the mobile
devices, i.e., miners, in the mobile blockchain environment. However, how to
design a mechanism for edge resource allocation that maximizes the revenue of
the Edge Computing Service Provider while ensuring incentive compatibility and
individual rationality is still an open problem. In this paper, we
develop an optimal auction based on deep learning for the edge resource
allocation. Specifically, we construct a multi-layer neural network
architecture based on an analytical solution of the optimal auction. The neural
networks first perform monotone transformations of the miners' bids. Then, they
calculate allocation and conditional payment rules for the miners. We use
valuations of the miners as the training data to adjust the parameters of the
neural networks so as to optimize the loss function, which is the expected
negated revenue of the Edge Computing Service Provider. We present
experimental results that confirm the benefits of using deep learning to
derive the optimal auction for mobile blockchain with high revenue.
",nguyen luong,,2017.0,,arXiv,Luong2017,True,,arXiv,Not available,"Optimal Auction For Edge Computing Resource Management in Mobile
Blockchain Networks: A Deep Learning Approach",a6cbb15dbd9c5cc471ceac74173011c9,http://arxiv.org/abs/1711.02844v2
16849," In a polyomino set (1,2)-achievement game the maker and the breaker
alternately mark one and two previously unmarked cells respectively. The
maker's goal is to mark a set of cells congruent to one of a given set of
polyominoes. The breaker tries to prevent the maker from achieving his goal.
The teams of polyominoes for which the maker has a winning strategy are
determined up to size 4. In set achievement games, it is natural to study
infinitely large polyominoes. This enables the construction of super winners
that characterize all winning teams up to a certain size.
",edgar fisher,,2010.0,,"Theoretical Computer Science 409 (2008), no.3, 333--340",Fisher2010,True,,arXiv,Not available,"Rectangular Polyomino Set Weak (1,2)-achievement Games",40050086e6a37106da1ba57ddcdf040e,http://arxiv.org/abs/1010.0424v1
16850," Blockchain has recently been applied in many applications such as bitcoin,
smart grid, and Internet of Things (IoT) as a public ledger of transactions.
However, the use of blockchain in mobile environments is still limited because
the mining process consumes too much computing and energy resources on mobile
devices. Edge computing offered by the Edge Computing Service Provider can be
adopted as a viable solution for offloading the mining tasks from the mobile
devices, i.e., miners, in the mobile blockchain environment. However, how to
design a mechanism for edge resource allocation that maximizes the revenue of
the Edge Computing Service Provider while ensuring incentive compatibility and
individual rationality is still an open problem. In this paper, we
develop an optimal auction based on deep learning for the edge resource
allocation. Specifically, we construct a multi-layer neural network
architecture based on an analytical solution of the optimal auction. The neural
networks first perform monotone transformations of the miners' bids. Then, they
calculate allocation and conditional payment rules for the miners. We use
valuations of the miners as the training data to adjust the parameters of the
neural networks so as to optimize the loss function, which is the expected
negated revenue of the Edge Computing Service Provider. We present
experimental results that confirm the benefits of using deep learning to
derive the optimal auction for mobile blockchain with high revenue.
",zehui xiong,,2017.0,,arXiv,Luong2017,True,,arXiv,Not available,"Optimal Auction For Edge Computing Resource Management in Mobile
Blockchain Networks: A Deep Learning Approach",a6cbb15dbd9c5cc471ceac74173011c9,http://arxiv.org/abs/1711.02844v2
16851," Blockchain has recently been applied in many applications such as bitcoin,
smart grid, and Internet of Things (IoT) as a public ledger of transactions.
However, the use of blockchain in mobile environments is still limited because
the mining process consumes too much computing and energy resources on mobile
devices. Edge computing offered by the Edge Computing Service Provider can be
adopted as a viable solution for offloading the mining tasks from the mobile
devices, i.e., miners, in the mobile blockchain environment. However, how to
design a mechanism for edge resource allocation that maximizes the revenue of
the Edge Computing Service Provider while ensuring incentive compatibility and
individual rationality is still an open problem. In this paper, we
develop an optimal auction based on deep learning for the edge resource
allocation. Specifically, we construct a multi-layer neural network
architecture based on an analytical solution of the optimal auction. The neural
networks first perform monotone transformations of the miners' bids. Then, they
calculate allocation and conditional payment rules for the miners. We use
valuations of the miners as the training data to adjust the parameters of the
neural networks so as to optimize the loss function, which is the expected
negated revenue of the Edge Computing Service Provider. We present
experimental results that confirm the benefits of using deep learning to
derive the optimal auction for mobile blockchain with high revenue.
",ping wang,,2017.0,,arXiv,Luong2017,True,,arXiv,Not available,"Optimal Auction For Edge Computing Resource Management in Mobile
Blockchain Networks: A Deep Learning Approach",a6cbb15dbd9c5cc471ceac74173011c9,http://arxiv.org/abs/1711.02844v2
16852," Blockchain has recently been applied in many applications such as bitcoin,
smart grid, and Internet of Things (IoT) as a public ledger of transactions.
However, the use of blockchain in mobile environments is still limited because
the mining process consumes too much computing and energy resources on mobile
devices. Edge computing offered by the Edge Computing Service Provider can be
adopted as a viable solution for offloading the mining tasks from the mobile
devices, i.e., miners, in the mobile blockchain environment. However, how to
design a mechanism for edge resource allocation that maximizes the revenue of
the Edge Computing Service Provider while ensuring incentive compatibility and
individual rationality is still an open problem. In this paper, we
develop an optimal auction based on deep learning for the edge resource
allocation. Specifically, we construct a multi-layer neural network
architecture based on an analytical solution of the optimal auction. The neural
networks first perform monotone transformations of the miners' bids. Then, they
calculate allocation and conditional payment rules for the miners. We use
valuations of the miners as the training data to adjust the parameters of the
neural networks so as to optimize the loss function, which is the expected
negated revenue of the Edge Computing Service Provider. We present
experimental results that confirm the benefits of using deep learning to
derive the optimal auction for mobile blockchain with high revenue.
",dusit niyato,,2017.0,,arXiv,Luong2017,True,,arXiv,Not available,"Optimal Auction For Edge Computing Resource Management in Mobile
Blockchain Networks: A Deep Learning Approach",a6cbb15dbd9c5cc471ceac74173011c9,http://arxiv.org/abs/1711.02844v2
16853," As an emerging decentralized secure data management platform, blockchain has
gained much popularity recently. To maintain a canonical state of blockchain
data record, proof-of-work based consensus protocols provide the nodes,
referred to as miners, in the network with incentives for confirming new blocks
of transactions through a process of ""block mining"" by solving a cryptographic
puzzle. Under the circumstance of limited local computing resources, e.g.,
mobile devices, it is natural for rational miners, i.e., consensus nodes, to
offload computational tasks for proof of work to the cloud/fog computing
servers. Therefore, we focus on the trading between the cloud/fog computing
service provider and miners, and propose an auction-based market model for
efficient computing resource allocation. In particular, we consider a
proof-of-work based blockchain network. Due to the competition among miners in
the blockchain network, the allocative externalities are particularly taken
into account when designing the auction mechanisms. Specifically, we consider
two bidding schemes: the constant-demand scheme where each miner bids for a
fixed quantity of resources, and the multi-demand scheme where the miners can
submit their preferable demands and bids. For the constant-demand bidding
scheme, we propose an auction mechanism that achieves optimal social welfare.
In the multi-demand bidding scheme, the social welfare maximization problem is
NP-hard. Therefore, we design an approximation algorithm that guarantees
truthfulness, individual rationality, and computational efficiency. Through
extensive simulations, we show that our proposed auction mechanisms with the
two bidding schemes can efficiently maximize the social welfare of the
blockchain network and provide effective strategies for the cloud/fog computing
service provider.
",yutao jiao,,2018.0,,arXiv,Jiao2018,True,,arXiv,Not available,"Auction Mechanisms in Cloud/Fog Computing Resource Allocation for Public
Blockchain Networks",0ca98fcca46f928009f2386167d86b5a,http://arxiv.org/abs/1804.09961v2
16854," As an emerging decentralized secure data management platform, blockchain has
gained much popularity recently. To maintain a canonical state of blockchain
data record, proof-of-work based consensus protocols provide the nodes,
referred to as miners, in the network with incentives for confirming new blocks
of transactions through a process of ""block mining"" by solving a cryptographic
puzzle. Under the circumstance of limited local computing resources, e.g.,
mobile devices, it is natural for rational miners, i.e., consensus nodes, to
offload computational tasks for proof of work to the cloud/fog computing
servers. Therefore, we focus on the trading between the cloud/fog computing
service provider and miners, and propose an auction-based market model for
efficient computing resource allocation. In particular, we consider a
proof-of-work based blockchain network. Due to the competition among miners in
the blockchain network, the allocative externalities are particularly taken
into account when designing the auction mechanisms. Specifically, we consider
two bidding schemes: the constant-demand scheme where each miner bids for a
fixed quantity of resources, and the multi-demand scheme where the miners can
submit their preferable demands and bids. For the constant-demand bidding
scheme, we propose an auction mechanism that achieves optimal social welfare.
In the multi-demand bidding scheme, the social welfare maximization problem is
NP-hard. Therefore, we design an approximation algorithm that guarantees
truthfulness, individual rationality, and computational efficiency. Through
extensive simulations, we show that our proposed auction mechanisms with the
two bidding schemes can efficiently maximize the social welfare of the
blockchain network and provide effective strategies for the cloud/fog computing
service provider.
",ping wang,,2018.0,,arXiv,Jiao2018,True,,arXiv,Not available,"Auction Mechanisms in Cloud/Fog Computing Resource Allocation for Public
Blockchain Networks",0ca98fcca46f928009f2386167d86b5a,http://arxiv.org/abs/1804.09961v2
16855," As an emerging decentralized secure data management platform, blockchain has
gained much popularity recently. To maintain a canonical state of blockchain
data record, proof-of-work based consensus protocols provide the nodes,
referred to as miners, in the network with incentives for confirming new blocks
of transactions through a process of ""block mining"" by solving a cryptographic
puzzle. Under the circumstance of limited local computing resources, e.g.,
mobile devices, it is natural for rational miners, i.e., consensus nodes, to
offload computational tasks for proof of work to the cloud/fog computing
servers. Therefore, we focus on the trading between the cloud/fog computing
service provider and miners, and propose an auction-based market model for
efficient computing resource allocation. In particular, we consider a
proof-of-work based blockchain network. Due to the competition among miners in
the blockchain network, the allocative externalities are particularly taken
into account when designing the auction mechanisms. Specifically, we consider
two bidding schemes: the constant-demand scheme where each miner bids for a
fixed quantity of resources, and the multi-demand scheme where the miners can
submit their preferable demands and bids. For the constant-demand bidding
scheme, we propose an auction mechanism that achieves optimal social welfare.
In the multi-demand bidding scheme, the social welfare maximization problem is
NP-hard. Therefore, we design an approximation algorithm that guarantees
truthfulness, individual rationality, and computational efficiency. Through
extensive simulations, we show that our proposed auction mechanisms with the
two bidding schemes can efficiently maximize the social welfare of the
blockchain network and provide effective strategies for the cloud/fog computing
service provider.
",dusit niyato,,2018.0,,arXiv,Jiao2018,True,,arXiv,Not available,"Auction Mechanisms in Cloud/Fog Computing Resource Allocation for Public
Blockchain Networks",0ca98fcca46f928009f2386167d86b5a,http://arxiv.org/abs/1804.09961v2
16856," As an emerging decentralized secure data management platform, blockchain has
gained much popularity recently. To maintain a canonical state of blockchain
data record, proof-of-work based consensus protocols provide the nodes,
referred to as miners, in the network with incentives for confirming new blocks
of transactions through a process of ""block mining"" by solving a cryptographic
puzzle. Under the circumstance of limited local computing resources, e.g.,
mobile devices, it is natural for rational miners, i.e., consensus nodes, to
offload computational tasks for proof of work to the cloud/fog computing
servers. Therefore, we focus on the trading between the cloud/fog computing
service provider and miners, and propose an auction-based market model for
efficient computing resource allocation. In particular, we consider a
proof-of-work based blockchain network. Due to the competition among miners in
the blockchain network, the allocative externalities are particularly taken
into account when designing the auction mechanisms. Specifically, we consider
two bidding schemes: the constant-demand scheme where each miner bids for a
fixed quantity of resources, and the multi-demand scheme where the miners can
submit their preferable demands and bids. For the constant-demand bidding
scheme, we propose an auction mechanism that achieves optimal social welfare.
In the multi-demand bidding scheme, the social welfare maximization problem is
NP-hard. Therefore, we design an approximation algorithm that guarantees
truthfulness, individual rationality, and computational efficiency. Through
extensive simulations, we show that our proposed auction mechanisms with the
two bidding schemes can efficiently maximize the social welfare of the
blockchain network and provide effective strategies for the cloud/fog computing
service provider.
",kongrath suankaewmanee,,2018.0,,arXiv,Jiao2018,True,,arXiv,Not available,"Auction Mechanisms in Cloud/Fog Computing Resource Allocation for Public
Blockchain Networks",0ca98fcca46f928009f2386167d86b5a,http://arxiv.org/abs/1804.09961v2
16857," Sponsored search in E-commerce platforms such as Amazon, Taobao and Tmall
provides sellers an effective way to reach potential buyers with the most
relevant purposes. In this paper, we study the auction mechanism optimization problem in
sponsored search on Alibaba's mobile E-commerce platform. Besides generating
revenue, we are supposed to maintain an efficient marketplace with plenty of
quality users, guarantee a reasonable return on investment (ROI) for
advertisers, and meanwhile, facilitate a pleasant shopping experience for the
users. These requirements essentially pose a constrained optimization problem.
Directly optimizing over auction parameters yields a discontinuous, non-convex
problem that denies effective solutions. One of our major contributions is a
practical convex optimization formulation of the original problem. We devise a
novel re-parametrization of auction mechanism with discrete sets of
representative instances. To construct the optimization problem, we build an
auction simulation system which estimates the resulting business indicators of
the selected parameters by replaying the auctions recorded from real online
requests. We summarize the experiments on real search traffic to analyze the
effects of the fidelity of the auction simulation, the efficacy under various
constraint targets, and the influence of regularization. The experimental results
show that with proper entropy regularization, we are able to maximize revenue
while constraining other business indicators within given ranges.
",gang bai,,2018.0,,arXiv,Bai2018,True,,arXiv,Not available,"Practical Constrained Optimization of Auction Mechanisms in E-Commerce
Sponsored Search Advertising",84850ad79c8e9f9921fe4c6a27dd60b1,http://arxiv.org/abs/1807.11790v1
16858," Sponsored search in E-commerce platforms such as Amazon, Taobao and Tmall
provides sellers an effective way to reach potential buyers with the most
relevant purposes. In this paper, we study the auction mechanism optimization problem in
sponsored search on Alibaba's mobile E-commerce platform. Besides generating
revenue, we are supposed to maintain an efficient marketplace with plenty of
quality users, guarantee a reasonable return on investment (ROI) for
advertisers, and meanwhile, facilitate a pleasant shopping experience for the
users. These requirements essentially pose a constrained optimization problem.
Directly optimizing over auction parameters yields a discontinuous, non-convex
problem that denies effective solutions. One of our major contributions is a
practical convex optimization formulation of the original problem. We devise a
novel re-parametrization of auction mechanism with discrete sets of
representative instances. To construct the optimization problem, we build an
auction simulation system which estimates the resulting business indicators of
the selected parameters by replaying the auctions recorded from real online
requests. We summarize the experiments on real search traffic to analyze the
effects of the fidelity of the auction simulation, the efficacy under various
constraint targets, and the influence of regularization. The experimental results
show that with proper entropy regularization, we are able to maximize revenue
while constraining other business indicators within given ranges.
",zhihui xie,,2018.0,,arXiv,Bai2018,True,,arXiv,Not available,"Practical Constrained Optimization of Auction Mechanisms in E-Commerce
Sponsored Search Advertising",84850ad79c8e9f9921fe4c6a27dd60b1,http://arxiv.org/abs/1807.11790v1
16859," Sponsored search in E-commerce platforms such as Amazon, Taobao and Tmall
provides sellers an effective way to reach potential buyers with the most
relevant purposes. In this paper, we study the auction mechanism optimization problem in
sponsored search on Alibaba's mobile E-commerce platform. Besides generating
revenue, we are supposed to maintain an efficient marketplace with plenty of
quality users, guarantee a reasonable return on investment (ROI) for
advertisers, and meanwhile, facilitate a pleasant shopping experience for the
users. These requirements essentially pose a constrained optimization problem.
Directly optimizing over auction parameters yields a discontinuous, non-convex
problem that denies effective solutions. One of our major contributions is a
practical convex optimization formulation of the original problem. We devise a
novel re-parametrization of auction mechanism with discrete sets of
representative instances. To construct the optimization problem, we build an
auction simulation system which estimates the resulting business indicators of
the selected parameters by replaying the auctions recorded from real online
requests. We summarize the experiments on real search traffic to analyze the
effects of the fidelity of the auction simulation, the efficacy under various
constraint targets, and the influence of regularization. The experimental results
show that with proper entropy regularization, we are able to maximize revenue
while constraining other business indicators within given ranges.
",liang wang,,2018.0,,arXiv,Bai2018,True,,arXiv,Not available,"Practical Constrained Optimization of Auction Mechanisms in E-Commerce
Sponsored Search Advertising",84850ad79c8e9f9921fe4c6a27dd60b1,http://arxiv.org/abs/1807.11790v1
16860," We study an ensemble of individuals playing the two games of the so-called
Parrondo paradox. In our study, players are allowed to choose the game to be
played by the whole ensemble in each turn. The choice cannot conform to the
preferences of all the players and, consequently, they face a simple
frustration phenomenon that requires some strategy to make a collective
decision. We consider several such strategies and analyze how fluctuations can
be used to improve the performance of the system.
",j. parrondo,,2014.0,10.1140/epjst/e2007-00068-0,"Eur. Phys. J. Special Topics 143, 39 (2007)",Parrondo2014,True,,arXiv,Not available,Collective decision making and paradoxical games,c655f281edf34ee886edc2b09cf69f10,http://arxiv.org/abs/1410.0241v1
16861," In a polyomino set (1,2)-achievement game the maker and the breaker
alternately mark one and two previously unmarked cells respectively. The
maker's goal is to mark a set of cells congruent to one of a given set of
polyominoes. The breaker tries to prevent the maker from achieving his goal.
The teams of polyominoes for which the maker has a winning strategy are
determined up to size 4. In set achievement games, it is natural to study
infinitely large polyominoes. This enables the construction of super winners
that characterize all winning teams up to a certain size.
",nandor sieben,,2010.0,,"Theoretical Computer Science 409 (2008), no.3, 333--340",Fisher2010,True,,arXiv,Not available,"Rectangular Polyomino Set Weak (1,2)-achievement Games",40050086e6a37106da1ba57ddcdf040e,http://arxiv.org/abs/1010.0424v1
16862," We present a deterministic exploration mechanism for sponsored search
auctions, which enables the auctioneer to learn the relevance scores of
advertisers, and allows advertisers to estimate the true value of clicks
generated at the auction site. This exploratory mechanism deviates only
minimally from the mechanism being currently used by Google and Yahoo! in the
sense that it retains the same pricing rule, a similar ranking scheme, and a
similar mathematical structure of payoffs. In particular, the estimations
of the relevance scores and true-values are achieved by providing a chance to
lower ranked advertisers to obtain better slots. This allows the search engine
to potentially test a new pool of advertisers, and correspondingly, enables new
advertisers to estimate the value of clicks/leads generated via the auction.
Both these quantities are unknown a priori, and their knowledge is necessary
for the auction to operate efficiently. We show that such an exploration policy
can be incorporated without any significant loss in revenue for the auctioneer.
We compare the revenue of the new mechanism to that of the standard mechanism
at their corresponding symmetric Nash equilibria and compute the cost of
uncertainty, which is defined as the relative loss in expected revenue per
impression. We also bound the loss in efficiency, as well as in user
experience, due to exploration, under the same solution concept (i.e., SNE). Thus
the proposed exploration mechanism learns the relevance scores while
incorporating the incentive constraints from the advertisers who are selfish
and are trying to maximize their own profits, and therefore, the exploration is
essentially achieved via mechanism design. We also discuss variations of the
new mechanism such as truthful implementations.
",sudhir singh,,2007.0,,arXiv,Singh2007,True,,arXiv,Not available,Exploration via design and the cost of uncertainty in keyword auctions,277df6185c4219da0f7407dffa7bd853,http://arxiv.org/abs/0707.1053v2
16863," We present a deterministic exploration mechanism for sponsored search
auctions, which enables the auctioneer to learn the relevance scores of
advertisers, and allows advertisers to estimate the true value of clicks
generated at the auction site. This exploratory mechanism deviates only
minimally from the mechanism being currently used by Google and Yahoo! in the
sense that it retains the same pricing rule, similar ranking scheme, as well
as similar mathematical structure of payoffs. In particular, the estimations
of the relevance scores and true-values are achieved by providing a chance to
lower ranked advertisers to obtain better slots. This allows the search engine
to potentially test a new pool of advertisers, and correspondingly, enables new
advertisers to estimate the value of clicks/leads generated via the auction.
Both these quantities are unknown a priori, and their knowledge is necessary
for the auction to operate efficiently. We show that such an exploration policy
can be incorporated without any significant loss in revenue for the auctioneer.
We compare the revenue of the new mechanism to that of the standard mechanism
at their corresponding symmetric Nash equilibria and compute the cost of
uncertainty, which is defined as the relative loss in expected revenue per
impression. We also bound the loss in efficiency, as well as in user
experience due to exploration, under the same solution concept (i.e., SNE). Thus
the proposed exploration mechanism learns the relevance scores while
incorporating the incentive constraints from the advertisers who are selfish
and are trying to maximize their own profits, and therefore, the exploration is
essentially achieved via mechanism design. We also discuss variations of the
new mechanism such as truthful implementations.
",vwani roychowdhury,,2007.0,,arXiv,Singh2007,True,,arXiv,Not available,Exploration via design and the cost of uncertainty in keyword auctions,277df6185c4219da0f7407dffa7bd853,http://arxiv.org/abs/0707.1053v2
16864," We present a deterministic exploration mechanism for sponsored search
auctions, which enables the auctioneer to learn the relevance scores of
advertisers, and allows advertisers to estimate the true value of clicks
generated at the auction site. This exploratory mechanism deviates only
minimally from the mechanism being currently used by Google and Yahoo! in the
sense that it retains the same pricing rule, similar ranking scheme, as well
as similar mathematical structure of payoffs. In particular, the estimations
of the relevance scores and true-values are achieved by providing a chance to
lower ranked advertisers to obtain better slots. This allows the search engine
to potentially test a new pool of advertisers, and correspondingly, enables new
advertisers to estimate the value of clicks/leads generated via the auction.
Both these quantities are unknown a priori, and their knowledge is necessary
for the auction to operate efficiently. We show that such an exploration policy
can be incorporated without any significant loss in revenue for the auctioneer.
We compare the revenue of the new mechanism to that of the standard mechanism
at their corresponding symmetric Nash equilibria and compute the cost of
uncertainty, which is defined as the relative loss in expected revenue per
impression. We also bound the loss in efficiency, as well as in user
experience due to exploration, under the same solution concept (i.e., SNE). Thus
the proposed exploration mechanism learns the relevance scores while
incorporating the incentive constraints from the advertisers who are selfish
and are trying to maximize their own profits, and therefore, the exploration is
essentially achieved via mechanism design. We also discuss variations of the
new mechanism such as truthful implementations.
",milan bradonjic,,2007.0,,arXiv,Singh2007,True,,arXiv,Not available,Exploration via design and the cost of uncertainty in keyword auctions,277df6185c4219da0f7407dffa7bd853,http://arxiv.org/abs/0707.1053v2
16865," We present a deterministic exploration mechanism for sponsored search
auctions, which enables the auctioneer to learn the relevance scores of
advertisers, and allows advertisers to estimate the true value of clicks
generated at the auction site. This exploratory mechanism deviates only
minimally from the mechanism being currently used by Google and Yahoo! in the
sense that it retains the same pricing rule, similar ranking scheme, as well
as similar mathematical structure of payoffs. In particular, the estimations
of the relevance scores and true-values are achieved by providing a chance to
lower ranked advertisers to obtain better slots. This allows the search engine
to potentially test a new pool of advertisers, and correspondingly, enables new
advertisers to estimate the value of clicks/leads generated via the auction.
Both these quantities are unknown a priori, and their knowledge is necessary
for the auction to operate efficiently. We show that such an exploration policy
can be incorporated without any significant loss in revenue for the auctioneer.
We compare the revenue of the new mechanism to that of the standard mechanism
at their corresponding symmetric Nash equilibria and compute the cost of
uncertainty, which is defined as the relative loss in expected revenue per
impression. We also bound the loss in efficiency, as well as in user
experience due to exploration, under the same solution concept (i.e., SNE). Thus
the proposed exploration mechanism learns the relevance scores while
incorporating the incentive constraints from the advertisers who are selfish
and are trying to maximize their own profits, and therefore, the exploration is
essentially achieved via mechanism design. We also discuss variations of the
new mechanism such as truthful implementations.
",behnam rezaei,,2007.0,,arXiv,Singh2007,True,,arXiv,Not available,Exploration via design and the cost of uncertainty in keyword auctions,277df6185c4219da0f7407dffa7bd853,http://arxiv.org/abs/0707.1053v2
16866," In this paper, we derive bounds for profit maximizing prior-free procurement
auctions where a buyer wishes to procure multiple units of a homogeneous item
from n sellers who are strategic about their per unit valuation. The buyer
earns the profit by reselling these units in an external consumer market. The
paper looks at three scenarios of increasing complexity. First, we look at unit
capacity sellers where per unit valuation is private information of each seller
and the revenue curve is concave. For this setting, we define two benchmarks.
We show that no randomized prior-free auction can be constant competitive
against either of these two benchmarks. However, for a lightly constrained
benchmark we design a prior-free auction PEPA (Profit Extracting Procurement
Auction) which is 4-competitive and we show this bound is tight. Second, we
study a setting where the sellers have non-unit capacities that are common
knowledge and derive similar results. In particular, we propose a prior-free
auction PEPAC (Profit Extracting Procurement Auction with Capacity) which is
truthful for any concave revenue curve. Third, we obtain results in the
inherently harder bi-dimensional case where per unit valuation as well as
capacities are private information of the sellers. We show that PEPAC is
truthful and constant competitive for the specific case of linear revenue
curves. We believe that this paper represents the first set of results on
single dimensional and bi-dimensional profit maximizing prior-free multi-unit
procurement auctions.
",arupratan ray,,2015.0,,arXiv,Ray2015,True,,arXiv,Not available,"Profit Maximizing Prior-free Multi-unit Procurement Auctions with
Capacitated Sellers",e3ec1995f04ba0b2caedcfd2ee1bf9a3,http://arxiv.org/abs/1504.01020v1
16867," In this paper, we derive bounds for profit maximizing prior-free procurement
auctions where a buyer wishes to procure multiple units of a homogeneous item
from n sellers who are strategic about their per unit valuation. The buyer
earns the profit by reselling these units in an external consumer market. The
paper looks at three scenarios of increasing complexity. First, we look at unit
capacity sellers where per unit valuation is private information of each seller
and the revenue curve is concave. For this setting, we define two benchmarks.
We show that no randomized prior-free auction can be constant competitive
against either of these two benchmarks. However, for a lightly constrained
benchmark we design a prior-free auction PEPA (Profit Extracting Procurement
Auction) which is 4-competitive and we show this bound is tight. Second, we
study a setting where the sellers have non-unit capacities that are common
knowledge and derive similar results. In particular, we propose a prior-free
auction PEPAC (Profit Extracting Procurement Auction with Capacity) which is
truthful for any concave revenue curve. Third, we obtain results in the
inherently harder bi-dimensional case where per unit valuation as well as
capacities are private information of the sellers. We show that PEPAC is
truthful and constant competitive for the specific case of linear revenue
curves. We believe that this paper represents the first set of results on
single dimensional and bi-dimensional profit maximizing prior-free multi-unit
procurement auctions.
",debmalya mandal,,2015.0,,arXiv,Ray2015,True,,arXiv,Not available,"Profit Maximizing Prior-free Multi-unit Procurement Auctions with
Capacitated Sellers",e3ec1995f04ba0b2caedcfd2ee1bf9a3,http://arxiv.org/abs/1504.01020v1
16868," In this paper, we derive bounds for profit maximizing prior-free procurement
auctions where a buyer wishes to procure multiple units of a homogeneous item
from n sellers who are strategic about their per unit valuation. The buyer
earns the profit by reselling these units in an external consumer market. The
paper looks at three scenarios of increasing complexity. First, we look at unit
capacity sellers where per unit valuation is private information of each seller
and the revenue curve is concave. For this setting, we define two benchmarks.
We show that no randomized prior-free auction can be constant competitive
against either of these two benchmarks. However, for a lightly constrained
benchmark we design a prior-free auction PEPA (Profit Extracting Procurement
Auction) which is 4-competitive and we show this bound is tight. Second, we
study a setting where the sellers have non-unit capacities that are common
knowledge and derive similar results. In particular, we propose a prior-free
auction PEPAC (Profit Extracting Procurement Auction with Capacity) which is
truthful for any concave revenue curve. Third, we obtain results in the
inherently harder bi-dimensional case where per unit valuation as well as
capacities are private information of the sellers. We show that PEPAC is
truthful and constant competitive for the specific case of linear revenue
curves. We believe that this paper represents the first set of results on
single dimensional and bi-dimensional profit maximizing prior-free multi-unit
procurement auctions.
",y. narahari,,2015.0,,arXiv,Ray2015,True,,arXiv,Not available,"Profit Maximizing Prior-free Multi-unit Procurement Auctions with
Capacitated Sellers",e3ec1995f04ba0b2caedcfd2ee1bf9a3,http://arxiv.org/abs/1504.01020v1
16869," Sponsored search auctions constitute one of the most successful applications
of microeconomic mechanisms. In mechanism design, auctions are usually designed
to incentivize advertisers to bid their truthful valuations and to assure both
the advertisers and the auctioneer a non-negative utility. Nonetheless, in
sponsored search auctions, the click-through-rates (CTRs) of the advertisers
are often unknown to the auctioneer and thus standard truthful mechanisms
cannot be directly applied and must be paired with an effective learning
algorithm for the estimation of the CTRs. This introduces the critical problem
of designing a learning mechanism able to estimate the CTRs at the same time as
implementing a truthful mechanism with a revenue loss as small as possible
compared to an optimal mechanism designed with the true CTRs. Previous work
showed that, when dominant-strategy truthfulness is adopted, in single-slot
auctions the problem can be solved using suitable exploration-exploitation
mechanisms able to achieve a per-step regret (over the auctioneer's revenue) of
order $O(T^{-1/3})$ (where T is the number of times the auction is repeated).
It is also known that, when truthfulness in expectation is adopted, a per-step
regret (over the social welfare) of order $O(T^{-1/2})$ can be obtained. In
this paper we extend the results known in the literature to the case of
multi-slot auctions. In this case, a model of the user is needed to
characterize how the advertisers' valuations change over the slots. We adopt
the cascade model, the best-known model in the literature for sponsored
search auctions. We prove a number of novel upper and lower bounds both
on the auctioneer's revenue loss and social welfare w.r.t. the VCG auction
and we report numerical simulations investigating the accuracy of the bounds in
predicting the dependency of the regret on the auction parameters.
",nicola gatti,,2014.0,,arXiv,Gatti2014,True,,arXiv,Not available,"Truthful Learning Mechanisms for Multi-Slot Sponsored Search Auctions
with Externalities",64e4b83f203a04afb87315851bae690c,http://arxiv.org/abs/1405.2484v1
16870," Sponsored search auctions constitute one of the most successful applications
of microeconomic mechanisms. In mechanism design, auctions are usually designed
to incentivize advertisers to bid their truthful valuations and to assure both
the advertisers and the auctioneer a non-negative utility. Nonetheless, in
sponsored search auctions, the click-through-rates (CTRs) of the advertisers
are often unknown to the auctioneer and thus standard truthful mechanisms
cannot be directly applied and must be paired with an effective learning
algorithm for the estimation of the CTRs. This introduces the critical problem
of designing a learning mechanism able to estimate the CTRs at the same time as
implementing a truthful mechanism with a revenue loss as small as possible
compared to an optimal mechanism designed with the true CTRs. Previous work
showed that, when dominant-strategy truthfulness is adopted, in single-slot
auctions the problem can be solved using suitable exploration-exploitation
mechanisms able to achieve a per-step regret (over the auctioneer's revenue) of
order $O(T^{-1/3})$ (where T is the number of times the auction is repeated).
It is also known that, when truthfulness in expectation is adopted, a per-step
regret (over the social welfare) of order $O(T^{-1/2})$ can be obtained. In
this paper we extend the results known in the literature to the case of
multi-slot auctions. In this case, a model of the user is needed to
characterize how the advertisers' valuations change over the slots. We adopt
the cascade model, the best-known model in the literature for sponsored
search auctions. We prove a number of novel upper and lower bounds both
on the auctioneer's revenue loss and social welfare w.r.t. the VCG auction
and we report numerical simulations investigating the accuracy of the bounds in
predicting the dependency of the regret on the auction parameters.
",alessandro lazaric,,2014.0,,arXiv,Gatti2014,True,,arXiv,Not available,"Truthful Learning Mechanisms for Multi-Slot Sponsored Search Auctions
with Externalities",64e4b83f203a04afb87315851bae690c,http://arxiv.org/abs/1405.2484v1
16871," Sponsored search auctions constitute one of the most successful applications
of microeconomic mechanisms. In mechanism design, auctions are usually designed
to incentivize advertisers to bid their truthful valuations and to assure both
the advertisers and the auctioneer a non-negative utility. Nonetheless, in
sponsored search auctions, the click-through-rates (CTRs) of the advertisers
are often unknown to the auctioneer and thus standard truthful mechanisms
cannot be directly applied and must be paired with an effective learning
algorithm for the estimation of the CTRs. This introduces the critical problem
of designing a learning mechanism able to estimate the CTRs at the same time as
implementing a truthful mechanism with a revenue loss as small as possible
compared to an optimal mechanism designed with the true CTRs. Previous work
showed that, when dominant-strategy truthfulness is adopted, in single-slot
auctions the problem can be solved using suitable exploration-exploitation
mechanisms able to achieve a per-step regret (over the auctioneer's revenue) of
order $O(T^{-1/3})$ (where T is the number of times the auction is repeated).
It is also known that, when truthfulness in expectation is adopted, a per-step
regret (over the social welfare) of order $O(T^{-1/2})$ can be obtained. In
this paper we extend the results known in the literature to the case of
multi-slot auctions. In this case, a model of the user is needed to
characterize how the advertisers' valuations change over the slots. We adopt
the cascade model, the best-known model in the literature for sponsored
search auctions. We prove a number of novel upper and lower bounds both
on the auctioneer's revenue loss and social welfare w.r.t. the VCG auction
and we report numerical simulations investigating the accuracy of the bounds in
predicting the dependency of the regret on the auction parameters.
",marco rocco,,2014.0,,arXiv,Gatti2014,True,,arXiv,Not available,"Truthful Learning Mechanisms for Multi-Slot Sponsored Search Auctions
with Externalities",64e4b83f203a04afb87315851bae690c,http://arxiv.org/abs/1405.2484v1
16872," This paper considers the problem of cooperative power control in distributed
small cell wireless networks. We introduce a novel framework, based on repeated
games, which models the interactions of the different transmit base stations in
the downlink. By exploiting the specific structure of the game, we show that we
can improve the system performance by selecting the Pareto optimal solution as
well as reduce the price of stability.
",mael treust,,2010.0,,"Future Network and MobileSummit 2010, Italy (2010)",Treust2010,True,,arXiv,Not available,Coverage games in small cells networks,cd3dedd97ed8ced89ca14172805d2e51,http://arxiv.org/abs/1011.4366v1
16873," Sponsored search auctions constitute one of the most successful applications
of microeconomic mechanisms. In mechanism design, auctions are usually designed
to incentivize advertisers to bid their truthful valuations and to assure both
the advertisers and the auctioneer a non-negative utility. Nonetheless, in
sponsored search auctions, the click-through-rates (CTRs) of the advertisers
are often unknown to the auctioneer and thus standard truthful mechanisms
cannot be directly applied and must be paired with an effective learning
algorithm for the estimation of the CTRs. This introduces the critical problem
of designing a learning mechanism able to estimate the CTRs at the same time as
implementing a truthful mechanism with a revenue loss as small as possible
compared to an optimal mechanism designed with the true CTRs. Previous work
showed that, when dominant-strategy truthfulness is adopted, in single-slot
auctions the problem can be solved using suitable exploration-exploitation
mechanisms able to achieve a per-step regret (over the auctioneer's revenue) of
order $O(T^{-1/3})$ (where T is the number of times the auction is repeated).
It is also known that, when truthfulness in expectation is adopted, a per-step
regret (over the social welfare) of order $O(T^{-1/2})$ can be obtained. In
this paper we extend the results known in the literature to the case of
multi-slot auctions. In this case, a model of the user is needed to
characterize how the advertisers' valuations change over the slots. We adopt
the cascade model, the best-known model in the literature for sponsored
search auctions. We prove a number of novel upper and lower bounds both
on the auctioneer's revenue loss and social welfare w.r.t. the VCG auction
and we report numerical simulations investigating the accuracy of the bounds in
predicting the dependency of the regret on the auction parameters.
",francesco trovo,,2014.0,,arXiv,Gatti2014,True,,arXiv,Not available,"Truthful Learning Mechanisms for Multi-Slot Sponsored Search Auctions
with Externalities",64e4b83f203a04afb87315851bae690c,http://arxiv.org/abs/1405.2484v1
16874," A seminal result of Bulow and Klemperer [1989] demonstrates the power of
competition for extracting revenue: when selling a single item to $n$ bidders
whose values are drawn i.i.d. from a regular distribution, the simple
welfare-maximizing VCG mechanism (in this case, a second-price auction) with
one additional bidder extracts at least as much revenue in expectation as the
optimal mechanism. The beauty of this theorem stems from the fact that VCG is a
{\em prior-independent} mechanism, where the seller possesses no information
about the distribution, and yet, by recruiting one additional bidder it
performs better than any prior-dependent mechanism tailored exactly to the
distribution at hand (without the additional bidder).
In this work, we establish the first {\em full Bulow-Klemperer} results in
{\em multi-dimensional} environments, proving that by recruiting additional
bidders, the revenue of the VCG mechanism surpasses that of the optimal
(possibly randomized, Bayesian incentive compatible) mechanism. For a given
environment with i.i.d. bidders, we term the number of additional bidders
needed to achieve this guarantee the environment's {\em competition
complexity}.
Using the recent duality-based framework of Cai et al. [2016] for reasoning
about optimal revenue, we show that the competition complexity of $n$ bidders
with additive valuations over $m$ independent, regular items is at most
$n+2m-2$ and at least $\log(m)$. We extend our results to bidders with additive
valuations subject to downward-closed constraints, showing that these
significantly more general valuations increase the competition complexity by at
most an additive $m-1$ factor. We further improve this bound for the special
case of matroid constraints, and provide additional extensions as well.
",alon eden,,2016.0,,arXiv,Eden2016,True,,arXiv,Not available,"The Competition Complexity of Auctions: A Bulow-Klemperer Result for
Multi-Dimensional Bidders",f765706bd28e237aab603f412cb10b1d,http://arxiv.org/abs/1612.08821v1
16875," A seminal result of Bulow and Klemperer [1989] demonstrates the power of
competition for extracting revenue: when selling a single item to $n$ bidders
whose values are drawn i.i.d. from a regular distribution, the simple
welfare-maximizing VCG mechanism (in this case, a second-price auction) with
one additional bidder extracts at least as much revenue in expectation as the
optimal mechanism. The beauty of this theorem stems from the fact that VCG is a
{\em prior-independent} mechanism, where the seller possesses no information
about the distribution, and yet, by recruiting one additional bidder it
performs better than any prior-dependent mechanism tailored exactly to the
distribution at hand (without the additional bidder).
In this work, we establish the first {\em full Bulow-Klemperer} results in
{\em multi-dimensional} environments, proving that by recruiting additional
bidders, the revenue of the VCG mechanism surpasses that of the optimal
(possibly randomized, Bayesian incentive compatible) mechanism. For a given
environment with i.i.d. bidders, we term the number of additional bidders
needed to achieve this guarantee the environment's {\em competition
complexity}.
Using the recent duality-based framework of Cai et al. [2016] for reasoning
about optimal revenue, we show that the competition complexity of $n$ bidders
with additive valuations over $m$ independent, regular items is at most
$n+2m-2$ and at least $\log(m)$. We extend our results to bidders with additive
valuations subject to downward-closed constraints, showing that these
significantly more general valuations increase the competition complexity by at
most an additive $m-1$ factor. We further improve this bound for the special
case of matroid constraints, and provide additional extensions as well.
",michal feldman,,2016.0,,arXiv,Eden2016,True,,arXiv,Not available,"The Competition Complexity of Auctions: A Bulow-Klemperer Result for
Multi-Dimensional Bidders",f765706bd28e237aab603f412cb10b1d,http://arxiv.org/abs/1612.08821v1
16876," A seminal result of Bulow and Klemperer [1989] demonstrates the power of
competition for extracting revenue: when selling a single item to $n$ bidders
whose values are drawn i.i.d. from a regular distribution, the simple
welfare-maximizing VCG mechanism (in this case, a second-price auction) with
one additional bidder extracts at least as much revenue in expectation as the
optimal mechanism. The beauty of this theorem stems from the fact that VCG is a
{\em prior-independent} mechanism, where the seller possesses no information
about the distribution, and yet, by recruiting one additional bidder it
performs better than any prior-dependent mechanism tailored exactly to the
distribution at hand (without the additional bidder).
In this work, we establish the first {\em full Bulow-Klemperer} results in
{\em multi-dimensional} environments, proving that by recruiting additional
bidders, the revenue of the VCG mechanism surpasses that of the optimal
(possibly randomized, Bayesian incentive compatible) mechanism. For a given
environment with i.i.d. bidders, we term the number of additional bidders
needed to achieve this guarantee the environment's {\em competition
complexity}.
Using the recent duality-based framework of Cai et al. [2016] for reasoning
about optimal revenue, we show that the competition complexity of $n$ bidders
with additive valuations over $m$ independent, regular items is at most
$n+2m-2$ and at least $\log(m)$. We extend our results to bidders with additive
valuations subject to downward-closed constraints, showing that these
significantly more general valuations increase the competition complexity by at
most an additive $m-1$ factor. We further improve this bound for the special
case of matroid constraints, and provide additional extensions as well.
",ophir friedler,,2016.0,,arXiv,Eden2016,True,,arXiv,Not available,"The Competition Complexity of Auctions: A Bulow-Klemperer Result for
Multi-Dimensional Bidders",f765706bd28e237aab603f412cb10b1d,http://arxiv.org/abs/1612.08821v1
16877," A seminal result of Bulow and Klemperer [1989] demonstrates the power of
competition for extracting revenue: when selling a single item to $n$ bidders
whose values are drawn i.i.d. from a regular distribution, the simple
welfare-maximizing VCG mechanism (in this case, a second-price auction) with
one additional bidder extracts at least as much revenue in expectation as the
optimal mechanism. The beauty of this theorem stems from the fact that VCG is a
{\em prior-independent} mechanism, where the seller possesses no information
about the distribution, and yet, by recruiting one additional bidder it
performs better than any prior-dependent mechanism tailored exactly to the
distribution at hand (without the additional bidder).
In this work, we establish the first {\em full Bulow-Klemperer} results in
{\em multi-dimensional} environments, proving that by recruiting additional
bidders, the revenue of the VCG mechanism surpasses that of the optimal
(possibly randomized, Bayesian incentive compatible) mechanism. For a given
environment with i.i.d. bidders, we term the number of additional bidders
needed to achieve this guarantee the environment's {\em competition
complexity}.
Using the recent duality-based framework of Cai et al. [2016] for reasoning
about optimal revenue, we show that the competition complexity of $n$ bidders
with additive valuations over $m$ independent, regular items is at most
$n+2m-2$ and at least $\log(m)$. We extend our results to bidders with additive
valuations subject to downward-closed constraints, showing that these
significantly more general valuations increase the competition complexity by at
most an additive $m-1$ factor. We further improve this bound for the special
case of matroid constraints, and provide additional extensions as well.
",inbal talgam-cohen,,2016.0,,arXiv,Eden2016,True,,arXiv,Not available,"The Competition Complexity of Auctions: A Bulow-Klemperer Result for
Multi-Dimensional Bidders",f765706bd28e237aab603f412cb10b1d,http://arxiv.org/abs/1612.08821v1
16878," A seminal result of Bulow and Klemperer [1989] demonstrates the power of
competition for extracting revenue: when selling a single item to $n$ bidders
whose values are drawn i.i.d. from a regular distribution, the simple
welfare-maximizing VCG mechanism (in this case, a second-price auction) with
one additional bidder extracts at least as much revenue in expectation as the
optimal mechanism. The beauty of this theorem stems from the fact that VCG is a
{\em prior-independent} mechanism, where the seller possesses no information
about the distribution, and yet, by recruiting one additional bidder it
performs better than any prior-dependent mechanism tailored exactly to the
distribution at hand (without the additional bidder).
In this work, we establish the first {\em full Bulow-Klemperer} results in
{\em multi-dimensional} environments, proving that by recruiting additional
bidders, the revenue of the VCG mechanism surpasses that of the optimal
(possibly randomized, Bayesian incentive compatible) mechanism. For a given
environment with i.i.d. bidders, we term the number of additional bidders
needed to achieve this guarantee the environment's {\em competition
complexity}.
Using the recent duality-based framework of Cai et al. [2016] for reasoning
about optimal revenue, we show that the competition complexity of $n$ bidders
with additive valuations over $m$ independent, regular items is at most
$n+2m-2$ and at least $\log(m)$. We extend our results to bidders with additive
valuations subject to downward-closed constraints, showing that these
significantly more general valuations increase the competition complexity by at
most an additive $m-1$ factor. We further improve this bound for the special
case of matroid constraints, and provide additional extensions as well.
",s. weinberg,,2016.0,,arXiv,Eden2016,True,,arXiv,Not available,"The Competition Complexity of Auctions: A Bulow-Klemperer Result for
Multi-Dimensional Bidders",f765706bd28e237aab603f412cb10b1d,http://arxiv.org/abs/1612.08821v1
16879," Second-price auctions with reserve prices are widely used by the main Internet
actors because of their incentive compatibility property. We show that once
reserve prices are learned based on past bidder behavior, this auction is no
longer incentive compatible. Through a functional-analytic rather than
game-theoretic approach, we exhibit shading strategies that lead to a large
increase in revenue for the bidders. In the symmetric case, we show that there
exists a simple equilibrium strategy that enables bidders to get the revenue
they would get in a second-price auction without a reserve price. We then study the
consequences of this result on recent work on collusion in second price
auctions and prove that the proposed bidding strategies are robust to some
approximation error of the auctioneer.
",thomas nedelec,,2018.0,,arXiv,Nedelec2018,True,,arXiv,Not available,"Thresholding the virtual value: a simple method to increase welfare and
lower reserve prices in online auction systems",39d9040751116e9fdf56518f6135089f,http://arxiv.org/abs/1808.06979v1
16880," Second price auctions with a reserve price are widely used by the main
Internet actors because of their incentive compatibility property. We show that
once reserve prices are learned based on past bidder behavior, this auction is
no longer incentive compatible. Through a functional analytic rather than game
theoretic approach, we exhibit shading strategies which lead to a large
increase in revenue for the bidders. In the symmetric case, we show that there
exists a simple equilibrium strategy that enables bidders to get the revenue
they would get in a second price auction without a reserve price. We then study
the consequences of this result for recent work on collusion in second price
auctions and prove that the proposed bidding strategies are robust to some
approximation error of the auctioneer.
",marc abeille,,2018.0,,arXiv,Nedelec2018,True,,arXiv,Not available,"Thresholding the virtual value: a simple method to increase welfare and
lower reserve prices in online auction systems",39d9040751116e9fdf56518f6135089f,http://arxiv.org/abs/1808.06979v1
16881," Second price auctions with a reserve price are widely used by the main
Internet actors because of their incentive compatibility property. We show that
once reserve prices are learned based on past bidder behavior, this auction is
no longer incentive compatible. Through a functional analytic rather than game
theoretic approach, we exhibit shading strategies which lead to a large
increase in revenue for the bidders. In the symmetric case, we show that there
exists a simple equilibrium strategy that enables bidders to get the revenue
they would get in a second price auction without a reserve price. We then study
the consequences of this result for recent work on collusion in second price
auctions and prove that the proposed bidding strategies are robust to some
approximation error of the auctioneer.
",clement calauzenes,,2018.0,,arXiv,Nedelec2018,True,,arXiv,Not available,"Thresholding the virtual value: a simple method to increase welfare and
lower reserve prices in online auction systems",39d9040751116e9fdf56518f6135089f,http://arxiv.org/abs/1808.06979v1
16882," Second price auctions with a reserve price are widely used by the main
Internet actors because of their incentive compatibility property. We show that
once reserve prices are learned based on past bidder behavior, this auction is
no longer incentive compatible. Through a functional analytic rather than game
theoretic approach, we exhibit shading strategies which lead to a large
increase in revenue for the bidders. In the symmetric case, we show that there
exists a simple equilibrium strategy that enables bidders to get the revenue
they would get in a second price auction without a reserve price. We then study
the consequences of this result for recent work on collusion in second price
auctions and prove that the proposed bidding strategies are robust to some
approximation error of the auctioneer.
",noureddine karoui,,2018.0,,arXiv,Nedelec2018,True,,arXiv,Not available,"Thresholding the virtual value: a simple method to increase welfare and
lower reserve prices in online auction systems",39d9040751116e9fdf56518f6135089f,http://arxiv.org/abs/1808.06979v1
16883," This paper considers the problem of cooperative power control in distributed
small cell wireless networks. We introduce a novel framework, based on repeated
games, which models the interactions of the different transmit base stations in
the downlink. By exploiting the specific structure of the game, we show that we
can improve the system performance by selecting the Pareto optimal solution as
well as reduce the price of stability.
",hamidou tembine,,2010.0,,"Future Network and MobileSummit 2010, Italy (2010)",Treust2010,True,,arXiv,Not available,Coverage games in small cells networks,cd3dedd97ed8ced89ca14172805d2e51,http://arxiv.org/abs/1011.4366v1
16884," Second price auctions with a reserve price are widely used by the main
Internet actors because of their incentive compatibility property. We show that
once reserve prices are learned based on past bidder behavior, this auction is
no longer incentive compatible. Through a functional analytic rather than game
theoretic approach, we exhibit shading strategies which lead to a large
increase in revenue for the bidders. In the symmetric case, we show that there
exists a simple equilibrium strategy that enables bidders to get the revenue
they would get in a second price auction without a reserve price. We then study
the consequences of this result for recent work on collusion in second price
auctions and prove that the proposed bidding strategies are robust to some
approximation error of the auctioneer.
",benjamin heymann,,2018.0,,arXiv,Nedelec2018,True,,arXiv,Not available,"Thresholding the virtual value: a simple method to increase welfare and
lower reserve prices in online auction systems",39d9040751116e9fdf56518f6135089f,http://arxiv.org/abs/1808.06979v1
16885," Second price auctions with a reserve price are widely used by the main
Internet actors because of their incentive compatibility property. We show that
once reserve prices are learned based on past bidder behavior, this auction is
no longer incentive compatible. Through a functional analytic rather than game
theoretic approach, we exhibit shading strategies which lead to a large
increase in revenue for the bidders. In the symmetric case, we show that there
exists a simple equilibrium strategy that enables bidders to get the revenue
they would get in a second price auction without a reserve price. We then study
the consequences of this result for recent work on collusion in second price
auctions and prove that the proposed bidding strategies are robust to some
approximation error of the auctioneer.
",vianney perchet,,2018.0,,arXiv,Nedelec2018,True,,arXiv,Not available,"Thresholding the virtual value: a simple method to increase welfare and
lower reserve prices in online auction systems",39d9040751116e9fdf56518f6135089f,http://arxiv.org/abs/1808.06979v1
16886," Most modern financial markets use a continuous double auction mechanism to
store and match orders and facilitate trading. In this paper we develop a
microscopic dynamical statistical model for the continuous double auction under
the assumption of IID random order flow, and analyze it using simulation,
dimensional analysis, and theoretical tools based on mean field approximations.
The model makes testable predictions for basic properties of markets, such as
price volatility, the depth of stored supply and demand vs. price, the bid-ask
spread, the price impact function, and the time and probability of filling
orders. These predictions are based on properties of order flow and the limit
order book, such as share volume of market and limit orders, cancellations,
typical order size, and tick size. Because these quantities can all be measured
directly there are no free parameters. We show that the order size, which can
be cast as a nondimensional granularity parameter, is in most cases a more
significant determinant of market behavior than tick size. We also provide an
explanation for the observed highly concave nature of the price impact
function. On a broader level, this work suggests how stochastic models based on
zero-intelligence agents may be useful to probe the structure of market
institutions. Like the model of perfect rationality, a stochastic
zero-intelligence model can be used to make strong predictions based on a compact
set of assumptions, even if these assumptions are not fully believable.
",eric smith,,2002.0,10.1088/1469-7688/3/6/307,arXiv,Smith2002,True,,arXiv,Not available,Statistical theory of the continuous double auction,9e918503af2b2e7667b1d414e4b16f62,http://arxiv.org/abs/cond-mat/0210475v1
16887," Most modern financial markets use a continuous double auction mechanism to
store and match orders and facilitate trading. In this paper we develop a
microscopic dynamical statistical model for the continuous double auction under
the assumption of IID random order flow, and analyze it using simulation,
dimensional analysis, and theoretical tools based on mean field approximations.
The model makes testable predictions for basic properties of markets, such as
price volatility, the depth of stored supply and demand vs. price, the bid-ask
spread, the price impact function, and the time and probability of filling
orders. These predictions are based on properties of order flow and the limit
order book, such as share volume of market and limit orders, cancellations,
typical order size, and tick size. Because these quantities can all be measured
directly there are no free parameters. We show that the order size, which can
be cast as a nondimensional granularity parameter, is in most cases a more
significant determinant of market behavior than tick size. We also provide an
explanation for the observed highly concave nature of the price impact
function. On a broader level, this work suggests how stochastic models based on
zero-intelligence agents may be useful to probe the structure of market
institutions. Like the model of perfect rationality, a stochastic
zero-intelligence model can be used to make strong predictions based on a compact
set of assumptions, even if these assumptions are not fully believable.
",j. farmer,,2002.0,10.1088/1469-7688/3/6/307,arXiv,Smith2002,True,,arXiv,Not available,Statistical theory of the continuous double auction,9e918503af2b2e7667b1d414e4b16f62,http://arxiv.org/abs/cond-mat/0210475v1
16888," Most modern financial markets use a continuous double auction mechanism to
store and match orders and facilitate trading. In this paper we develop a
microscopic dynamical statistical model for the continuous double auction under
the assumption of IID random order flow, and analyze it using simulation,
dimensional analysis, and theoretical tools based on mean field approximations.
The model makes testable predictions for basic properties of markets, such as
price volatility, the depth of stored supply and demand vs. price, the bid-ask
spread, the price impact function, and the time and probability of filling
orders. These predictions are based on properties of order flow and the limit
order book, such as share volume of market and limit orders, cancellations,
typical order size, and tick size. Because these quantities can all be measured
directly there are no free parameters. We show that the order size, which can
be cast as a nondimensional granularity parameter, is in most cases a more
significant determinant of market behavior than tick size. We also provide an
explanation for the observed highly concave nature of the price impact
function. On a broader level, this work suggests how stochastic models based on
zero-intelligence agents may be useful to probe the structure of market
institutions. Like the model of perfect rationality, a stochastic
zero-intelligence model can be used to make strong predictions based on a compact
set of assumptions, even if these assumptions are not fully believable.
",laszlo gillemot,,2002.0,10.1088/1469-7688/3/6/307,arXiv,Smith2002,True,,arXiv,Not available,Statistical theory of the continuous double auction,9e918503af2b2e7667b1d414e4b16f62,http://arxiv.org/abs/cond-mat/0210475v1
16889," Most modern financial markets use a continuous double auction mechanism to
store and match orders and facilitate trading. In this paper we develop a
microscopic dynamical statistical model for the continuous double auction under
the assumption of IID random order flow, and analyze it using simulation,
dimensional analysis, and theoretical tools based on mean field approximations.
The model makes testable predictions for basic properties of markets, such as
price volatility, the depth of stored supply and demand vs. price, the bid-ask
spread, the price impact function, and the time and probability of filling
orders. These predictions are based on properties of order flow and the limit
order book, such as share volume of market and limit orders, cancellations,
typical order size, and tick size. Because these quantities can all be measured
directly there are no free parameters. We show that the order size, which can
be cast as a nondimensional granularity parameter, is in most cases a more
significant determinant of market behavior than tick size. We also provide an
explanation for the observed highly concave nature of the price impact
function. On a broader level, this work suggests how stochastic models based on
zero-intelligence agents may be useful to probe the structure of market
institutions. Like the model of perfect rationality, a stochastic
zero-intelligence model can be used to make strong predictions based on a compact
set of assumptions, even if these assumptions are not fully believable.
",supriya krishnamurthy,,2002.0,10.1088/1469-7688/3/6/307,arXiv,Smith2002,True,,arXiv,Not available,Statistical theory of the continuous double auction,9e918503af2b2e7667b1d414e4b16f62,http://arxiv.org/abs/cond-mat/0210475v1
16890," Subtraction games are a class of impartial combinatorial games; those
with finite subtraction sets are known to have periodic nim-sequences, so there
have been attempts to find the regularities of these games. Because of the
specifics of Sprague-Grundy theory this is difficult, and previous conclusions
were obtained only by simple observation. This paper uses the PTFN algorithm to
analyze the periods of subtraction games. It is more suitable than
Sprague-Grundy theory, and four conclusions are obtained with it. The algorithm
provides a new direction for studying the periods of subtraction games.
",zhihui qin,,2012.0,,arXiv,Qin2012,True,,arXiv,Not available,The Period of the subtraction games,93f19624269e50a6b184e0c05abf0964,http://arxiv.org/abs/1208.6134v1
16891," Subtraction games are a class of impartial combinatorial games; those
with finite subtraction sets are known to have periodic nim-sequences, so there
have been attempts to find the regularities of these games. Because of the
specifics of Sprague-Grundy theory this is difficult, and previous conclusions
were obtained only by simple observation. This paper uses the PTFN algorithm to
analyze the periods of subtraction games. It is more suitable than
Sprague-Grundy theory, and four conclusions are obtained with it. The algorithm
provides a new direction for studying the periods of subtraction games.
",guanglei he,,2012.0,,arXiv,Qin2012,True,,arXiv,Not available,The Period of the subtraction games,93f19624269e50a6b184e0c05abf0964,http://arxiv.org/abs/1208.6134v1
16892," The recent theory of sequential games and selection functions by Martin
Escardo and Paulo Oliva is extended to games in which players move
simultaneously. The Nash existence theorem for mixed-strategy equilibria of
finite games is generalised to games defined by selection functions. A normal
form construction is given which generalises the game-theoretic normal form,
and its soundness is proven. Minimax strategies also generalise to the new
class of games and are computed by the Berardi-Bezem-Coquand functional,
studied in proof theory as an interpretation of the axiom of countable choice.
",julian hedges,,2013.0,10.1098/rspa.2013.0041,arXiv,Hedges2013,True,,arXiv,Not available,A generalisation of Nash's theorem with higher-order functionals,25b84a6db239ca358c804ab45302a6f9,http://arxiv.org/abs/1301.4845v1
16893," This paper presents the ""Game Theory Explorer"" software tool to create and
analyze games as models of strategic interaction. A game in extensive or
strategic form is created and nicely displayed with a graphical user interface
in a web browser. State-of-the-art algorithms then compute all Nash equilibria
of the game after a mouse click. In tutorial fashion, we present how the program
is used, and the ideas behind its main algorithms. We report on experiences
with the architecture of the software and its development as an open-source
project.
",rahul savani,,2014.0,10.1007/s10287-014-0206-x,"Computational Management Science 12:1, 5-33 (2015)",Savani2014,True,,arXiv,Not available,Game Theory Explorer - Software for the Applied Game Theorist,e57fa9ab944f1ac8a633b302cab987cb,http://arxiv.org/abs/1403.3969v1
16894," This paper considers the problem of cooperative power control in distributed
small cell wireless networks. We introduce a novel framework, based on repeated
games, which models the interactions of the different transmit base stations in
the downlink. By exploiting the specific structure of the game, we show that we
can improve the system performance by selecting the Pareto optimal solution as
well as reduce the price of stability.
",samson lasaulce,,2010.0,,"Future Network and MobileSummit 2010, Italy (2010)",Treust2010,True,,arXiv,Not available,Coverage games in small cells networks,cd3dedd97ed8ced89ca14172805d2e51,http://arxiv.org/abs/1011.4366v1
16895," This paper presents the ""Game Theory Explorer"" software tool to create and
analyze games as models of strategic interaction. A game in extensive or
strategic form is created and nicely displayed with a graphical user interface
in a web browser. State-of-the-art algorithms then compute all Nash equilibria
of the game after a mouse click. In tutorial fashion, we present how the program
is used, and the ideas behind its main algorithms. We report on experiences
with the architecture of the software and its development as an open-source
project.
",bernhard stengel,,2014.0,10.1007/s10287-014-0206-x,"Computational Management Science 12:1, 5-33 (2015)",Savani2014,True,,arXiv,Not available,Game Theory Explorer - Software for the Applied Game Theorist,e57fa9ab944f1ac8a633b302cab987cb,http://arxiv.org/abs/1403.3969v1
16896," We introduce new theoretical insights into two-population asymmetric games
allowing for an elegant symmetric decomposition into two single population
symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B)
can be decomposed into its symmetric counterparts by envisioning and
investigating the payoff tables (A and B) that constitute the asymmetric game,
as two independent, single population, symmetric games. We reveal several
surprising formal relationships between an asymmetric two-population game and
its symmetric single population counterparts, which facilitate a convenient
analysis of the original asymmetric game due to the dimensionality reduction of
the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium
of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the
symmetric counterpart game determined by payoff table A, and x is a Nash
equilibrium of the symmetric counterpart game determined by payoff table B.
Also the reverse holds and combinations of Nash equilibria of the counterpart
games form Nash equilibria of the asymmetric game. We illustrate how these
formal relationships aid in identifying and analysing the Nash structure of
asymmetric games, by examining the evolutionary dynamics of the simpler
counterpart games in several canonical examples.
",karl tuyls,,2017.0,,arXiv,Tuyls2017,True,,arXiv,Not available,Symmetric Decomposition of Asymmetric Games,af3f694bb8f0bfd4920d1a29c82a7411,http://arxiv.org/abs/1711.05074v3
16897," We introduce new theoretical insights into two-population asymmetric games
allowing for an elegant symmetric decomposition into two single population
symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B)
can be decomposed into its symmetric counterparts by envisioning and
investigating the payoff tables (A and B) that constitute the asymmetric game,
as two independent, single population, symmetric games. We reveal several
surprising formal relationships between an asymmetric two-population game and
its symmetric single population counterparts, which facilitate a convenient
analysis of the original asymmetric game due to the dimensionality reduction of
the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium
of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the
symmetric counterpart game determined by payoff table A, and x is a Nash
equilibrium of the symmetric counterpart game determined by payoff table B.
Also the reverse holds and combinations of Nash equilibria of the counterpart
games form Nash equilibria of the asymmetric game. We illustrate how these
formal relationships aid in identifying and analysing the Nash structure of
asymmetric games, by examining the evolutionary dynamics of the simpler
counterpart games in several canonical examples.
",julien perolat,,2017.0,,arXiv,Tuyls2017,True,,arXiv,Not available,Symmetric Decomposition of Asymmetric Games,af3f694bb8f0bfd4920d1a29c82a7411,http://arxiv.org/abs/1711.05074v3
16898," We introduce new theoretical insights into two-population asymmetric games
allowing for an elegant symmetric decomposition into two single population
symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B)
can be decomposed into its symmetric counterparts by envisioning and
investigating the payoff tables (A and B) that constitute the asymmetric game,
as two independent, single population, symmetric games. We reveal several
surprising formal relationships between an asymmetric two-population game and
its symmetric single population counterparts, which facilitate a convenient
analysis of the original asymmetric game due to the dimensionality reduction of
the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium
of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the
symmetric counterpart game determined by payoff table A, and x is a Nash
equilibrium of the symmetric counterpart game determined by payoff table B.
Also the reverse holds and combinations of Nash equilibria of the counterpart
games form Nash equilibria of the asymmetric game. We illustrate how these
formal relationships aid in identifying and analysing the Nash structure of
asymmetric games, by examining the evolutionary dynamics of the simpler
counterpart games in several canonical examples.
",marc lanctot,,2017.0,,arXiv,Tuyls2017,True,,arXiv,Not available,Symmetric Decomposition of Asymmetric Games,af3f694bb8f0bfd4920d1a29c82a7411,http://arxiv.org/abs/1711.05074v3
16899," We introduce new theoretical insights into two-population asymmetric games
allowing for an elegant symmetric decomposition into two single population
symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B)
can be decomposed into its symmetric counterparts by envisioning and
investigating the payoff tables (A and B) that constitute the asymmetric game,
as two independent, single population, symmetric games. We reveal several
surprising formal relationships between an asymmetric two-population game and
its symmetric single population counterparts, which facilitate a convenient
analysis of the original asymmetric game due to the dimensionality reduction of
the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium
of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the
symmetric counterpart game determined by payoff table A, and x is a Nash
equilibrium of the symmetric counterpart game determined by payoff table B.
Also the reverse holds and combinations of Nash equilibria of the counterpart
games form Nash equilibria of the asymmetric game. We illustrate how these
formal relationships aid in identifying and analysing the Nash structure of
asymmetric games, by examining the evolutionary dynamics of the simpler
counterpart games in several canonical examples.
",georg ostrovski,,2017.0,,arXiv,Tuyls2017,True,,arXiv,Not available,Symmetric Decomposition of Asymmetric Games,af3f694bb8f0bfd4920d1a29c82a7411,http://arxiv.org/abs/1711.05074v3
16900," We introduce new theoretical insights into two-population asymmetric games
allowing for an elegant symmetric decomposition into two single population
symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B)
can be decomposed into its symmetric counterparts by envisioning and
investigating the payoff tables (A and B) that constitute the asymmetric game,
as two independent, single population, symmetric games. We reveal several
surprising formal relationships between an asymmetric two-population game and
its symmetric single population counterparts, which facilitate a convenient
analysis of the original asymmetric game due to the dimensionality reduction of
the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium
of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the
symmetric counterpart game determined by payoff table A, and x is a Nash
equilibrium of the symmetric counterpart game determined by payoff table B.
Also the reverse holds and combinations of Nash equilibria of the counterpart
games form Nash equilibria of the asymmetric game. We illustrate how these
formal relationships aid in identifying and analysing the Nash structure of
asymmetric games, by examining the evolutionary dynamics of the simpler
counterpart games in several canonical examples.
",rahul savani,,2017.0,,arXiv,Tuyls2017,True,,arXiv,Not available,Symmetric Decomposition of Asymmetric Games,af3f694bb8f0bfd4920d1a29c82a7411,http://arxiv.org/abs/1711.05074v3
16901," We introduce new theoretical insights into two-population asymmetric games
allowing for an elegant symmetric decomposition into two single population
symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B)
can be decomposed into its symmetric counterparts by envisioning and
investigating the payoff tables (A and B) that constitute the asymmetric game,
as two independent, single population, symmetric games. We reveal several
surprising formal relationships between an asymmetric two-population game and
its symmetric single population counterparts, which facilitate a convenient
analysis of the original asymmetric game due to the dimensionality reduction of
the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium
of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the
symmetric counterpart game determined by payoff table A, and x is a Nash
equilibrium of the symmetric counterpart game determined by payoff table B.
Also the reverse holds and combinations of Nash equilibria of the counterpart
games form Nash equilibria of the asymmetric game. We illustrate how these
formal relationships aid in identifying and analysing the Nash structure of
asymmetric games, by examining the evolutionary dynamics of the simpler
counterpart games in several canonical examples.
",joel leibo,,2017.0,,arXiv,Tuyls2017,True,,arXiv,Not available,Symmetric Decomposition of Asymmetric Games,af3f694bb8f0bfd4920d1a29c82a7411,http://arxiv.org/abs/1711.05074v3
16902," We introduce new theoretical insights into two-population asymmetric games
allowing for an elegant symmetric decomposition into two single population
symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B)
can be decomposed into its symmetric counterparts by envisioning and
investigating the payoff tables (A and B) that constitute the asymmetric game,
as two independent, single population, symmetric games. We reveal several
surprising formal relationships between an asymmetric two-population game and
its symmetric single population counterparts, which facilitate a convenient
analysis of the original asymmetric game due to the dimensionality reduction of
the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium
of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the
symmetric counterpart game determined by payoff table A, and x is a Nash
equilibrium of the symmetric counterpart game determined by payoff table B.
Also the reverse holds and combinations of Nash equilibria of the counterpart
games form Nash equilibria of the asymmetric game. We illustrate how these
formal relationships aid in identifying and analysing the Nash structure of
asymmetric games, by examining the evolutionary dynamics of the simpler
counterpart games in several canonical examples.
",toby ord,,2017.0,,arXiv,Tuyls2017,True,,arXiv,Not available,Symmetric Decomposition of Asymmetric Games,af3f694bb8f0bfd4920d1a29c82a7411,http://arxiv.org/abs/1711.05074v3
16903," We introduce new theoretical insights into two-population asymmetric games
allowing for an elegant symmetric decomposition into two single population
symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B)
can be decomposed into its symmetric counterparts by envisioning and
investigating the payoff tables (A and B) that constitute the asymmetric game,
as two independent, single population, symmetric games. We reveal several
surprising formal relationships between an asymmetric two-population game and
its symmetric single population counterparts, which facilitate a convenient
analysis of the original asymmetric game due to the dimensionality reduction of
the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium
of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the
symmetric counterpart game determined by payoff table A, and x is a Nash
equilibrium of the symmetric counterpart game determined by payoff table B.
Also the reverse holds and combinations of Nash equilibria of the counterpart
games form Nash equilibria of the asymmetric game. We illustrate how these
formal relationships aid in identifying and analysing the Nash structure of
asymmetric games, by examining the evolutionary dynamics of the simpler
counterpart games in several canonical examples.
",thore graepel,,2017.0,,arXiv,Tuyls2017,True,,arXiv,Not available,Symmetric Decomposition of Asymmetric Games,af3f694bb8f0bfd4920d1a29c82a7411,http://arxiv.org/abs/1711.05074v3
16904," We introduce new theoretical insights into two-population asymmetric games
allowing for an elegant symmetric decomposition into two single population
symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B)
can be decomposed into its symmetric counterparts by envisioning and
investigating the payoff tables (A and B) that constitute the asymmetric game,
as two independent, single population, symmetric games. We reveal several
surprising formal relationships between an asymmetric two-population game and
its symmetric single population counterparts, which facilitate a convenient
analysis of the original asymmetric game due to the dimensionality reduction of
the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium
of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the
symmetric counterpart game determined by payoff table A, and x is a Nash
equilibrium of the symmetric counterpart game determined by payoff table B.
Also the reverse holds and combinations of Nash equilibria of the counterpart
games form Nash equilibria of the asymmetric game. We illustrate how these
formal relationships aid in identifying and analysing the Nash structure of
asymmetric games, by examining the evolutionary dynamics of the simpler
counterpart games in several canonical examples.
",shane legg,,2017.0,,arXiv,Tuyls2017,True,,arXiv,Not available,Symmetric Decomposition of Asymmetric Games,af3f694bb8f0bfd4920d1a29c82a7411,http://arxiv.org/abs/1711.05074v3
16905," This paper considers the problem of cooperative power control in distributed
small cell wireless networks. We introduce a novel framework, based on repeated
games, which models the interactions of the different transmit base stations in
the downlink. By exploiting the specific structure of the game, we show that we
can improve the system performance by selecting the Pareto optimal solution as
well as reduce the price of stability.
",merouane debbah,,2010.0,,"Future Network and MobileSummit 2010, Italy (2010)",Treust2010,True,,arXiv,Not available,Coverage games in small cells networks,cd3dedd97ed8ced89ca14172805d2e51,http://arxiv.org/abs/1011.4366v1
16906," Noncooperative game theory provides a normative framework for analyzing
strategic interactions. However, for the toolbox to be operational, the
solutions it defines will have to be computed. In this paper, we provide a
single reduction that 1) demonstrates NP-hardness of determining whether Nash
equilibria with certain natural properties exist, and 2) demonstrates the
#P-hardness of counting Nash equilibria (or connected sets of Nash equilibria).
We also show that 3) determining whether a pure-strategy Bayes-Nash equilibrium
exists is NP-hard, and that 4) determining whether a pure-strategy Nash
equilibrium exists in a stochastic (Markov) game is PSPACE-hard even if the
game is invisible (this remains NP-hard if the game is finite). All of our
hardness results hold even if there are only two players and the game is
symmetric.
Keywords: Nash equilibrium; game theory; computational complexity;
noncooperative game theory; normal form game; stochastic game; Markov game;
Bayes-Nash equilibrium; multiagent systems.
",vincent conitzer,,2002.0,,"In Proceedings of the 18th International Joint Conference on
Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 2003",Conitzer2002,True,,arXiv,Not available,Complexity Results about Nash Equilibria,7804d423392e186b1d2d5cbf5c19710e,http://arxiv.org/abs/cs/0205074v1
16907," Noncooperative game theory provides a normative framework for analyzing
strategic interactions. However, for the toolbox to be operational, the
solutions it defines will have to be computed. In this paper, we provide a
single reduction that 1) demonstrates NP-hardness of determining whether Nash
equilibria with certain natural properties exist, and 2) demonstrates the
#P-hardness of counting Nash equilibria (or connected sets of Nash equilibria).
We also show that 3) determining whether a pure-strategy Bayes-Nash equilibrium
exists is NP-hard, and that 4) determining whether a pure-strategy Nash
equilibrium exists in a stochastic (Markov) game is PSPACE-hard even if the
game is invisible (this remains NP-hard if the game is finite). All of our
hardness results hold even if there are only two players and the game is
symmetric.
Keywords: Nash equilibrium; game theory; computational complexity;
noncooperative game theory; normal form game; stochastic game; Markov game;
Bayes-Nash equilibrium; multiagent systems.
",tuomas sandholm,,2002.0,,"In Proceedings of the 18th International Joint Conference on
Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 2003",Conitzer2002,True,,arXiv,Not available,Complexity Results about Nash Equilibria,7804d423392e186b1d2d5cbf5c19710e,http://arxiv.org/abs/cs/0205074v1
16908," The theory of combinatorial games (like board games) and the theory of social
games (where one looks for Nash equilibria) are normally considered two
separate theories. Here we shall see what comes out of combining the ideas. The
central idea is Conway's observation that real numbers can be interpreted as
special types of combinatorial games. Therefore the payoff function of a social
game is a combinatorial game. Probability theory should be considered as a
safety net that prevents inconsistent decisions via the Dutch Book Argument.
This result can be extended to situations where the payoff function is a more
general game than a real number. The main difference between number valued
payoff and game valued payoff is that a probability distribution that gives
non-negative mean payoff does not ensure that the game will not be lost, owing
to the existence of infinitesimal games. The Ramsey/de Finetti theorem on
exchangeable sequences is also discussed.
",peter harremoes,,2009.0,,arXiv,Harremoes2009,True,,arXiv,Not available,Dutch Books and Combinatorial Games,a78e53091b66301623ab1ecb4a83504f,http://arxiv.org/abs/0903.5429v2
16909," Blackwell games are infinite games of imperfect information. The two players
simultaneously make their moves, and are then informed of each other's moves.
Payoff is determined by a Borel measurable function $f$ on the set of possible
resulting sequences of moves. A standard result in Game Theory is that finite
games of this type are determined. Blackwell proved that infinite games are
determined, but only for the cases where the payoff function is the indicator
function of an open or $G_\delta$ set. For games of perfect information,
determinacy has been proven for games of arbitrary Borel complexity.
In this paper I prove the determinacy of Blackwell games over a
$G_{\delta\sigma}$ set, in a manner similar to Davis' proof of determinacy of
games of $G_{\delta\sigma}$ complexity of perfect information. There is also
extensive literature about the consequences of assuming AD, the axiom that
_all_ such games of perfect information are determined. In the final section of
this paper I formulate an analogous axiom for games of imperfect information,
and explore some of the consequences of this axiom.
",marco vervoort,,1996.0,,arXiv,Vervoort1996,True,,arXiv,Not available,Blackwell Games,ca81092110a12d56ae8bbbfe4acad267,http://arxiv.org/abs/math/9604208v1
16910," Structured game representations have recently attracted interest as models
for multi-agent artificial intelligence scenarios, with rational behavior most
commonly characterized by Nash equilibria. This paper presents efficient, exact
algorithms for computing Nash equilibria in structured game representations,
including both graphical games and multi-agent influence diagrams (MAIDs). The
algorithms are derived from a continuation method for normal-form and
extensive-form games due to Govindan and Wilson; they follow a trajectory
through a space of perturbed games and their equilibria, exploiting game
structure through fast computation of the Jacobian of the payoff function. They
are theoretically guaranteed to find at least one equilibrium of the game, and
may find more. Our approach provides the first efficient algorithm for
computing exact equilibria in graphical games with arbitrary topology, and the
first algorithm to exploit fine-grained structural properties of MAIDs.
Experimental results are presented demonstrating the effectiveness of the
algorithms and comparing them to predecessors. The running time of the
graphical game algorithm is similar to, and often better than, the running time
of previous approximate algorithms. The algorithm for MAIDs can effectively
solve games that are much larger than those solvable by previous methods.
",b. blum,,2011.0,10.1613/jair.1947,"Journal Of Artificial Intelligence Research, Volume 25, pages
457-502, 2006",Blum2011,True,,arXiv,Not available,A Continuation Method for Nash Equilibria in Structured Games,2457328a4863a6e775c11c95f2a01cfa,http://arxiv.org/abs/1110.5886v1
16911," Structured game representations have recently attracted interest as models
for multi-agent artificial intelligence scenarios, with rational behavior most
commonly characterized by Nash equilibria. This paper presents efficient, exact
algorithms for computing Nash equilibria in structured game representations,
including both graphical games and multi-agent influence diagrams (MAIDs). The
algorithms are derived from a continuation method for normal-form and
extensive-form games due to Govindan and Wilson; they follow a trajectory
through a space of perturbed games and their equilibria, exploiting game
structure through fast computation of the Jacobian of the payoff function. They
are theoretically guaranteed to find at least one equilibrium of the game, and
may find more. Our approach provides the first efficient algorithm for
computing exact equilibria in graphical games with arbitrary topology, and the
first algorithm to exploit fine-grained structural properties of MAIDs.
Experimental results are presented demonstrating the effectiveness of the
algorithms and comparing them to predecessors. The running time of the
graphical game algorithm is similar to, and often better than, the running time
of previous approximate algorithms. The algorithm for MAIDs can effectively
solve games that are much larger than those solvable by previous methods.
",d. koller,,2011.0,10.1613/jair.1947,"Journal Of Artificial Intelligence Research, Volume 25, pages
457-502, 2006",Blum2011,True,,arXiv,Not available,A Continuation Method for Nash Equilibria in Structured Games,2457328a4863a6e775c11c95f2a01cfa,http://arxiv.org/abs/1110.5886v1
16912," Structured game representations have recently attracted interest as models
for multi-agent artificial intelligence scenarios, with rational behavior most
commonly characterized by Nash equilibria. This paper presents efficient, exact
algorithms for computing Nash equilibria in structured game representations,
including both graphical games and multi-agent influence diagrams (MAIDs). The
algorithms are derived from a continuation method for normal-form and
extensive-form games due to Govindan and Wilson; they follow a trajectory
through a space of perturbed games and their equilibria, exploiting game
structure through fast computation of the Jacobian of the payoff function. They
are theoretically guaranteed to find at least one equilibrium of the game, and
may find more. Our approach provides the first efficient algorithm for
computing exact equilibria in graphical games with arbitrary topology, and the
first algorithm to exploit fine-grained structural properties of MAIDs.
Experimental results are presented demonstrating the effectiveness of the
algorithms and comparing them to predecessors. The running time of the
graphical game algorithm is similar to, and often better than, the running time
of previous approximate algorithms. The algorithm for MAIDs can effectively
solve games that are much larger than those solvable by previous methods.
",c. shelton,,2011.0,10.1613/jair.1947,"Journal Of Artificial Intelligence Research, Volume 25, pages
457-502, 2006",Blum2011,True,,arXiv,Not available,A Continuation Method for Nash Equilibria in Structured Games,2457328a4863a6e775c11c95f2a01cfa,http://arxiv.org/abs/1110.5886v1
16913," In some games, additional information hurts a player, e.g., in games with
first-mover advantage, the second-mover is hurt by seeing the first-mover's
move. What properties of a game determine whether it has such negative ""value
of information"" for a particular player? Can a game have negative value of
information for all players? To answer such questions, we generalize the
definition of marginal utility of a good to define the marginal utility of a
parameter vector specifying a game. So rather than analyze the global structure
of the relationship between a game's parameter vector and player behavior, as
in previous work, we focus on the local structure of that relationship. This
allows us to prove that generically, every game can have negative marginal
value of information, unless one imposes a priori constraints on allowed
changes to the game's parameter vector. We demonstrate these and related
results numerically, and discuss their implications.
",nils bertschinger,,2013.0,,arXiv,Bertschinger2013,True,,arXiv,Not available,Value of information in noncooperative games,8177c13c2b4fa5284dfe614bf76cb73b,http://arxiv.org/abs/1401.0001v3
16914," In some games, additional information hurts a player, e.g., in games with
first-mover advantage, the second-mover is hurt by seeing the first-mover's
move. What properties of a game determine whether it has such negative ""value
of information"" for a particular player? Can a game have negative value of
information for all players? To answer such questions, we generalize the
definition of marginal utility of a good to define the marginal utility of a
parameter vector specifying a game. So rather than analyze the global structure
of the relationship between a game's parameter vector and player behavior, as
in previous work, we focus on the local structure of that relationship. This
allows us to prove that generically, every game can have negative marginal
value of information, unless one imposes a priori constraints on allowed
changes to the game's parameter vector. We demonstrate these and related
results numerically, and discuss their implications.
",david wolpert,,2013.0,,arXiv,Bertschinger2013,True,,arXiv,Not available,Value of information in noncooperative games,8177c13c2b4fa5284dfe614bf76cb73b,http://arxiv.org/abs/1401.0001v3
16915," In some games, additional information hurts a player, e.g., in games with
first-mover advantage, the second-mover is hurt by seeing the first-mover's
move. What properties of a game determine whether it has such negative ""value
of information"" for a particular player? Can a game have negative value of
information for all players? To answer such questions, we generalize the
definition of marginal utility of a good to define the marginal utility of a
parameter vector specifying a game. So rather than analyze the global structure
of the relationship between a game's parameter vector and player behavior, as
in previous work, we focus on the local structure of that relationship. This
allows us to prove that generically, every game can have negative marginal
value of information, unless one imposes a priori constraints on allowed
changes to the game's parameter vector. We demonstrate these and related
results numerically, and discuss their implications.
",eckehard olbrich,,2013.0,,arXiv,Bertschinger2013,True,,arXiv,Not available,Value of information in noncooperative games,8177c13c2b4fa5284dfe614bf76cb73b,http://arxiv.org/abs/1401.0001v3
16916," This paper endows the classical prisoner's dilemma with the character and
structure of the quantum prisoner's dilemma's strategy space. In association
with the Dirac spinor field, we apply the basic quantum game strategy to the
translation of the dynamics of the Dirac equation. Decomposing real space and
time into a lattice, we find that the basic interaction of the spinor can be
translated into quantum game theory. At the same time, we obtain the new
dynamics of the quantized spatial evolutionary game.
",haizhao zhi,,2011.0,,arXiv,Zhi2011,True,,arXiv,Not available,Quantum game interpretation of Dirac spinor field,b78ad2a8e9e6b7188a3c91f7c772b99b,http://arxiv.org/abs/1104.2841v2
16917," In some games, additional information hurts a player, e.g., in games with
first-mover advantage, the second-mover is hurt by seeing the first-mover's
move. What properties of a game determine whether it has such negative ""value
of information"" for a particular player? Can a game have negative value of
information for all players? To answer such questions, we generalize the
definition of marginal utility of a good to define the marginal utility of a
parameter vector specifying a game. So rather than analyze the global structure
of the relationship between a game's parameter vector and player behavior, as
in previous work, we focus on the local structure of that relationship. This
allows us to prove that generically, every game can have negative marginal
value of information, unless one imposes a priori constraints on allowed
changes to the game's parameter vector. We demonstrate these and related
results numerically, and discuss their implications.
",juergen jost,,2013.0,,arXiv,Bertschinger2013,True,,arXiv,Not available,Value of information in noncooperative games,8177c13c2b4fa5284dfe614bf76cb73b,http://arxiv.org/abs/1401.0001v3
16918," Games have always been a popular test bed for artificial intelligence
techniques. Game developers are in constant search of techniques that can
automatically create computer games, minimizing the developer's task. In
this work we present an evolutionary strategy based solution towards the
automatic generation of two player board games. To guide the evolutionary
process towards games, which are entertaining, we propose a set of metrics.
These metrics are based upon different theories of entertainment in computer
games. This work also compares the entertainment value of the evolved games
with existing popular board-based games. Further, to validate the
entertainment value of the evolved games against that perceived by human
users, a human user survey is conducted. In addition to the user survey, we
check the learnability of the evolved games using an artificial neural network
based controller. The proposed metrics and the evolutionary process can be
employed for generating new and entertaining board games, provided an initial
search space is given to the evolutionary algorithm.
",zahid halim,,2014.0,10.1142/S0218213013500280,arXiv,Halim2014,True,,arXiv,Not available,"Evolutionary Search in the Space of Rules for Creation of New Two-Player
Board Games",a86d6f891acdcfa2ce0ef799c5ae17a3,http://arxiv.org/abs/1406.0175v1
16919," We propose a new all-pay auction format in which risk-loving bidders pay a
constant fee each time they bid for an object whose monetary value is common
knowledge among the bidders, and bidding fees are the only source of benefit
for the seller. We show that for the proposed model there exists a {unique}
Symmetric Subgame Perfect Equilibrium (SSPE). The characterized SSPE is
stationary when re-entry in the auction is allowed, and it is Markov perfect
when re-entry is forbidden. Furthermore, we fully characterize the expected
revenue of the seller. Generally, with or without re-entry, it is more
beneficial for the seller to choose $v$ (value of the object), $s$ (sale
price), and $c$ (bidding fee) such that $\frac{v-s}{c}$ becomes sufficiently
large. In particular, when re-entry is permitted: the expected revenue of the
seller is \emph{independent} of the number of bidders, decreasing in the sale
price, increasing in the value of the object, and decreasing in the bidding
fee. Moreover, the seller's revenue is equal to the value of the object when
players are risk neutral, and it is strictly greater than the value of the
object when bidders are risk-loving. We further show that allowing re-entry can
be important in practice: if the seller were to run such an auction
without allowing re-entry, the auction would last a long time and, for almost
all of its duration, have only two remaining players. Thus, the seller's revenue
relies on those two players being willing to participate, without any breaks,
in an auction that might last for thousands of rounds.
",ali kakhbod,,2011.0,,arXiv,Kakhbod2011,True,,arXiv,Not available,Resource allocation with costly participation,572a491e2875b8bb2a1246cdc3ba2dae,http://arxiv.org/abs/1108.2018v6
16920," Complements between goods - where one good takes on added value in the
presence of another - have been a thorn in the side of algorithmic mechanism
designers. On the one hand, complements are common in the standard motivating
applications for combinatorial auctions, like spectrum license auctions. On the
other, welfare maximization in the presence of complements is notoriously
difficult, and this intractability has stymied theoretical progress in the
area. For example, there are no known positive results for combinatorial
auctions in which bidder valuations are multi-parameter and
non-complement-free, other than the relatively weak results known for general
valuations.
To make inroads on the problem of combinatorial auction design in the
presence of complements, we propose a model for valuations with complements
that is parameterized by the ""size"" of the complements. A valuation in our
model is represented succinctly by a weighted hypergraph, where the size of the
hyper-edges corresponds to the degree of complementarity. Our model permits a
variety of computationally efficient queries, and non-trivial
welfare-maximization algorithms and mechanisms.
We design the following polynomial-time approximation algorithms and truthful
mechanisms for welfare maximization with bidders with hypergraph valuations.
1- For bidders whose valuations correspond to subgraphs of a known graph that
is planar (or more generally, excludes a fixed minor), we give a truthful and
(1+epsilon)-approximate mechanism.
2- We give a polynomial-time, r-approximation algorithm for welfare
maximization with hypergraph-r valuations. Our algorithm randomly rounds a
compact linear programming relaxation of the problem.
3- We design a different approximation algorithm and use it to give a
polynomial-time, truthful-in-expectation mechanism that has an approximation
factor of O(log^r m).
",ittai abraham,,2012.0,,arXiv,Abraham2012,True,,arXiv,Not available,Combinatorial Auctions with Restricted Complements,685501ee4d224548ebf2ed26de3e6c2a,http://arxiv.org/abs/1205.4104v1
16921," Complements between goods - where one good takes on added value in the
presence of another - have been a thorn in the side of algorithmic mechanism
designers. On the one hand, complements are common in the standard motivating
applications for combinatorial auctions, like spectrum license auctions. On the
other, welfare maximization in the presence of complements is notoriously
difficult, and this intractability has stymied theoretical progress in the
area. For example, there are no known positive results for combinatorial
auctions in which bidder valuations are multi-parameter and
non-complement-free, other than the relatively weak results known for general
valuations.
To make inroads on the problem of combinatorial auction design in the
presence of complements, we propose a model for valuations with complements
that is parameterized by the ""size"" of the complements. A valuation in our
model is represented succinctly by a weighted hypergraph, where the size of the
hyper-edges corresponds to the degree of complementarity. Our model permits a
variety of computationally efficient queries, and non-trivial
welfare-maximization algorithms and mechanisms.
We design the following polynomial-time approximation algorithms and truthful
mechanisms for welfare maximization with bidders with hypergraph valuations.
1- For bidders whose valuations correspond to subgraphs of a known graph that
is planar (or more generally, excludes a fixed minor), we give a truthful and
(1+epsilon)-approximate mechanism.
2- We give a polynomial-time, r-approximation algorithm for welfare
maximization with hypergraph-r valuations. Our algorithm randomly rounds a
compact linear programming relaxation of the problem.
3- We design a different approximation algorithm and use it to give a
polynomial-time, truthful-in-expectation mechanism that has an approximation
factor of O(log^r m).
",moshe babaioff,,2012.0,,arXiv,Abraham2012,True,,arXiv,Not available,Combinatorial Auctions with Restricted Complements,685501ee4d224548ebf2ed26de3e6c2a,http://arxiv.org/abs/1205.4104v1
16922," Complements between goods - where one good takes on added value in the
presence of another - have been a thorn in the side of algorithmic mechanism
designers. On the one hand, complements are common in the standard motivating
applications for combinatorial auctions, like spectrum license auctions. On the
other, welfare maximization in the presence of complements is notoriously
difficult, and this intractability has stymied theoretical progress in the
area. For example, there are no known positive results for combinatorial
auctions in which bidder valuations are multi-parameter and
non-complement-free, other than the relatively weak results known for general
valuations.
To make inroads on the problem of combinatorial auction design in the
presence of complements, we propose a model for valuations with complements
that is parameterized by the ""size"" of the complements. A valuation in our
model is represented succinctly by a weighted hypergraph, where the size of the
hyper-edges corresponds to the degree of complementarity. Our model permits a
variety of computationally efficient queries, and non-trivial
welfare-maximization algorithms and mechanisms.
We design the following polynomial-time approximation algorithms and truthful
mechanisms for welfare maximization with bidders with hypergraph valuations.
1- For bidders whose valuations correspond to subgraphs of a known graph that
is planar (or more generally, excludes a fixed minor), we give a truthful and
(1+epsilon)-approximate mechanism.
2- We give a polynomial-time, r-approximation algorithm for welfare
maximization with hypergraph-r valuations. Our algorithm randomly rounds a
compact linear programming relaxation of the problem.
3- We design a different approximation algorithm and use it to give a
polynomial-time, truthful-in-expectation mechanism that has an approximation
factor of O(log^r m).
",shaddin dughmi,,2012.0,,arXiv,Abraham2012,True,,arXiv,Not available,Combinatorial Auctions with Restricted Complements,685501ee4d224548ebf2ed26de3e6c2a,http://arxiv.org/abs/1205.4104v1
16923," Complements between goods - where one good takes on added value in the
presence of another - have been a thorn in the side of algorithmic mechanism
designers. On the one hand, complements are common in the standard motivating
applications for combinatorial auctions, like spectrum license auctions. On the
other, welfare maximization in the presence of complements is notoriously
difficult, and this intractability has stymied theoretical progress in the
area. For example, there are no known positive results for combinatorial
auctions in which bidder valuations are multi-parameter and
non-complement-free, other than the relatively weak results known for general
valuations.
To make inroads on the problem of combinatorial auction design in the
presence of complements, we propose a model for valuations with complements
that is parameterized by the ""size"" of the complements. A valuation in our
model is represented succinctly by a weighted hypergraph, where the size of the
hyper-edges corresponds to the degree of complementarity. Our model permits a
variety of computationally efficient queries, and non-trivial
welfare-maximization algorithms and mechanisms.
We design the following polynomial-time approximation algorithms and truthful
mechanisms for welfare maximization with bidders with hypergraph valuations.
1- For bidders whose valuations correspond to subgraphs of a known graph that
is planar (or more generally, excludes a fixed minor), we give a truthful and
(1+epsilon)-approximate mechanism.
2- We give a polynomial-time, r-approximation algorithm for welfare
maximization with hypergraph-r valuations. Our algorithm randomly rounds a
compact linear programming relaxation of the problem.
3- We design a different approximation algorithm and use it to give a
polynomial-time, truthful-in-expectation mechanism that has an approximation
factor of O(log^r m).
",tim roughgarden,,2012.0,,arXiv,Abraham2012,True,,arXiv,Not available,Combinatorial Auctions with Restricted Complements,685501ee4d224548ebf2ed26de3e6c2a,http://arxiv.org/abs/1205.4104v1
16924," We consider a multi-round auction setting motivated by pay-per-click auctions
for Internet advertising. In each round the auctioneer selects an advertiser
and shows her ad, which is then either clicked or not. An advertiser derives
value from clicks; the value of a click is her private information. Initially,
neither the auctioneer nor the advertisers have any information about the
likelihood of clicks on the advertisements. The auctioneer's goal is to design
a (dominant strategies) truthful mechanism that (approximately) maximizes the
social welfare.
If the advertisers bid their true private values, our problem is equivalent
to the ""multi-armed bandit problem"", and thus can be viewed as a strategic
version of the latter. In particular, for both problems the quality of an
algorithm can be characterized by ""regret"", the difference in social welfare
between the algorithm and the benchmark which always selects the same ""best""
advertisement. We investigate how the design of multi-armed bandit algorithms
is affected by the restriction that the resulting mechanism must be truthful.
We find that truthful mechanisms have certain strong structural properties --
essentially, they must separate exploration from exploitation -- and they incur
much higher regret than the optimal multi-armed bandit algorithms. Moreover, we
provide a truthful mechanism which (essentially) matches our lower bound on
regret.
",moshe babaioff,,2008.0,,arXiv,Babaioff2008,True,,arXiv,Not available,Characterizing Truthful Multi-Armed Bandit Mechanisms,84a5aada4f4f32e605aaafec04988254,http://arxiv.org/abs/0812.2291v7
16925," We consider a multi-round auction setting motivated by pay-per-click auctions
for Internet advertising. In each round the auctioneer selects an advertiser
and shows her ad, which is then either clicked or not. An advertiser derives
value from clicks; the value of a click is her private information. Initially,
neither the auctioneer nor the advertisers have any information about the
likelihood of clicks on the advertisements. The auctioneer's goal is to design
a (dominant strategies) truthful mechanism that (approximately) maximizes the
social welfare.
If the advertisers bid their true private values, our problem is equivalent
to the ""multi-armed bandit problem"", and thus can be viewed as a strategic
version of the latter. In particular, for both problems the quality of an
algorithm can be characterized by ""regret"", the difference in social welfare
between the algorithm and the benchmark which always selects the same ""best""
advertisement. We investigate how the design of multi-armed bandit algorithms
is affected by the restriction that the resulting mechanism must be truthful.
We find that truthful mechanisms have certain strong structural properties --
essentially, they must separate exploration from exploitation -- and they incur
much higher regret than the optimal multi-armed bandit algorithms. Moreover, we
provide a truthful mechanism which (essentially) matches our lower bound on
regret.
",yogeshwer sharma,,2008.0,,arXiv,Babaioff2008,True,,arXiv,Not available,Characterizing Truthful Multi-Armed Bandit Mechanisms,84a5aada4f4f32e605aaafec04988254,http://arxiv.org/abs/0812.2291v7
16926," We consider a multi-round auction setting motivated by pay-per-click auctions
for Internet advertising. In each round the auctioneer selects an advertiser
and shows her ad, which is then either clicked or not. An advertiser derives
value from clicks; the value of a click is her private information. Initially,
neither the auctioneer nor the advertisers have any information about the
likelihood of clicks on the advertisements. The auctioneer's goal is to design
a (dominant strategies) truthful mechanism that (approximately) maximizes the
social welfare.
If the advertisers bid their true private values, our problem is equivalent
to the ""multi-armed bandit problem"", and thus can be viewed as a strategic
version of the latter. In particular, for both problems the quality of an
algorithm can be characterized by ""regret"", the difference in social welfare
between the algorithm and the benchmark which always selects the same ""best""
advertisement. We investigate how the design of multi-armed bandit algorithms
is affected by the restriction that the resulting mechanism must be truthful.
We find that truthful mechanisms have certain strong structural properties --
essentially, they must separate exploration from exploitation -- and they incur
much higher regret than the optimal multi-armed bandit algorithms. Moreover, we
provide a truthful mechanism which (essentially) matches our lower bound on
regret.
",aleksandrs slivkins,,2008.0,,arXiv,Babaioff2008,True,,arXiv,Not available,Characterizing Truthful Multi-Armed Bandit Mechanisms,84a5aada4f4f32e605aaafec04988254,http://arxiv.org/abs/0812.2291v7
16927," We propose a simple model of network co-evolution in a game-dynamical system
of interacting agents that play repeated games with their neighbors, and adapt
their behaviors and network links based on the outcome of those games. The
adaptation is achieved through a simple reinforcement learning scheme. We show
that the collective evolution of such a system can be described by
appropriately defined replicator dynamics equations. In particular, we suggest
an appropriate factorization of the agents' strategies that results in a
coupled system of equations characterizing the evolution of both strategies and
network structure, and illustrate the framework on two simple examples.
",aram galstyan,,2011.0,,arXiv,Galstyan2011,True,,arXiv,Not available,Replicator Dynamics of Co-Evolving Networks,15c60630db34ddd141e8db04b37518a9,http://arxiv.org/abs/1107.5354v1
16928," The tremendous increase in mobile data traffic coupled with fierce
competition in the wireless industry brings about spectrum scarcity and bandwidth
fragmentation. This inevitably results in asymmetric-valued LTE spectrum
allocation that stems from different timing for twice improvement in capacity
between competing operators, given spectrum allocations today. This motivates
us to study the economic effects of asymmetric-valued LTE spectrum allocation.
In this paper, we formulate the interactions between operators and users as a
hierarchical dynamic game framework, where two spiteful operators
simultaneously make spectrum acquisition decisions in the upper-level
first-price sealed-bid auction game, and dynamic pricing decisions in the
lower-level differential game, taking into account user subscription dynamics.
Using backward induction, we derive the equilibrium of the entire game under
mild conditions. Through analytical and numerical results, we verify our
studies by comparing the latest result of LTE spectrum auction in South Korea,
which serves as the benchmark of asymmetric-valued LTE spectrum auction
designs.
",sang jung,,2015.0,,arXiv,Jung2015,True,,arXiv,Not available,"Bidding, Pricing, and User Subscription Dynamics in Asymmetric-valued
Korean LTE Spectrum Auction: A Hierarchical Dynamic Game Approach",9c965bb8137a65191576a01422f63664,http://arxiv.org/abs/1507.00379v1
16929," The tremendous increase in mobile data traffic coupled with fierce
competition in the wireless industry brings about spectrum scarcity and bandwidth
fragmentation. This inevitably results in asymmetric-valued LTE spectrum
allocation that stems from different timing for twice improvement in capacity
between competing operators, given spectrum allocations today. This motivates
us to study the economic effects of asymmetric-valued LTE spectrum allocation.
In this paper, we formulate the interactions between operators and users as a
hierarchical dynamic game framework, where two spiteful operators
simultaneously make spectrum acquisition decisions in the upper-level
first-price sealed-bid auction game, and dynamic pricing decisions in the
lower-level differential game, taking into account user subscription dynamics.
Using backward induction, we derive the equilibrium of the entire game under
mild conditions. Through analytical and numerical results, we verify our
studies by comparing the latest result of LTE spectrum auction in South Korea,
which serves as the benchmark of asymmetric-valued LTE spectrum auction
designs.
",seong-lyun kim,,2015.0,,arXiv,Jung2015,True,,arXiv,Not available,"Bidding, Pricing, and User Subscription Dynamics in Asymmetric-valued
Korean LTE Spectrum Auction: A Hierarchical Dynamic Game Approach",9c965bb8137a65191576a01422f63664,http://arxiv.org/abs/1507.00379v1
16930," We propose a game-theoretic framework that incorporates both incomplete
information and general ambiguity attitudes on factors external to all players.
Our starting point is players' preferences on payoff-distribution vectors,
essentially mappings from states of the world to distributions of payoffs to be
received by players. There are two ways in which equilibria for this preference
game can be defined. When the preferences possess ever more features, we can
gradually add ever more structures to the game. These include real-valued
utility-like functions over payoff-distribution vectors, sets of probabilistic
priors over states of the world, and eventually the traditional
expected-utility framework involving one single prior. We establish equilibrium
existence results, show the upper hemi-continuity of equilibrium sets over
changing ambiguity attitudes, and uncover relations between the two versions of
equilibria. Some attention is paid to the enterprising game, in which players
exhibit ambiguity seeking attitudes while betting optimistically on the
favorable resolution of ambiguities. The two solution concepts are unified at
this game's pure equilibria, whose existence is guaranteed when strategic
complementarities are present. The current framework can be applied to settings
like auctions involving ambiguity on competitors' assessments of item worths.
",jian yang,,2015.0,,arXiv,Yang2015,True,,arXiv,Not available,Game-theoretic Modeling of Players' Ambiguities on External Factors,dc4fca20490490858f9d43ab035a57c8,http://arxiv.org/abs/1510.06812v4
16931," We design an expected polynomial-time, truthful-in-expectation,
(1-1/e)-approximation mechanism for welfare maximization in a fundamental class
of combinatorial auctions. Our results apply to bidders with valuations that
are m matroid rank sums (MRS), which encompass most concrete examples of
submodular functions studied in this context, including coverage functions,
matroid weighted-rank functions, and convex combinations thereof. Our
approximation factor is the best possible, even for known and explicitly given
coverage valuations, assuming P != NP. Ours is the first
truthful-in-expectation and polynomial-time mechanism to achieve a
constant-factor approximation for an NP-hard welfare maximization problem in
combinatorial auctions with heterogeneous goods and restricted valuations.
Our mechanism is an instantiation of a new framework for designing
approximation mechanisms based on randomized rounding algorithms. A typical
such algorithm first optimizes over a fractional relaxation of the original
problem, and then randomly rounds the fractional solution to an integral one.
With rare exceptions, such algorithms cannot be converted into truthful
mechanisms. The high-level idea of our mechanism design framework is to
optimize directly over the (random) output of the rounding algorithm, rather
than over the input to the rounding algorithm. This approach leads to
truthful-in-expectation mechanisms, and these mechanisms can be implemented
efficiently when the corresponding objective function is concave. For bidders
with MRS valuations, we give a novel randomized rounding algorithm that leads
to both a concave objective function and a (1-1/e)-approximation of the optimal
welfare.
",shaddin dughmi,,2011.0,,arXiv,Dughmi2011,True,,arXiv,Not available,"From Convex Optimization to Randomized Mechanisms: Toward Optimal
Combinatorial Auctions",9840843cf5682a74c2ecb5a6cf14de52,http://arxiv.org/abs/1103.0040v3
16932," We design an expected polynomial-time, truthful-in-expectation,
(1-1/e)-approximation mechanism for welfare maximization in a fundamental class
of combinatorial auctions. Our results apply to bidders with valuations that
are m matroid rank sums (MRS), which encompass most concrete examples of
submodular functions studied in this context, including coverage functions,
matroid weighted-rank functions, and convex combinations thereof. Our
approximation factor is the best possible, even for known and explicitly given
coverage valuations, assuming P != NP. Ours is the first
truthful-in-expectation and polynomial-time mechanism to achieve a
constant-factor approximation for an NP-hard welfare maximization problem in
combinatorial auctions with heterogeneous goods and restricted valuations.
Our mechanism is an instantiation of a new framework for designing
approximation mechanisms based on randomized rounding algorithms. A typical
such algorithm first optimizes over a fractional relaxation of the original
problem, and then randomly rounds the fractional solution to an integral one.
With rare exceptions, such algorithms cannot be converted into truthful
mechanisms. The high-level idea of our mechanism design framework is to
optimize directly over the (random) output of the rounding algorithm, rather
than over the input to the rounding algorithm. This approach leads to
truthful-in-expectation mechanisms, and these mechanisms can be implemented
efficiently when the corresponding objective function is concave. For bidders
with MRS valuations, we give a novel randomized rounding algorithm that leads
to both a concave objective function and a (1-1/e)-approximation of the optimal
welfare.
",tim roughgarden,,2011.0,,arXiv,Dughmi2011,True,,arXiv,Not available,"From Convex Optimization to Randomized Mechanisms: Toward Optimal
Combinatorial Auctions",9840843cf5682a74c2ecb5a6cf14de52,http://arxiv.org/abs/1103.0040v3
16933," We design an expected polynomial-time, truthful-in-expectation,
(1-1/e)-approximation mechanism for welfare maximization in a fundamental class
of combinatorial auctions. Our results apply to bidders with valuations that
are m matroid rank sums (MRS), which encompass most concrete examples of
submodular functions studied in this context, including coverage functions,
matroid weighted-rank functions, and convex combinations thereof. Our
approximation factor is the best possible, even for known and explicitly given
coverage valuations, assuming P != NP. Ours is the first
truthful-in-expectation and polynomial-time mechanism to achieve a
constant-factor approximation for an NP-hard welfare maximization problem in
combinatorial auctions with heterogeneous goods and restricted valuations.
Our mechanism is an instantiation of a new framework for designing
approximation mechanisms based on randomized rounding algorithms. A typical
such algorithm first optimizes over a fractional relaxation of the original
problem, and then randomly rounds the fractional solution to an integral one.
With rare exceptions, such algorithms cannot be converted into truthful
mechanisms. The high-level idea of our mechanism design framework is to
optimize directly over the (random) output of the rounding algorithm, rather
than over the input to the rounding algorithm. This approach leads to
truthful-in-expectation mechanisms, and these mechanisms can be implemented
efficiently when the corresponding objective function is concave. For bidders
with MRS valuations, we give a novel randomized rounding algorithm that leads
to both a concave objective function and a (1-1/e)-approximation of the optimal
welfare.
",qiqi yan,,2011.0,,arXiv,Dughmi2011,True,,arXiv,Not available,"From Convex Optimization to Randomized Mechanisms: Toward Optimal
Combinatorial Auctions",9840843cf5682a74c2ecb5a6cf14de52,http://arxiv.org/abs/1103.0040v3
16934," The secretary and the prophet inequality problems are central to the field of
Stopping Theory. Recently, there has been a lot of work in generalizing these
models to multiple items because of their applications in mechanism design. The
most important of these generalizations are to matroids and to combinatorial
auctions (which extend bipartite matching). Kleinberg-Weinberg \cite{KW-STOC12} and
Feldman et al. \cite{feldman2015combinatorial} show that for adversarial
arrival order of random variables the optimal prophet inequalities give a
$1/2$-approximation. For many settings, however, it's conceivable that the
arrival order is chosen uniformly at random, akin to the secretary problem. For
such a random arrival model, we improve upon the $1/2$-approximation and obtain
$(1-1/e)$-approximation prophet inequalities for both matroids and
combinatorial auctions. This also gives improvements to the results of Yan
\cite{yan2011mechanism} and Esfandiari et al. \cite{esfandiari2015prophet} who
worked in the special cases where we can fully control the arrival order or
when there is only a single item.
Our techniques are threshold based. We convert our discrete problem into a
continuous setting and then give a generic template on how to dynamically
adjust these thresholds to lower bound the expected total welfare.
",soheil ehsani,,2017.0,,arXiv,Ehsani2017,True,,arXiv,Not available,Prophet Secretary for Combinatorial Auctions and Matroids,d60bc50e728fa2e130d90036f213e0b2,http://arxiv.org/abs/1710.11213v2
16935," The secretary and the prophet inequality problems are central to the field of
Stopping Theory. Recently, there has been a lot of work in generalizing these
models to multiple items because of their applications in mechanism design. The
most important of these generalizations are to matroids and to combinatorial
auctions (which extend bipartite matching). Kleinberg-Weinberg \cite{KW-STOC12} and
Feldman et al. \cite{feldman2015combinatorial} show that for adversarial
arrival order of random variables the optimal prophet inequalities give a
$1/2$-approximation. For many settings, however, it's conceivable that the
arrival order is chosen uniformly at random, akin to the secretary problem. For
such a random arrival model, we improve upon the $1/2$-approximation and obtain
$(1-1/e)$-approximation prophet inequalities for both matroids and
combinatorial auctions. This also gives improvements to the results of Yan
\cite{yan2011mechanism} and Esfandiari et al. \cite{esfandiari2015prophet} who
worked in the special cases where we can fully control the arrival order or
when there is only a single item.
Our techniques are threshold based. We convert our discrete problem into a
continuous setting and then give a generic template on how to dynamically
adjust these thresholds to lower bound the expected total welfare.
",mohammadtaghi hajiaghayi,,2017.0,,arXiv,Ehsani2017,True,,arXiv,Not available,Prophet Secretary for Combinatorial Auctions and Matroids,d60bc50e728fa2e130d90036f213e0b2,http://arxiv.org/abs/1710.11213v2
16936," The secretary and the prophet inequality problems are central to the field of
Stopping Theory. Recently, there has been a lot of work in generalizing these
models to multiple items because of their applications in mechanism design. The
most important of these generalizations are to matroids and to combinatorial
auctions (which extend bipartite matching). Kleinberg-Weinberg \cite{KW-STOC12} and
Feldman et al. \cite{feldman2015combinatorial} show that for adversarial
arrival order of random variables the optimal prophet inequalities give a
$1/2$-approximation. For many settings, however, it's conceivable that the
arrival order is chosen uniformly at random, akin to the secretary problem. For
such a random arrival model, we improve upon the $1/2$-approximation and obtain
$(1-1/e)$-approximation prophet inequalities for both matroids and
combinatorial auctions. This also gives improvements to the results of Yan
\cite{yan2011mechanism} and Esfandiari et al. \cite{esfandiari2015prophet} who
worked in the special cases where we can fully control the arrival order or
when there is only a single item.
Our techniques are threshold based. We convert our discrete problem into a
continuous setting and then give a generic template on how to dynamically
adjust these thresholds to lower bound the expected total welfare.
",thomas kesselheim,,2017.0,,arXiv,Ehsani2017,True,,arXiv,Not available,Prophet Secretary for Combinatorial Auctions and Matroids,d60bc50e728fa2e130d90036f213e0b2,http://arxiv.org/abs/1710.11213v2
16937," The secretary and the prophet inequality problems are central to the field of
Stopping Theory. Recently, there has been a lot of work in generalizing these
models to multiple items because of their applications in mechanism design. The
most important of these generalizations are to matroids and to combinatorial
auctions (which extend bipartite matching). Kleinberg-Weinberg \cite{KW-STOC12} and
Feldman et al. \cite{feldman2015combinatorial} show that for adversarial
arrival order of random variables the optimal prophet inequalities give a
$1/2$-approximation. For many settings, however, it's conceivable that the
arrival order is chosen uniformly at random, akin to the secretary problem. For
such a random arrival model, we improve upon the $1/2$-approximation and obtain
$(1-1/e)$-approximation prophet inequalities for both matroids and
combinatorial auctions. This also gives improvements to the results of Yan
\cite{yan2011mechanism} and Esfandiari et al. \cite{esfandiari2015prophet} who
worked in the special cases where we can fully control the arrival order or
when there is only a single item.
Our techniques are threshold based. We convert our discrete problem into a
continuous setting and then give a generic template on how to dynamically
adjust these thresholds to lower bound the expected total welfare.
",sahil singla,,2017.0,,arXiv,Ehsani2017,True,,arXiv,Not available,Prophet Secretary for Combinatorial Auctions and Matroids,d60bc50e728fa2e130d90036f213e0b2,http://arxiv.org/abs/1710.11213v2
16938," We propose a simple model of network co-evolution in a game-dynamical system
of interacting agents that play repeated games with their neighbors, and adapt
their behaviors and network links based on the outcome of those games. The
adaptation is achieved through a simple reinforcement learning scheme. We show
that the collective evolution of such a system can be described by
appropriately defined replicator dynamics equations. In particular, we suggest
an appropriate factorization of the agents' strategies that results in a
coupled system of equations characterizing the evolution of both strategies and
network structure, and illustrate the framework on two simple examples.
",ardeshir kianercy,,2011.0,,arXiv,Galstyan2011,True,,arXiv,Not available,Replicator Dynamics of Co-Evolving Networks,15c60630db34ddd141e8db04b37518a9,http://arxiv.org/abs/1107.5354v1
16939," In this paper, we investigate optimal resource allocation in a power
beacon-assisted wireless-powered communication network (PB-WPCN), which
consists of a set of hybrid access point (AP)-source pairs and a power beacon
(PB). Each source, which has no embedded power supply, first harvests energy
from its associated AP and/or the PB in the downlink (DL) and then uses the
harvested energy to transmit information to its AP in the uplink (UL). We
consider both cooperative and non-cooperative scenarios based on whether the PB
is cooperative with the APs or not. For the cooperative scenario, we formulate
a social welfare maximization problem to maximize the weighted sum-throughput
of all AP-source pairs, which is subsequently solved by a water-filling based
distributed algorithm. In the non-cooperative scenario, all the APs and the PB
are assumed to be rational and self-interested such that incentives from each
AP are needed for the PB to provide wireless charging service. We then
formulate an auction game and propose an auction based distributed algorithm by
considering the PB as the auctioneer and the APs as the bidders. Finally,
numerical results are presented to validate the convergence of both the
proposed algorithms and demonstrate the impacts of various system parameters.
",yuanye ma,,2015.0,10.1109/TCOMM.2015.2468215,arXiv,Ma2015,True,,arXiv,Not available,"Distributed and Optimal Resource Allocation for Power Beacon-Assisted
Wireless-Powered Communications",80f513ac21209e8ba66340f378e12707,http://arxiv.org/abs/1508.01617v1
16940," In this paper, we investigate optimal resource allocation in a power
beacon-assisted wireless-powered communication network (PB-WPCN), which
consists of a set of hybrid access point (AP)-source pairs and a power beacon
(PB). Each source, which has no embedded power supply, first harvests energy
from its associated AP and/or the PB in the downlink (DL) and then uses the
harvested energy to transmit information to its AP in the uplink (UL). We
consider both cooperative and non-cooperative scenarios based on whether the PB
is cooperative with the APs or not. For the cooperative scenario, we formulate
a social welfare maximization problem to maximize the weighted sum-throughput
of all AP-source pairs, which is subsequently solved by a water-filling based
distributed algorithm. In the non-cooperative scenario, all the APs and the PB
are assumed to be rational and self-interested such that incentives from each
AP are needed for the PB to provide wireless charging service. We then
formulate an auction game and propose an auction based distributed algorithm by
considering the PB as the auctioneer and the APs as the bidders. Finally,
numerical results are presented to validate the convergence of both the
proposed algorithms and demonstrate the impacts of various system parameters.
",he chen,,2015.0,10.1109/TCOMM.2015.2468215,arXiv,Ma2015,True,,arXiv,Not available,"Distributed and Optimal Resource Allocation for Power Beacon-Assisted
Wireless-Powered Communications",80f513ac21209e8ba66340f378e12707,http://arxiv.org/abs/1508.01617v1
16941," In this paper, we investigate optimal resource allocation in a power
beacon-assisted wireless-powered communication network (PB-WPCN), which
consists of a set of hybrid access point (AP)-source pairs and a power beacon
(PB). Each source, which has no embedded power supply, first harvests energy
from its associated AP and/or the PB in the downlink (DL) and then uses the
harvested energy to transmit information to its AP in the uplink (UL). We
consider both cooperative and non-cooperative scenarios based on whether the PB
is cooperative with the APs or not. For the cooperative scenario, we formulate
a social welfare maximization problem to maximize the weighted sum-throughput
of all AP-source pairs, which is subsequently solved by a water-filling based
distributed algorithm. In the non-cooperative scenario, all the APs and the PB
are assumed to be rational and self-interested such that incentives from each
AP are needed for the PB to provide wireless charging service. We then
formulate an auction game and propose an auction based distributed algorithm by
considering the PB as the auctioneer and the APs as the bidders. Finally,
numerical results are presented to validate the convergence of both the
proposed algorithms and demonstrate the impacts of various system parameters.
",zihuai lin,,2015.0,10.1109/TCOMM.2015.2468215,arXiv,Ma2015,True,,arXiv,Not available,"Distributed and Optimal Resource Allocation for Power Beacon-Assisted
Wireless-Powered Communications",80f513ac21209e8ba66340f378e12707,http://arxiv.org/abs/1508.01617v1
16942," In this paper, we investigate optimal resource allocation in a power
beacon-assisted wireless-powered communication network (PB-WPCN), which
consists of a set of hybrid access point (AP)-source pairs and a power beacon
(PB). Each source, which has no embedded power supply, first harvests energy
from its associated AP and/or the PB in the downlink (DL) and then uses the
harvested energy to transmit information to its AP in the uplink (UL). We
consider both cooperative and non-cooperative scenarios based on whether the PB
is cooperative with the APs or not. For the cooperative scenario, we formulate
a social welfare maximization problem to maximize the weighted sum-throughput
of all AP-source pairs, which is subsequently solved by a water-filling based
distributed algorithm. In the non-cooperative scenario, all the APs and the PB
are assumed to be rational and self-interested such that incentives from each
AP are needed for the PB to provide wireless charging service. We then
formulate an auction game and propose an auction based distributed algorithm by
considering the PB as the auctioneer and the APs as the bidders. Finally,
numerical results are presented to validate the convergence of both the
proposed algorithms and demonstrate the impacts of various system parameters.
",yonghui li,,2015.0,10.1109/TCOMM.2015.2468215,arXiv,Ma2015,True,,arXiv,Not available,"Distributed and Optimal Resource Allocation for Power Beacon-Assisted
Wireless-Powered Communications",80f513ac21209e8ba66340f378e12707,http://arxiv.org/abs/1508.01617v1
16943," In this paper, we investigate optimal resource allocation in a power
beacon-assisted wireless-powered communication network (PB-WPCN), which
consists of a set of hybrid access point (AP)-source pairs and a power beacon
(PB). Each source, which has no embedded power supply, first harvests energy
from its associated AP and/or the PB in the downlink (DL) and then uses the
harvested energy to transmit information to its AP in the uplink (UL). We
consider both cooperative and non-cooperative scenarios based on whether the PB
is cooperative with the APs or not. For the cooperative scenario, we formulate
a social welfare maximization problem to maximize the weighted sum-throughput
of all AP-source pairs, which is subsequently solved by a water-filling based
distributed algorithm. In the non-cooperative scenario, all the APs and the PB
are assumed to be rational and self-interested such that incentives from each
AP are needed for the PB to provide wireless charging service. We then
formulate an auction game and propose an auction based distributed algorithm by
considering the PB as the auctioneer and the APs as the bidders. Finally,
numerical results are presented to validate the convergence of both the
proposed algorithms and demonstrate the impacts of various system parameters.
",branka vucetic,,2015.0,10.1109/TCOMM.2015.2468215,arXiv,Ma2015,True,,arXiv,Not available,"Distributed and Optimal Resource Allocation for Power Beacon-Assisted
Wireless-Powered Communications",80f513ac21209e8ba66340f378e12707,http://arxiv.org/abs/1508.01617v1
16944," Multi-agent games are becoming an increasingly prevalent formalism for the
study of electronic commerce and auctions. The speed at which transactions can
take place and the growing complexity of electronic marketplaces make the
study of computationally simple agents an appealing direction. In this work, we
analyze the behavior of agents that incrementally adapt their strategy through
gradient ascent on expected payoff, in the simple setting of two-player,
two-action, iterated general-sum games, and present a surprising result. We
show that either the agents will converge to Nash equilibrium, or if the
strategies themselves do not converge, then their average payoffs will
nevertheless converge to the payoffs of a Nash equilibrium.
",satinder singh,,2013.0,,arXiv,Singh2013,True,,arXiv,Not available,Nash Convergence of Gradient Dynamics in Iterated General-Sum Games,7b5c1f3f1b624989464de66367b6744e,http://arxiv.org/abs/1301.3892v1
16945," Multi-agent games are becoming an increasingly prevalent formalism for the
study of electronic commerce and auctions. The speed at which transactions can
take place and the growing complexity of electronic marketplaces make the
study of computationally simple agents an appealing direction. In this work, we
analyze the behavior of agents that incrementally adapt their strategy through
gradient ascent on expected payoff, in the simple setting of two-player,
two-action, iterated general-sum games, and present a surprising result. We
show that either the agents will converge to Nash equilibrium, or if the
strategies themselves do not converge, then their average payoffs will
nevertheless converge to the payoffs of a Nash equilibrium.
",michael kearns,,2013.0,,arXiv,Singh2013,True,,arXiv,Not available,Nash Convergence of Gradient Dynamics in Iterated General-Sum Games,7b5c1f3f1b624989464de66367b6744e,http://arxiv.org/abs/1301.3892v1
16946," Multi-agent games are becoming an increasingly prevalent formalism for the
study of electronic commerce and auctions. The speed at which transactions can
take place and the growing complexity of electronic marketplaces make the
study of computationally simple agents an appealing direction. In this work, we
analyze the behavior of agents that incrementally adapt their strategy through
gradient ascent on expected payoff, in the simple setting of two-player,
two-action, iterated general-sum games, and present a surprising result. We
show that either the agents will converge to Nash equilibrium, or if the
strategies themselves do not converge, then their average payoffs will
nevertheless converge to the payoffs of a Nash equilibrium.
",yishay mansour,,2013.0,,arXiv,Singh2013,True,,arXiv,Not available,Nash Convergence of Gradient Dynamics in Iterated General-Sum Games,7b5c1f3f1b624989464de66367b6744e,http://arxiv.org/abs/1301.3892v1
16947," We study adaptive dynamics in games where players abandon the population at a
given rate, and are replaced by naive players characterized by a prior
distribution over the admitted strategies. We demonstrate how such a process
leads macroscopically to a variant of the replicator equation, with an
additional term accounting for player turnover. We study how Nash equilibria
and the dynamics of the system are modified by this additional term, for
prototypical examples such as the rock-scissor-paper game and different classes
of two-action games played between two distinct populations. We conclude by
showing how player turnover can account for non-trivial departures from Nash
equilibria observed in data from lowest unique bid auctions.
",jeppe juul,,2013.0,10.1103/PhysRevE.88.022806,"Physical Review E 88, 022806 (2013)",Juul2013,True,,arXiv,Not available,Replicator dynamics with turnover of players,9040dbf266f297034c3ac73e2b732c66,http://arxiv.org/abs/1303.5656v2
16948," We study adaptive dynamics in games where players abandon the population at a
given rate, and are replaced by naive players characterized by a prior
distribution over the admitted strategies. We demonstrate how such a process
leads macroscopically to a variant of the replicator equation, with an
additional term accounting for player turnover. We study how Nash equilibria
and the dynamics of the system are modified by this additional term, for
prototypical examples such as the rock-scissor-paper game and different classes
of two-action games played between two distinct populations. We conclude by
showing how player turnover can account for non-trivial departures from Nash
equilibria observed in data from lowest unique bid auctions.
",ardeshir kianercy,,2013.0,10.1103/PhysRevE.88.022806,"Physical Review E 88, 022806 (2013)",Juul2013,True,,arXiv,Not available,Replicator dynamics with turnover of players,9040dbf266f297034c3ac73e2b732c66,http://arxiv.org/abs/1303.5656v2
16949," We propose a simple model of network co-evolution in a game-dynamical system
of interacting agents that play repeated games with their neighbors, and adapt
their behaviors and network links based on the outcome of those games. The
adaptation is achieved through a simple reinforcement learning scheme. We show
that the collective evolution of such a system can be described by
appropriately defined replicator dynamics equations. In particular, we suggest
an appropriate factorization of the agents' strategies that results in a
coupled system of equations characterizing the evolution of both strategies and
network structure, and illustrate the framework on two simple examples.
",armen allahverdyan,,2011.0,,arXiv,Galstyan2011,True,,arXiv,Not available,Replicator Dynamics of Co-Evolving Networks,15c60630db34ddd141e8db04b37518a9,http://arxiv.org/abs/1107.5354v1
16950," We study adaptive dynamics in games where players abandon the population at a
given rate, and are replaced by naive players characterized by a prior
distribution over the admitted strategies. We demonstrate how such a process
leads macroscopically to a variant of the replicator equation, with an
additional term accounting for player turnover. We study how Nash equilibria
and the dynamics of the system are modified by this additional term, for
prototypical examples such as the rock-scissor-paper game and different classes
of two-action games played between two distinct populations. We conclude by
showing how player turnover can account for non-trivial departures from Nash
equilibria observed in data from lowest unique bid auctions.
",sebastian bernhardsson,,2013.0,10.1103/PhysRevE.88.022806,"Physical Review E 88, 022806 (2013)",Juul2013,True,,arXiv,Not available,Replicator dynamics with turnover of players,9040dbf266f297034c3ac73e2b732c66,http://arxiv.org/abs/1303.5656v2
16951," We study adaptive dynamics in games where players abandon the population at a
given rate, and are replaced by naive players characterized by a prior
distribution over the admitted strategies. We demonstrate how such a process
leads macroscopically to a variant of the replicator equation, with an
additional term accounting for player turnover. We study how Nash equilibria
and the dynamics of the system are modified by this additional term, for
prototypical examples such as the rock-scissor-paper game and different classes
of two-action games played between two distinct populations. We conclude by
showing how player turnover can account for non-trivial departures from Nash
equilibria observed in data from lowest unique bid auctions.
",simone pigolotti,,2013.0,10.1103/PhysRevE.88.022806,"Physical Review E 88, 022806 (2013)",Juul2013,True,,arXiv,Not available,Replicator dynamics with turnover of players,9040dbf266f297034c3ac73e2b732c66,http://arxiv.org/abs/1303.5656v2
16952," Admissible strategies, i.e. those that are not dominated by any other
strategy, are a typical rationality notion in game theory. In many classes of
games this is justified by results showing that any strategy is admissible or
dominated by an admissible strategy. However, in games played on finite graphs
with quantitative objectives (as used for reactive synthesis), this is not the
case.
We consider increasing chains of strategies instead to recover a satisfactory
rationality notion based on dominance in such games. We start with some
order-theoretic considerations establishing sufficient criteria for this to
work. We then turn our attention to generalised safety/reachability games as a
particular application. We propose the notion of maximal uniform chain as the
desired dominance-based rationality concept in these games. Decidability of
some fundamental questions about uniform chains is established.
",nicolas basset,,2018.0,,arXiv,Basset2018,True,,arXiv,Not available,Beyond admissibility: Dominance between chains of strategies,a29f1ebedaa7f50c1b44f6e08571a76a,http://arxiv.org/abs/1805.11608v1
16953," Admissible strategies, i.e. those that are not dominated by any other
strategy, are a typical rationality notion in game theory. In many classes of
games this is justified by results showing that any strategy is admissible or
dominated by an admissible strategy. However, in games played on finite graphs
with quantitative objectives (as used for reactive synthesis), this is not the
case.
We consider increasing chains of strategies instead to recover a satisfactory
rationality notion based on dominance in such games. We start with some
order-theoretic considerations establishing sufficient criteria for this to
work. We then turn our attention to generalised safety/reachability games as a
particular application. We propose the notion of maximal uniform chain as the
desired dominance-based rationality concept in these games. Decidability of
some fundamental questions about uniform chains is established.
",ismael jecker,,2018.0,,arXiv,Basset2018,True,,arXiv,Not available,Beyond admissibility: Dominance between chains of strategies,a29f1ebedaa7f50c1b44f6e08571a76a,http://arxiv.org/abs/1805.11608v1
16954," Admissible strategies, i.e. those that are not dominated by any other
strategy, are a typical rationality notion in game theory. In many classes of
games this is justified by results showing that any strategy is admissible or
dominated by an admissible strategy. However, in games played on finite graphs
with quantitative objectives (as used for reactive synthesis), this is not the
case.
We consider increasing chains of strategies instead to recover a satisfactory
rationality notion based on dominance in such games. We start with some
order-theoretic considerations establishing sufficient criteria for this to
work. We then turn our attention to generalised safety/reachability games as a
particular application. We propose the notion of maximal uniform chain as the
desired dominance-based rationality concept in these games. Decidability of
some fundamental questions about uniform chains is established.
",arno pauly,,2018.0,,arXiv,Basset2018,True,,arXiv,Not available,Beyond admissibility: Dominance between chains of strategies,a29f1ebedaa7f50c1b44f6e08571a76a,http://arxiv.org/abs/1805.11608v1
16955," Admissible strategies, i.e. those that are not dominated by any other
strategy, are a typical rationality notion in game theory. In many classes of
games this is justified by results showing that any strategy is admissible or
dominated by an admissible strategy. However, in games played on finite graphs
with quantitative objectives (as used for reactive synthesis), this is not the
case.
We consider increasing chains of strategies instead to recover a satisfactory
rationality notion based on dominance in such games. We start with some
order-theoretic considerations establishing sufficient criteria for this to
work. We then turn our attention to generalised safety/reachability games as a
particular application. We propose the notion of maximal uniform chain as the
desired dominance-based rationality concept in these games. Decidability of
some fundamental questions about uniform chains is established.
",jean-francois raskin,,2018.0,,arXiv,Basset2018,True,,arXiv,Not available,Beyond admissibility: Dominance between chains of strategies,a29f1ebedaa7f50c1b44f6e08571a76a,http://arxiv.org/abs/1805.11608v1
16956," Admissible strategies, i.e. those that are not dominated by any other
strategy, are a typical rationality notion in game theory. In many classes of
games this is justified by results showing that any strategy is admissible or
dominated by an admissible strategy. However, in games played on finite graphs
with quantitative objectives (as used for reactive synthesis), this is not the
case.
We consider increasing chains of strategies instead to recover a satisfactory
rationality notion based on dominance in such games. We start with some
order-theoretic considerations establishing sufficient criteria for this to
work. We then turn our attention to generalised safety/reachability games as a
particular application. We propose the notion of maximal uniform chain as the
desired dominance-based rationality concept in these games. Decidability of
some fundamental questions about uniform chains is established.
",marie bogaard,,2018.0,,arXiv,Basset2018,True,,arXiv,Not available,Beyond admissibility: Dominance between chains of strategies,a29f1ebedaa7f50c1b44f6e08571a76a,http://arxiv.org/abs/1805.11608v1
16957," Combinatorial games are played under two different play conventions: normal
play, where the last player to move wins, and \mis play, where the last player
to move loses. Combinatorial games are also classified into impartial positions
and partizan positions, where a position is impartial if both players have the
same available moves and partizan otherwise.
\Mis play games lack many of the useful calculational and theoretical
properties of normal play games. Until Plambeck's indistinguishability quotient
and \mis monoid theory were developed in 2004, research on \mis play games had
stalled. This thesis investigates partizan combinatorial \mis play games, by
taking Plambeck's indistinguishability and \mis monoid theory for impartial
positions and extending it to partizan ones, as well as examining the
difficulties in constructing a category of \mis play games in a similar manner
to Joyal's category of normal play games.
This thesis succeeds in finding an infinite set of positions which each have
finite \mis monoid, examining conditions on positions for when $* + *$ is
equivalent to 0, finding a set of positions which have a Tweedledum-Tweedledee
type strategy, and the two most important results of this thesis: giving
necessary and sufficient conditions on a set of positions $\Upsilon$ such that
the \mis monoid of $\Upsilon$ is the same as the \mis monoid of $*$ and giving
a construction theorem which builds all positions $\xi$ such that the \mis
monoid of $\xi$ is the same as the \mis monoid of $*$.
",meghan allen,,2010.0,,arXiv,Allen2010,True,,arXiv,Not available,An Investigation of Partizan Misere Games,2a2c41d3e7dd2a86f95008d6cd392f24,http://arxiv.org/abs/1008.4109v1
16958," In this work we aim to analyze the role of noise in the spatial Public Goods
Game, one of the most famous games in Evolutionary Game Theory. The dynamics of
this game is affected by a number of parameters and processes, namely the
topology of interactions among the agents, the synergy factor, and the strategy
revision phase. The latter is a process that allows agents to change their
strategy. Notably, rational agents tend to imitate richer neighbors, in order
to increase the probability of maximizing their payoff. By implementing a
stochastic revision process, it is possible to control the level of noise in
the system, so that even irrational updates may occur. In particular, in this
work we study the effect of noise on the macroscopic behavior of a finite
structured population playing the Public Goods Game. We consider both the case
of a homogeneous population, where the noise in the system is controlled by
tuning a parameter representing the level of stochasticity in the strategy
revision phase, and a heterogeneous population composed of a variable
proportion of rational and irrational agents. In both cases numerical
investigations show that the Public Goods Game has a very rich behavior which
strongly depends on the amount of noise in the system and on the value of the
synergy factor. To conclude, our study sheds new light on the relations
between the microscopic dynamics of the Public Goods Game and its macroscopic
behavior, strengthening the link between the field of Evolutionary Game Theory
and statistical physics.
",marco javarone,,2016.0,10.1088/1742-5468/2016/07/073404,"Journal of Statistical Mechanics: Theory and Experiment 2016 (7),
073404",Javarone2016,True,,arXiv,Not available,The Role of Noise in the Spatial Public Goods Game,4130074071a5087b4ab45d9cb0318d60,http://arxiv.org/abs/1605.08690v1
16959," In this work we aim to analyze the role of noise in the spatial Public Goods
Game, one of the most famous games in Evolutionary Game Theory. The dynamics of
this game is affected by a number of parameters and processes, namely the
topology of interactions among the agents, the synergy factor, and the strategy
revision phase. The latter is a process that allows agents to change their
strategy. Notably, rational agents tend to imitate richer neighbors, in order
to increase the probability of maximizing their payoff. By implementing a
stochastic revision process, it is possible to control the level of noise in
the system, so that even irrational updates may occur. In particular, in this
work we study the effect of noise on the macroscopic behavior of a finite
structured population playing the Public Goods Game. We consider both the case
of a homogeneous population, where the noise in the system is controlled by
tuning a parameter representing the level of stochasticity in the strategy
revision phase, and a heterogeneous population composed of a variable
proportion of rational and irrational agents. In both cases numerical
investigations show that the Public Goods Game has a very rich behavior which
strongly depends on the amount of noise in the system and on the value of the
synergy factor. To conclude, our study sheds new light on the relations
between the microscopic dynamics of the Public Goods Game and its macroscopic
behavior, strengthening the link between the field of Evolutionary Game Theory
and statistical physics.
",federico battiston,,2016.0,10.1088/1742-5468/2016/07/073404,"Journal of Statistical Mechanics: Theory and Experiment 2016 (7),
073404",Javarone2016,True,,arXiv,Not available,The Role of Noise in the Spatial Public Goods Game,4130074071a5087b4ab45d9cb0318d60,http://arxiv.org/abs/1605.08690v1
16960," We present an algorithm which attains O(\sqrt{T}) internal (and thus
external) regret for finite games with partial monitoring under the local
observability condition. Recently, this condition has been shown by (Bartok,
Pal, and Szepesvari, 2011) to imply the O(\sqrt{T}) rate for partial monitoring
games against an i.i.d. opponent, and the authors conjectured that the same
holds for non-stochastic adversaries. Our result is in the affirmative, and it
completes the characterization of possible rates for finite partial-monitoring
games, an open question stated by (Cesa-Bianchi, Lugosi, and Stoltz, 2006). Our
regret guarantees also hold for the more general model of partial monitoring
with random signals.
",dean foster,,2011.0,,arXiv,Foster2011,True,,arXiv,Not available,No Internal Regret via Neighborhood Watch,d78b38a4e1c48e48b98ea3f2a9df4f10,http://arxiv.org/abs/1108.6088v1
16961," In nature and society problems arise when different interests are difficult
to reconcile, which are modeled in game theory. While most applications assume
uncorrelated games, a more detailed modeling is necessary to consider the
correlations that influence the decisions of the players. The current theory
for correlated games, however, compels the players to obey the instructions
from a third party or ""correlation device"" to reach equilibrium, but this
cannot be achieved for all initial correlations. We extend here the existing
framework of correlated games and find that there are other interesting and
previously unknown Nash equilibria that make use of correlations to obtain the
best payoff. This is achieved by allowing the players the freedom to follow or
not to follow the suggestions of the correlation device. By assigning
independent probabilities to follow every possible suggestion, the players
engage in a response game that turns out to have a rich structure of Nash
equilibria that goes beyond the correlated equilibrium and mixed-strategy
solutions. We determine the Nash equilibria for all possible correlated
Snowdrift games, which we find to be describable by Ising Models in thermal
equilibrium. We believe that our approach paves the way to a study of
correlations in games that uncovers the existence of interesting underlying
interaction mechanisms, without compromising the independence of the players.
",a. correia,,2018.0,,arXiv,Correia2018,True,,arXiv,Not available,Nash Equilibria in the Response Strategy of Correlated Games,7700d017220f53bda6d0f01b93abf026,http://arxiv.org/abs/1809.03860v1
16962," In nature and society problems arise when different interests are difficult
to reconcile, which are modeled in game theory. While most applications assume
uncorrelated games, a more detailed modeling is necessary to consider the
correlations that influence the decisions of the players. The current theory
for correlated games, however, compels the players to obey the instructions
from a third party or ""correlation device"" to reach equilibrium, but this
cannot be achieved for all initial correlations. We extend here the existing
framework of correlated games and find that there are other interesting and
previously unknown Nash equilibria that make use of correlations to obtain the
best payoff. This is achieved by allowing the players the freedom to follow or
not to follow the suggestions of the correlation device. By assigning
independent probabilities to follow every possible suggestion, the players
engage in a response game that turns out to have a rich structure of Nash
equilibria that goes beyond the correlated equilibrium and mixed-strategy
solutions. We determine the Nash equilibria for all possible correlated
Snowdrift games, which we find to be describable by Ising Models in thermal
equilibrium. We believe that our approach paves the way to a study of
correlations in games that uncovers the existence of interesting underlying
interaction mechanisms, without compromising the independence of the players.
",h. stoof,,2018.0,,arXiv,Correia2018,True,,arXiv,Not available,Nash Equilibria in the Response Strategy of Correlated Games,7700d017220f53bda6d0f01b93abf026,http://arxiv.org/abs/1809.03860v1
16963," The ability to compose parts to form a more complex whole, and to analyze a
whole as a combination of elements, is desirable across disciplines. This
workshop brings together researchers applying compositional approaches to
physics, NLP, cognitive science, and game theory. Within NLP, a long-standing
aim is to represent how words can combine to form phrases and sentences. Within
the framework of distributional semantics, words are represented as vectors in
vector spaces. The categorical model of Coecke et al. [2010], inspired by
quantum protocols, has provided a convincing account of compositionality in
vector space models of NLP. There is furthermore a history of vector space
models in cognitive science. Theories of categorization such as those developed
by Nosofsky [1986] and Smith et al. [1988] utilise notions of distance between
feature vectors. More recently G\""ardenfors [2004, 2014] has developed a model
of concepts in which conceptual spaces provide geometric structures, and
information is represented by points, vectors and regions in vector spaces. The
same compositional approach has been applied to this formalism, giving
conceptual spaces theory a richer model of compositionality than previously
[Bolt et al., 2018]. Compositional approaches have also been applied in the
study of strategic games and Nash equilibria. In contrast to classical game
theory, where games are studied monolithically as one global object,
compositional game theory works bottom-up by building large and complex games
from smaller components. Such an approach is inherently difficult since the
interaction between games has to be considered. Research into categorical
compositional methods for this field has recently begun [Ghani et al., 2018].
Moreover, the interaction between the three disciplines of cognitive science,
linguistics and game theory is a fertile ground for research. Game theory in
cognitive science is a well-established area [Camerer, 2011]. Similarly game
theoretic approaches have been applied in linguistics [J\""ager, 2008]. Lastly,
the study of linguistics and cognitive science is intimately intertwined
[Smolensky and Legendre, 2006, Jackendoff, 2007]. Physics supplies
compositional approaches via vector spaces and categorical quantum theory,
allowing the interplay between the three disciplines to be examined.
",martha lewis,,2018.0,10.4204/EPTCS.283,"EPTCS 283, 2018",Lewis2018,True,,arXiv,Not available,"Proceedings of the 2018 Workshop on Compositional Approaches in Physics,
NLP, and Social Sciences",c3af9db6f6ac3928ebe2ea4f6f077ac7,http://arxiv.org/abs/1811.02701v1
16964," The ability to compose parts to form a more complex whole, and to analyze a
whole as a combination of elements, is desirable across disciplines. This
workshop brings together researchers applying compositional approaches to
physics, NLP, cognitive science, and game theory. Within NLP, a long-standing
aim is to represent how words can combine to form phrases and sentences. Within
the framework of distributional semantics, words are represented as vectors in
vector spaces. The categorical model of Coecke et al. [2010], inspired by
quantum protocols, has provided a convincing account of compositionality in
vector space models of NLP. There is furthermore a history of vector space
models in cognitive science. Theories of categorization such as those developed
by Nosofsky [1986] and Smith et al. [1988] utilise notions of distance between
feature vectors. More recently G\""ardenfors [2004, 2014] has developed a model
of concepts in which conceptual spaces provide geometric structures, and
information is represented by points, vectors and regions in vector spaces. The
same compositional approach has been applied to this formalism, giving
conceptual spaces theory a richer model of compositionality than previously
[Bolt et al., 2018]. Compositional approaches have also been applied in the
study of strategic games and Nash equilibria. In contrast to classical game
theory, where games are studied monolithically as one global object,
compositional game theory works bottom-up by building large and complex games
from smaller components. Such an approach is inherently difficult since the
interaction between games has to be considered. Research into categorical
compositional methods for this field has recently begun [Ghani et al., 2018].
Moreover, the interaction between the three disciplines of cognitive science,
linguistics and game theory is a fertile ground for research. Game theory in
cognitive science is a well-established area [Camerer, 2011]. Similarly game
theoretic approaches have been applied in linguistics [J\""ager, 2008]. Lastly,
the study of linguistics and cognitive science is intimately intertwined
[Smolensky and Legendre, 2006, Jackendoff, 2007]. Physics supplies
compositional approaches via vector spaces and categorical quantum theory,
allowing the interplay between the three disciplines to be examined.
",bob coecke,,2018.0,10.4204/EPTCS.283,"EPTCS 283, 2018",Lewis2018,True,,arXiv,Not available,"Proceedings of the 2018 Workshop on Compositional Approaches in Physics,
NLP, and Social Sciences",c3af9db6f6ac3928ebe2ea4f6f077ac7,http://arxiv.org/abs/1811.02701v1
16965," The ability to compose parts to form a more complex whole, and to analyze a
whole as a combination of elements, is desirable across disciplines. This
workshop brings together researchers applying compositional approaches to
physics, NLP, cognitive science, and game theory. Within NLP, a long-standing
aim is to represent how words can combine to form phrases and sentences. Within
the framework of distributional semantics, words are represented as vectors in
vector spaces. The categorical model of Coecke et al. [2010], inspired by
quantum protocols, has provided a convincing account of compositionality in
vector space models of NLP. There is furthermore a history of vector space
models in cognitive science. Theories of categorization such as those developed
by Nosofsky [1986] and Smith et al. [1988] utilise notions of distance between
feature vectors. More recently G\""ardenfors [2004, 2014] has developed a model
of concepts in which conceptual spaces provide geometric structures, and
information is represented by points, vectors and regions in vector spaces. The
same compositional approach has been applied to this formalism, giving
conceptual spaces theory a richer model of compositionality than previously
[Bolt et al., 2018]. Compositional approaches have also been applied in the
study of strategic games and Nash equilibria. In contrast to classical game
theory, where games are studied monolithically as one global object,
compositional game theory works bottom-up by building large and complex games
from smaller components. Such an approach is inherently difficult since the
interaction between games has to be considered. Research into categorical
compositional methods for this field has recently begun [Ghani et al., 2018].
Moreover, the interaction between the three disciplines of cognitive science,
linguistics and game theory is a fertile ground for research. Game theory in
cognitive science is a well-established area [Camerer, 2011]. Similarly game
theoretic approaches have been applied in linguistics [J\""ager, 2008]. Lastly,
the study of linguistics and cognitive science is intimately intertwined
[Smolensky and Legendre, 2006, Jackendoff, 2007]. Physics supplies
compositional approaches via vector spaces and categorical quantum theory,
allowing the interplay between the three disciplines to be examined.
",jules hedges,,2018.0,10.4204/EPTCS.283,"EPTCS 283, 2018",Lewis2018,True,,arXiv,Not available,"Proceedings of the 2018 Workshop on Compositional Approaches in Physics,
NLP, and Social Sciences",c3af9db6f6ac3928ebe2ea4f6f077ac7,http://arxiv.org/abs/1811.02701v1
16966," The ability to compose parts to form a more complex whole, and to analyze a
whole as a combination of elements, is desirable across disciplines. This
workshop brings together researchers applying compositional approaches to
physics, NLP, cognitive science, and game theory. Within NLP, a long-standing
aim is to represent how words can combine to form phrases and sentences. Within
the framework of distributional semantics, words are represented as vectors in
vector spaces. The categorical model of Coecke et al. [2010], inspired by
quantum protocols, has provided a convincing account of compositionality in
vector space models of NLP. There is furthermore a history of vector space
models in cognitive science. Theories of categorization such as those developed
by Nosofsky [1986] and Smith et al. [1988] utilise notions of distance between
feature vectors. More recently G\""ardenfors [2004, 2014] has developed a model
of concepts in which conceptual spaces provide geometric structures, and
information is represented by points, vectors and regions in vector spaces. The
same compositional approach has been applied to this formalism, giving
conceptual spaces theory a richer model of compositionality than previously
[Bolt et al., 2018]. Compositional approaches have also been applied in the
study of strategic games and Nash equilibria. In contrast to classical game
theory, where games are studied monolithically as one global object,
compositional game theory works bottom-up by building large and complex games
from smaller components. Such an approach is inherently difficult since the
interaction between games has to be considered. Research into categorical
compositional methods for this field has recently begun [Ghani et al., 2018].
Moreover, the interaction between the three disciplines of cognitive science,
linguistics and game theory is a fertile ground for research. Game theory in
cognitive science is a well-established area [Camerer, 2011]. Similarly game
theoretic approaches have been applied in linguistics [J\""ager, 2008]. Lastly,
the study of linguistics and cognitive science is intimately intertwined
[Smolensky and Legendre, 2006, Jackendoff, 2007]. Physics supplies
compositional approaches via vector spaces and categorical quantum theory,
allowing the interplay between the three disciplines to be examined.
",dimitri kartsaklis,,2018.0,10.4204/EPTCS.283,"EPTCS 283, 2018",Lewis2018,True,,arXiv,Not available,"Proceedings of the 2018 Workshop on Compositional Approaches in Physics,
NLP, and Social Sciences",c3af9db6f6ac3928ebe2ea4f6f077ac7,http://arxiv.org/abs/1811.02701v1
16967," The ability to compose parts to form a more complex whole, and to analyze a
whole as a combination of elements, is desirable across disciplines. This
workshop brings together researchers applying compositional approaches to
physics, NLP, cognitive science, and game theory. Within NLP, a long-standing
aim is to represent how words can combine to form phrases and sentences. Within
the framework of distributional semantics, words are represented as vectors in
vector spaces. The categorical model of Coecke et al. [2010], inspired by
quantum protocols, has provided a convincing account of compositionality in
vector space models of NLP. There is furthermore a history of vector space
models in cognitive science. Theories of categorization such as those developed
by Nosofsky [1986] and Smith et al. [1988] utilise notions of distance between
feature vectors. More recently G\""ardenfors [2004, 2014] has developed a model
of concepts in which conceptual spaces provide geometric structures, and
information is represented by points, vectors and regions in vector spaces. The
same compositional approach has been applied to this formalism, giving
conceptual spaces theory a richer model of compositionality than previously
[Bolt et al., 2018]. Compositional approaches have also been applied in the
study of strategic games and Nash equilibria. In contrast to classical game
theory, where games are studied monolithically as one global object,
compositional game theory works bottom-up by building large and complex games
from smaller components. Such an approach is inherently difficult since the
interaction between games has to be considered. Research into categorical
compositional methods for this field has recently begun [Ghani et al., 2018].
Moreover, the interaction between the three disciplines of cognitive science,
linguistics and game theory is a fertile ground for research. Game theory in
cognitive science is a well-established area [Camerer, 2011]. Similarly game
theoretic approaches have been applied in linguistics [J\""ager, 2008]. Lastly,
the study of linguistics and cognitive science is intimately intertwined
[Smolensky and Legendre, 2006, Jackendoff, 2007]. Physics supplies
compositional approaches via vector spaces and categorical quantum theory,
allowing the interplay between the three disciplines to be examined.
",dan marsden,,2018.0,10.4204/EPTCS.283,"EPTCS 283, 2018",Lewis2018,True,,arXiv,Not available,"Proceedings of the 2018 Workshop on Compositional Approaches in Physics,
NLP, and Social Sciences",c3af9db6f6ac3928ebe2ea4f6f077ac7,http://arxiv.org/abs/1811.02701v1
16968," This paper unifies the concepts of evolutionary games and quantum strategies.
First, we state the formulation and properties of classical evolutionary
strategies, with focus on the destinations of evolution in 2-player 2-strategy
games. We then introduce a new formalism of quantum evolutionary dynamics, and
give an example where an evolving quantum strategy gives reward if played
against its classical counterpart.
",ming leung,,2011.0,,arXiv,Leung2011,True,,arXiv,Not available,"Classical vs Quantum Games: Continuous-time Evolutionary Strategy
Dynamics",78c0f07ae59dbdf3e89aaa0ea0aed642,http://arxiv.org/abs/1104.3953v1
16969," We consider a wireless channel shared by multiple transmitter-receiver pairs.
Their transmissions interfere with each other. Each transmitter-receiver pair
aims to maximize its long-term average transmission rate subject to an average
power constraint. This scenario is modeled as a stochastic game. We provide
sufficient conditions for existence and uniqueness of a Nash equilibrium (NE).
We then formulate the problem of finding NE as a variational inequality (VI)
problem and present an algorithm to solve the VI using regularization. We also
provide distributed algorithms to compute Pareto optimal solutions for the
proposed game.
",krishna a,,2014.0,,arXiv,A2014,True,,arXiv,Not available,Algorithms for Stochastic Games on Interference Channels,0299b6c52bbb4fc42f6e1b198143e51f,http://arxiv.org/abs/1409.7551v1
16970," We consider a wireless channel shared by multiple transmitter-receiver pairs.
Their transmissions interfere with each other. Each transmitter-receiver pair
aims to maximize its long-term average transmission rate subject to an average
power constraint. This scenario is modeled as a stochastic game. We provide
sufficient conditions for existence and uniqueness of a Nash equilibrium (NE).
We then formulate the problem of finding NE as a variational inequality (VI)
problem and present an algorithm to solve the VI using regularization. We also
provide distributed algorithms to compute Pareto optimal solutions for the
proposed game.
",utpal mukherji,,2014.0,,arXiv,A2014,True,,arXiv,Not available,Algorithms for Stochastic Games on Interference Channels,0299b6c52bbb4fc42f6e1b198143e51f,http://arxiv.org/abs/1409.7551v1
16971," We study an ensemble of individuals playing the two games of the so-called
Parrondo paradox. In our study, players are allowed to choose the game to be
played by the whole ensemble in each turn. The choice cannot conform to the
preferences of all the players and, consequently, they face a simple
frustration phenomenon that requires some strategy to make a collective
decision. We consider several such strategies and analyze how fluctuations can
be used to improve the performance of the system.
",l. dinis,,2014.0,10.1140/epjst/e2007-00068-0,"Eur. Phys. J. Special Topics 143, 39 (2007)",Parrondo2014,True,,arXiv,Not available,Collective decision making and paradoxical games,c655f281edf34ee886edc2b09cf69f10,http://arxiv.org/abs/1410.0241v1
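The two games of Parrondo's paradox are easy to simulate. Below is a minimal single-player sketch, assuming the standard textbook parameters (game A: win with probability 1/2 − ε; game B: win with probability 1/10 − ε when capital is a multiple of 3, else 3/4 − ε); the collective-decision strategies studied in the paper are a layer on top of this and are not implemented here.

```python
import random

def play(game, capital, eps=0.005, rng=random):
    """One round of Parrondo game 'A' or 'B'; returns the capital change."""
    if game == 'A':
        p = 0.5 - eps                     # slightly losing coin
    else:                                 # game B depends on capital mod 3
        p = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
    return 1 if rng.random() < p else -1

def simulate(strategy, rounds=100_000, seed=1):
    """Final capital after `rounds` plays; strategy(t, capital) -> 'A' or 'B'."""
    rng = random.Random(seed)
    capital = 0
    for t in range(rounds):
        capital += play(strategy(t, capital), capital, rng=rng)
    return capital

a_only = simulate(lambda t, c: 'A')                        # losing in expectation
b_only = simulate(lambda t, c: 'B')                        # losing in expectation
switch = simulate(lambda t, c: 'A' if t % 4 < 2 else 'B')  # AABB switching
```

With these parameters each game is losing on its own, while periodic switching tends to produce a gain: the paradox.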
16972," We present an algorithm which attains O(\sqrt{T}) internal (and thus
external) regret for finite games with partial monitoring under the local
observability condition. Recently, this condition has been shown by (Bartok,
Pal, and Szepesvari, 2011) to imply the O(\sqrt{T}) rate for partial monitoring
games against an i.i.d. opponent, and the authors conjectured that the same
holds for non-stochastic adversaries. Our result is in the affirmative, and it
completes the characterization of possible rates for finite partial-monitoring
games, an open question stated by (Cesa-Bianchi, Lugosi, and Stoltz, 2006). Our
regret guarantees also hold for the more general model of partial monitoring
with random signals.
",alexander rakhlin,,2011.0,,arXiv,Foster2011,True,,arXiv,Not available,No Internal Regret via Neighborhood Watch,d78b38a4e1c48e48b98ea3f2a9df4f10,http://arxiv.org/abs/1108.6088v1
16973," We consider a wireless channel shared by multiple transmitter-receiver pairs.
Their transmissions interfere with each other. Each transmitter-receiver pair
aims to maximize its long-term average transmission rate subject to an average
power constraint. This scenario is modeled as a stochastic game. We provide
sufficient conditions for existence and uniqueness of a Nash equilibrium (NE).
We then formulate the problem of finding NE as a variational inequality (VI)
problem and present an algorithm to solve the VI using regularization. We also
provide distributed algorithms to compute Pareto optimal solutions for the
proposed game.
",vinod sharma,,2014.0,,arXiv,A2014,True,,arXiv,Not available,Algorithms for Stochastic Games on Interference Channels,0299b6c52bbb4fc42f6e1b198143e51f,http://arxiv.org/abs/1409.7551v1
16974," In game theory, the notion of a player's beliefs about the game players'
beliefs about other players' beliefs arises naturally. In this paper, we
present a non-self-referential paradox in epistemic game theory which shows
that completely modeling players' epistemic beliefs and assumptions is
impossible. Furthermore, we introduce an interactive temporal assumption logic
to give an appropriate formalization of the new paradox. Formalizing the new
paradox in this logic shows that there is no complete interactive temporal
assumption model.
",ahmad karimi,,2016.0,,arXiv,Karimi2016,True,,arXiv,Not available,A Non-Self-Referential Paradox in Epistemic Game Theory,316df663756b53a539442297861562ff,http://arxiv.org/abs/1601.06661v1
16975," We study the problem of \emph{jamming} in multiple independent \emph{Gaussian
channels} as a zero-sum game. We show that in the unique Nash equilibrium of
the game the best-response strategy of the transmitter is the
\emph{waterfilling} to the sum of the jamming and the noise power in each
channel and the best-response strategy of the jammer is the \emph{waterfilling}
only to the noise power.
",michail fasoulakis,,2018.0,,arXiv,Fasoulakis2018,True,,arXiv,Not available,Jamming in multiple independent Gaussian channels as a game,a5b7431992c93c17c9030f99e6a9fff2,http://arxiv.org/abs/1807.09749v1
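Waterfilling itself reduces to a one-dimensional search for the common water level. The sketch below allocates a power budget across channels by bisection; in the equilibrium described above, the transmitter would run this against the noise-plus-jamming levels and the jammer against the noise levels alone (the numeric levels here are hypothetical, not from the paper).

```python
def waterfill(noise, power, tol=1e-12):
    """Water-filling: split `power` across channels with base levels `noise`
    so that noise[i] + p[i] equals a common water level wherever p[i] > 0."""
    lo, hi = min(noise), max(noise) + power
    while hi - lo > tol:
        level = (lo + hi) / 2
        used = sum(max(level - n, 0.0) for n in noise)
        if used > power:
            hi = level          # level too high: budget exceeded
        else:
            lo = level          # level too low: budget not exhausted
    level = (lo + hi) / 2
    return [max(level - n, 0.0) for n in noise]

# Deepest (least noisy) channel gets the most power; total is conserved.
p = waterfill([1.0, 2.0, 4.0], 3.0)
```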
16976," We study the problem of \emph{jamming} in multiple independent \emph{Gaussian
channels} as a zero-sum game. We show that in the unique Nash equilibrium of
the game the best-response strategy of the transmitter is the
\emph{waterfilling} to the sum of the jamming and the noise power in each
channel and the best-response strategy of the jammer is the \emph{waterfilling}
only to the noise power.
",apostolos traganitis,,2018.0,,arXiv,Fasoulakis2018,True,,arXiv,Not available,Jamming in multiple independent Gaussian channels as a game,a5b7431992c93c17c9030f99e6a9fff2,http://arxiv.org/abs/1807.09749v1
16977," We study the problem of \emph{jamming} in multiple independent \emph{Gaussian
channels} as a zero-sum game. We show that in the unique Nash equilibrium of
the game the best-response strategy of the transmitter is the
\emph{waterfilling} to the sum of the jamming and the noise power in each
channel and the best-response strategy of the jammer is the \emph{waterfilling}
only to the noise power.
",anthony ephremides,,2018.0,,arXiv,Fasoulakis2018,True,,arXiv,Not available,Jamming in multiple independent Gaussian channels as a game,a5b7431992c93c17c9030f99e6a9fff2,http://arxiv.org/abs/1807.09749v1
16978," We explore the possibility that physical phenomena arising from interacting
multi-particle systems can be usefully interpreted in terms of multi-player
games. We show how non-cooperative phenomena can emerge from Ising
Hamiltonians, even though the individual spins behave cooperatively. Our
findings establish a mapping between two fundamental models from condensed
matter physics and game theory.
",chiu lee,,2002.0,,arXiv,Lee2002,True,,arXiv,Not available,Interacting many-body systems as non-cooperative games,cb53f0b78842a544fea17ae9c788731e,http://arxiv.org/abs/cond-mat/0212505v1
16979," We explore the possibility that physical phenomena arising from interacting
multi-particle systems can be usefully interpreted in terms of multi-player
games. We show how non-cooperative phenomena can emerge from Ising
Hamiltonians, even though the individual spins behave cooperatively. Our
findings establish a mapping between two fundamental models from condensed
matter physics and game theory.
",neil johnson,,2002.0,,arXiv,Lee2002,True,,arXiv,Not available,Interacting many-body systems as non-cooperative games,cb53f0b78842a544fea17ae9c788731e,http://arxiv.org/abs/cond-mat/0212505v1
16980," We build off the game NimG to create a version named Neighboring Nim. By
reducing from Geography, we show that this game is PSPACE-hard. The games
created by the reduction share strong similarities with Undirected (Vertex)
Geography and regular Nim, both of which are in P. We show how to construct
PSPACE-complete versions with nim heaps *1 and *2. This application of graphs
can be used as a form of game sum with any games, not only Nim.
",kyle burke,,2011.0,,arXiv,Burke2011,True,,arXiv,Not available,A PSPACE-complete Graph Nim,0c893f1083d7686795b9bf5083a215c6,http://arxiv.org/abs/1101.1507v3
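For background on the heaps `*1` and `*2` mentioned above: the Grundy value of a single Nim heap can be computed from first principles with the mex (minimum excludant) rule, and values of game sums combine by XOR. A small sketch:

```python
def mex(s):
    """Minimum excludant: the smallest nonnegative integer not in s."""
    g = 0
    while g in s:
        g += 1
    return g

def grundy_heap(n):
    """Grundy value of a Nim heap of size n, computed by the mex recursion
    (it comes out to n itself, the classical Sprague-Grundy result)."""
    memo = [0] * (n + 1)
    for k in range(1, n + 1):
        # from a heap of size k you may move to any size k - t, t >= 1
        memo[k] = mex({memo[k - t] for t in range(1, k + 1)})
    return memo[n]
```

A position that is a sum of heaps is a previous-player win exactly when the XOR of the Grundy values is 0; e.g. `*1 + *2` has XOR value 3, so it is a win for the player to move.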
16981," We build off the game NimG to create a version named Neighboring Nim. By
reducing from Geography, we show that this game is PSPACE-hard. The games
created by the reduction share strong similarities with Undirected (Vertex)
Geography and regular Nim, both of which are in P. We show how to construct
PSPACE-complete versions with nim heaps *1 and *2. This application of graphs
can be used as a form of game sum with any games, not only Nim.
",olivia george,,2011.0,,arXiv,Burke2011,True,,arXiv,Not available,A PSPACE-complete Graph Nim,0c893f1083d7686795b9bf5083a215c6,http://arxiv.org/abs/1101.1507v3
16982," The price of anarchy (PoA) has been widely used in static games to quantify
the loss of efficiency due to noncooperation. Here, we extend this concept to a
general differential games framework. In addition, we introduce the price of
information (PoI) to characterize comparative game performances under different
information structures, as well as the price of cooperation to capture the
extent of benefit or loss a player accrues as a result of altruistic behavior.
We further characterize PoA and PoI for a class of scalar linear quadratic
differential games under open-loop and closed-loop feedback information
structures. We also obtain some explicit bounds on these indices in a large
population regime.
",tamer basar,,2011.0,,arXiv,Basar2011,True,,arXiv,Not available,"Prices of Anarchy, Information, and Cooperation in Differential Games",b99994dc98f6268f776fd0f00d7a22f5,http://arxiv.org/abs/1103.2579v1
16983," Flip a coin repeatedly, and stop whenever you want. Your payoff is the
proportion of heads, and you wish to maximize this payoff in expectation. This
so-called Chow-Robbins game is amenable to computer analysis, but while
simple-minded number crunching can show that it is best to continue in a given
position, establishing rigorously that stopping is optimal seems at first sight
to require ""backward induction from infinity"". We establish a simple upper
bound on the expected payoff in a given position, allowing efficient and
rigorous computer analysis of positions early in the game. In particular we
confirm that with 5 heads and 3 tails, stopping is optimal.
",olle haggstrom,,2012.0,,arXiv,Häggström2012,True,,arXiv,Not available,Rigorous computer analysis of the Chow-Robbins game,c357af20bf10a66857534881c09f0326,http://arxiv.org/abs/1201.0626v1
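The "number crunching" half of this analysis is a finite-horizon backward induction: truncating the game after a fixed number of further flips gives a lower bound on each position's value, which can certify that continuing is optimal but never that stopping is (that gap is exactly what the paper's upper bound closes). A sketch:

```python
from functools import lru_cache

def value(h, t, horizon=60):
    """Lower bound on the value of position (h heads, t tails): the game is
    truncated after `horizon` further flips, at which point you must stop."""
    @lru_cache(maxsize=None)
    def v(h, t, left):
        stop = h / (h + t) if h + t else 0.0
        if left == 0:
            return stop
        cont = 0.5 * (v(h + 1, t, left - 1) + v(h, t + 1, left - 1))
        return max(stop, cont)
    return v(h, t, horizon)

# At 2 heads / 3 tails the bound already certifies that continuing beats
# stopping; at 5 heads / 3 tails it cannot rule out stopping (and the
# paper proves stopping is indeed optimal there).
assert value(2, 3) > 2 / 5
```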
16984," The price of anarchy (PoA) has been widely used in static games to quantify
the loss of efficiency due to noncooperation. Here, we extend this concept to a
general differential games framework. In addition, we introduce the price of
information (PoI) to characterize comparative game performances under different
information structures, as well as the price of cooperation to capture the
extent of benefit or loss a player accrues as a result of altruistic behavior.
We further characterize PoA and PoI for a class of scalar linear quadratic
differential games under open-loop and closed-loop feedback information
structures. We also obtain some explicit bounds on these indices in a large
population regime.
",quanyan zhu,,2011.0,,arXiv,Basar2011,True,,arXiv,Not available,"Prices of Anarchy, Information, and Cooperation in Differential Games",b99994dc98f6268f776fd0f00d7a22f5,http://arxiv.org/abs/1103.2579v1
16985," We investigate the interrelation between graph searching games and games with
imperfect information. As a key consequence, we obtain that parity games with
bounded imperfect information can be solved in PTIME on graphs of bounded
DAG-width which generalizes several results for parity games on graphs of
bounded complexity. We use a new concept of graph searching where several cops
try to catch multiple robbers instead of just a single robber. The main
technical result is that the number of cops needed to catch r robbers
monotonously is at most r times the DAG-width of the graph. We also explore
aspects of this new concept as a refinement of directed path-width which
accentuates its connection to the concept of imperfect information.
",bernd puchala,,2011.0,,arXiv,Puchala2011,True,,arXiv,Not available,"Graph Searching, Parity Games and Imperfect Information",2b81d0cdea051c92b80715605ef2ec44,http://arxiv.org/abs/1110.5575v1
16986," We investigate the interrelation between graph searching games and games with
imperfect information. As a key consequence, we obtain that parity games with
bounded imperfect information can be solved in PTIME on graphs of bounded
DAG-width which generalizes several results for parity games on graphs of
bounded complexity. We use a new concept of graph searching where several cops
try to catch multiple robbers instead of just a single robber. The main
technical result is that the number of cops needed to catch r robbers
monotonously is at most r times the DAG-width of the graph. We also explore
aspects of this new concept as a refinement of directed path-width which
accentuates its connection to the concept of imperfect information.
",roman rabinovich,,2011.0,,arXiv,Puchala2011,True,,arXiv,Not available,"Graph Searching, Parity Games and Imperfect Information",2b81d0cdea051c92b80715605ef2ec44,http://arxiv.org/abs/1110.5575v1
16987," We consider transformations of normal form games by binding preplay offers of
players for payments of utility to other players conditional on them playing
designated in the offers strategies. The game-theoretic effect of such preplay
offers is transformation of the payoff matrix of the game by transferring
payoffs between players. Here we analyze and completely characterize the
possible transformations of the payoff matrix of a normal form game by sets of
preplay offers.
",valentin goranko,,2012.0,,arXiv,Goranko2012,True,,arXiv,Not available,"Transformations of normal form games by preplay offers for payments
among players",c17804668df80a1f226d35deeb45bdcd,http://arxiv.org/abs/1208.1758v1
16988," In two player bi-matrix games with partial monitoring, actions played are not
observed, only some messages are received. Those games satisfy a crucial
property of usual bi-matrix games: there are only a finite number of required
(mixed) best replies. This is very helpful while investigating sets of Nash
equilibria: for instance, in some cases, it allows relating it to the set of
equilibria of some auxiliary game with full monitoring. In the general case,
the Lemke-Howson algorithm is extended and, under some genericity assumption,
its outputs are Nash equilibria of the original game. As a by-product, we obtain
an oddness property on their number.
",vianney perchet,,2013.0,,arXiv,Perchet2013,True,,arXiv,Not available,"Nash equilibria with partial monitoring; Computation and Lemke-Howson
algorithm",c601115869f726d680761f795971fa9b,http://arxiv.org/abs/1301.2662v1
16989," We initiate a study of random instances of nonlocal games. We show that
quantum strategies are better than classical for almost any 2-player XOR game.
More precisely, for large n, the entangled value of a random 2-player XOR game
with n questions to every player is at least 1.21... times the classical value,
for 1-o(1) fraction of all 2-player XOR games.
",andris ambainis,,2011.0,,arXiv,Ambainis2011,True,,arXiv,Not available,Quantum strategies are better than classical in almost any XOR game,5b1f74386050c072c0b10551211d760f,http://arxiv.org/abs/1112.3330v1
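The classical value of a small XOR game can be checked by brute force, since deterministic strategies suffice for the classical optimum. The sketch below uses the winning-probability convention (the 1.21 factor in the abstract refers to the bias-based value, a different normalization) and sanity-checks against the CHSH game:

```python
from itertools import product

def classical_value(win, n):
    """Best classical winning probability for a 2-player XOR game: questions
    x, y uniform on range(n); deterministic answers a(x), b(y) in {0, 1};
    the players win iff a(x) ^ b(y) == win[x][y]."""
    best = 0.0
    for a in product((0, 1), repeat=n):       # Alice's answer per question
        for b in product((0, 1), repeat=n):   # Bob's answer per question
            wins = sum(a[x] ^ b[y] == win[x][y]
                       for x in range(n) for y in range(n))
            best = max(best, wins / n ** 2)
    return best

# CHSH: win iff a XOR b == x AND y; classically at most 3 of the 4 cases.
chsh = [[0, 0], [0, 1]]
assert classical_value(chsh, 2) == 0.75
```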
16990," We initiate a study of random instances of nonlocal games. We show that
quantum strategies are better than classical for almost any 2-player XOR game.
More precisely, for large n, the entangled value of a random 2-player XOR game
with n questions to every player is at least 1.21... times the classical value,
for 1-o(1) fraction of all 2-player XOR games.
",arturs backurs,,2011.0,,arXiv,Ambainis2011,True,,arXiv,Not available,Quantum strategies are better than classical in almost any XOR game,5b1f74386050c072c0b10551211d760f,http://arxiv.org/abs/1112.3330v1
16991," We initiate a study of random instances of nonlocal games. We show that
quantum strategies are better than classical for almost any 2-player XOR game.
More precisely, for large n, the entangled value of a random 2-player XOR game
with n questions to every player is at least 1.21... times the classical value,
for 1-o(1) fraction of all 2-player XOR games.
",kaspars balodis,,2011.0,,arXiv,Ambainis2011,True,,arXiv,Not available,Quantum strategies are better than classical in almost any XOR game,5b1f74386050c072c0b10551211d760f,http://arxiv.org/abs/1112.3330v1
16992," We initiate a study of random instances of nonlocal games. We show that
quantum strategies are better than classical for almost any 2-player XOR game.
More precisely, for large n, the entangled value of a random 2-player XOR game
with n questions to every player is at least 1.21... times the classical value,
for 1-o(1) fraction of all 2-player XOR games.
",dmitry kravcenko,,2011.0,,arXiv,Ambainis2011,True,,arXiv,Not available,Quantum strategies are better than classical in almost any XOR game,5b1f74386050c072c0b10551211d760f,http://arxiv.org/abs/1112.3330v1
16993," We initiate a study of random instances of nonlocal games. We show that
quantum strategies are better than classical for almost any 2-player XOR game.
More precisely, for large n, the entangled value of a random 2-player XOR game
with n questions to every player is at least 1.21... times the classical value,
for 1-o(1) fraction of all 2-player XOR games.
",raitis ozols,,2011.0,,arXiv,Ambainis2011,True,,arXiv,Not available,Quantum strategies are better than classical in almost any XOR game,5b1f74386050c072c0b10551211d760f,http://arxiv.org/abs/1112.3330v1
16994," Flip a coin repeatedly, and stop whenever you want. Your payoff is the
proportion of heads, and you wish to maximize this payoff in expectation. This
so-called Chow-Robbins game is amenable to computer analysis, but while
simple-minded number crunching can show that it is best to continue in a given
position, establishing rigorously that stopping is optimal seems at first sight
to require ""backward induction from infinity"". We establish a simple upper
bound on the expected payoff in a given position, allowing efficient and
rigorous computer analysis of positions early in the game. In particular we
confirm that with 5 heads and 3 tails, stopping is optimal.
",johan wastlund,,2012.0,,arXiv,Häggström2012,True,,arXiv,Not available,Rigorous computer analysis of the Chow-Robbins game,c357af20bf10a66857534881c09f0326,http://arxiv.org/abs/1201.0626v1
16995," We initiate a study of random instances of nonlocal games. We show that
quantum strategies are better than classical for almost any 2-player XOR game.
More precisely, for large n, the entangled value of a random 2-player XOR game
with n questions to every player is at least 1.21... times the classical value,
for 1-o(1) fraction of all 2-player XOR games.
",juris smotrovs,,2011.0,,arXiv,Ambainis2011,True,,arXiv,Not available,Quantum strategies are better than classical in almost any XOR game,5b1f74386050c072c0b10551211d760f,http://arxiv.org/abs/1112.3330v1
16996," We initiate a study of random instances of nonlocal games. We show that
quantum strategies are better than classical for almost any 2-player XOR game.
More precisely, for large n, the entangled value of a random 2-player XOR game
with n questions to every player is at least 1.21... times the classical value,
for 1-o(1) fraction of all 2-player XOR games.
",madars virza,,2011.0,,arXiv,Ambainis2011,True,,arXiv,Not available,Quantum strategies are better than classical in almost any XOR game,5b1f74386050c072c0b10551211d760f,http://arxiv.org/abs/1112.3330v1
16997," For matrix games we study how small nonzero probability must be used in
optimal strategies. We show that for nxn win-lose-draw games (i.e. (-1,0,1)
matrix games) nonzero probabilities smaller than n^{-O(n)} are never needed. We
also construct an explicit nxn win-lose game such that the unique optimal
strategy uses a nonzero probability as small as n^{-Omega(n)}. This is done by
constructing an explicit (-1,1) nonsingular nxn matrix, for which the inverse
has only nonnegative entries and where some of the entries are of value
n^{Omega(n)}.
",kristoffer hansen,,2012.0,,arXiv,Hansen2012,True,,arXiv,Not available,Patience of Matrix Games,715677d3343d524a5b893f067d272590,http://arxiv.org/abs/1206.1751v1
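As a toy illustration of the probabilities at issue: a 2x2 zero-sum game with no saddle point has a closed-form optimal mixed strategy, and exact rational arithmetic makes the smallest nonzero probability (the game's "patience") explicit. A sketch, assuming the no-saddle-point case:

```python
from fractions import Fraction

def solve_2x2(A):
    """Exact value and row player's optimal mixed strategy for a 2x2
    zero-sum matrix game with no saddle point (row player maximizes)."""
    (a, b), (c, d) = [[Fraction(x) for x in row] for row in A]
    denom = a - b - c + d
    p = (d - c) / denom                 # probability of playing row 1
    value = (a * d - b * c) / denom     # value of the game
    return p, value

# Matching pennies: payoffs +/-1, value 0, optimal play is 50/50.
p, v = solve_2x2([[1, -1], [-1, 1]])
```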
16998," For matrix games we study how small nonzero probability must be used in
optimal strategies. We show that for nxn win-lose-draw games (i.e. (-1,0,1)
matrix games) nonzero probabilities smaller than n^{-O(n)} are never needed. We
also construct an explicit nxn win-lose game such that the unique optimal
strategy uses a nonzero probability as small as n^{-Omega(n)}. This is done by
constructing an explicit (-1,1) nonsingular nxn matrix, for which the inverse
has only nonnegative entries and where some of the entries are of value
n^{Omega(n)}.
",rasmus ibsen-jensen,,2012.0,,arXiv,Hansen2012,True,,arXiv,Not available,Patience of Matrix Games,715677d3343d524a5b893f067d272590,http://arxiv.org/abs/1206.1751v1
16999," For matrix games we study how small nonzero probability must be used in
optimal strategies. We show that for nxn win-lose-draw games (i.e. (-1,0,1)
matrix games) nonzero probabilities smaller than n^{-O(n)} are never needed. We
also construct an explicit nxn win-lose game such that the unique optimal
strategy uses a nonzero probability as small as n^{-Omega(n)}. This is done by
constructing an explicit (-1,1) nonsingular nxn matrix, for which the inverse
has only nonnegative entries and where some of the entries are of value
n^{Omega(n)}.
",vladimir podolskii,,2012.0,,arXiv,Hansen2012,True,,arXiv,Not available,Patience of Matrix Games,715677d3343d524a5b893f067d272590,http://arxiv.org/abs/1206.1751v1
17000," For matrix games we study how small nonzero probability must be used in
optimal strategies. We show that for nxn win-lose-draw games (i.e. (-1,0,1)
matrix games) nonzero probabilities smaller than n^{-O(n)} are never needed. We
also construct an explicit nxn win-lose game such that the unique optimal
strategy uses a nonzero probability as small as n^{-Omega(n)}. This is done by
constructing an explicit (-1,1) nonsingular nxn matrix, for which the inverse
has only nonnegative entries and where some of the entries are of value
n^{Omega(n)}.
",elias tsigaridas,,2012.0,,arXiv,Hansen2012,True,,arXiv,Not available,Patience of Matrix Games,715677d3343d524a5b893f067d272590,http://arxiv.org/abs/1206.1751v1
17001," We investigate newsvendor games whose payoff function is uncertain due to
ambiguity in demand distributions. We discuss the concept of stability under
uncertainty and introduce solution concepts for robust cooperative games which
could be applied to these newsvendor games. Properties and numerical schemes
for finding core solutions of robust newsvendor games are presented.
",xuan doan,,2014.0,,arXiv,Doan2014,True,,arXiv,Not available,Robust Newsvendor Games with Ambiguity in Demand Distributions,82faf9640af07a5781d5d9941d484c28,http://arxiv.org/abs/1403.5906v3
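For background, the classical single-firm newsvendor quantity (known demand distribution, no ambiguity) is the critical-fractile quantile; the robust cooperative games above generalize settings built on this. A sketch assuming normally distributed demand with hypothetical parameters:

```python
from statistics import NormalDist

def newsvendor_quantity(mu, sigma, price, cost, salvage=0.0):
    """Critical-fractile order quantity for Normal(mu, sigma) demand:
    order the demand quantile at underage / (underage + overage)."""
    underage = price - cost        # margin lost per unit of unmet demand
    overage = cost - salvage       # loss per leftover unit
    ratio = underage / (underage + overage)
    return NormalDist(mu, sigma).inv_cdf(ratio)

# Hypothetical numbers: a high margin pushes the order above mean demand.
q = newsvendor_quantity(mu=100, sigma=20, price=10, cost=4, salvage=2)
```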
17002," We investigate newsvendor games whose payoff function is uncertain due to
ambiguity in demand distributions. We discuss the concept of stability under
uncertainty and introduce solution concepts for robust cooperative games which
could be applied to these newsvendor games. Properties and numerical schemes
for finding core solutions of robust newsvendor games are presented.
",tri-dung nguyen,,2014.0,,arXiv,Doan2014,True,,arXiv,Not available,Robust Newsvendor Games with Ambiguity in Demand Distributions,82faf9640af07a5781d5d9941d484c28,http://arxiv.org/abs/1403.5906v3
17003," Coevolutionary games cast players that may change their strategies as well as
their networks of interaction. In this paper a framework is introduced for
describing coevolutionary game dynamics by landscape models. It is shown that
coevolutionary games invoke dynamic landscapes. Numerical experiments are shown
for a prisoner's dilemma (PD) and a snow drift (SD) game that both use either
birth-death (BD) or death-birth (DB) strategy updating. The resulting
landscapes are analyzed with respect to modality and ruggedness.
",hendrik richter,,2016.0,,"Proc. IEEE Congress on Evolutionary Computation, IEEE CEC 2016,
(Ed.: Y. S. Ong), IEEE Press, Piscataway, NJ, 2016, 610-616",Richter2016,True,,arXiv,Not available,Analyzing coevolutionary games with dynamic fitness landscapes,61f7ba2f6f23c7661c58ac079b8d0dc1,http://arxiv.org/abs/1603.06374v1
17004," We present several new characterizations of correlated equilibria in games
with continuous utility functions. These have the advantage of being more
computationally and analytically tractable than the standard definition in
terms of departure functions. We use these characterizations to construct
effective algorithms for approximating a single correlated equilibrium or the
entire set of correlated equilibria of a game with polynomial utility
functions.
",noah stein,,2008.0,10.1016/j.geb.2010.04.004,"Games and Economic Behavior, Vol. 71, No. 2, March 2011, Pages
436-455",Stein2008,True,,arXiv,Not available,"Correlated Equilibria in Continuous Games: Characterization and
Computation",e6aaff4454659720f735a84c91724451,http://arxiv.org/abs/0812.4279v2
17005," Player ranking can be used to determine the quality of the contributions of a
player to a collaborative community. However, collaborative games with no
explicit objectives do not support player ranking, as there is no metric to
measure the quality of player contributions. An implicit objective of such
communities is not being disruptive towards other players. In this paper, we
propose a parameterizable approach for real-time player ranking in
collaborative games with no explicit objectives. Our method computes a ranking
by applying a simple heuristic community quality function. We also demonstrate
the capabilities of our approach by applying several parameterizations of it to
a case study and comparing the obtained results.
",luis quesada,,2012.0,,arXiv,Quesada2012,True,,arXiv,Not available,"Community-Quality-Based Player Ranking in Collaborative Games with no
Explicit Objectives",aa2bf60c15664bae59cbf092b4c21ac0,http://arxiv.org/abs/1205.3180v1
17006," We present several new characterizations of correlated equilibria in games
with continuous utility functions. These have the advantage of being more
computationally and analytically tractable than the standard definition in
terms of departure functions. We use these characterizations to construct
effective algorithms for approximating a single correlated equilibrium or the
entire set of correlated equilibria of a game with polynomial utility
functions.
",pablo parrilo,,2008.0,10.1016/j.geb.2010.04.004,"Games and Economic Behavior, Vol. 71, No. 2, March 2011, Pages
436-455",Stein2008,True,,arXiv,Not available,"Correlated Equilibria in Continuous Games: Characterization and
Computation",e6aaff4454659720f735a84c91724451,http://arxiv.org/abs/0812.4279v2
17007," We present several new characterizations of correlated equilibria in games
with continuous utility functions. These have the advantage of being more
computationally and analytically tractable than the standard definition in
terms of departure functions. We use these characterizations to construct
effective algorithms for approximating a single correlated equilibrium or the
entire set of correlated equilibria of a game with polynomial utility
functions.
",asuman ozdaglar,,2008.0,10.1016/j.geb.2010.04.004,"Games and Economic Behavior, Vol. 71, No. 2, March 2011, Pages
436-455",Stein2008,True,,arXiv,Not available,"Correlated Equilibria in Continuous Games: Characterization and
Computation",e6aaff4454659720f735a84c91724451,http://arxiv.org/abs/0812.4279v2
17008," We propose a new all-pay auction format in which risk-loving bidders pay a
constant fee each time they bid for an object whose monetary value is common
knowledge among the bidders, and bidding fees are the only source of benefit
for the seller. We show that for the proposed model there exists a {unique}
Symmetric Subgame Perfect Equilibrium (SSPE). The characterized SSPE is
stationary when re-entry in the auction is allowed, and it is Markov perfect
when re-entry is forbidden. Furthermore, we fully characterize the expected
revenue of the seller. Generally, with or without re-entry, it is more
beneficial for the seller to choose $v$ (value of the object), $s$ (sale
price), and $c$ (bidding fee) such that $\frac{v-s}{c}$ becomes sufficiently
large. In particular, when re-entry is permitted: the expected revenue of the
seller is \emph{independent} of the number of bidders, decreasing in the sale
price, increasing in the value of the object, and decreasing in the bidding
fee; Moreover, the seller's revenue is equal to the value of the object when
players are risk neutral, and it is strictly greater than the value of the
object when bidders are risk-loving. We further show that allowing re-entry can
be important in practice, because if the seller were to run such an auction
without allowing re-entry, the auction would last a long time, and for almost
all of its duration have only two remaining players. Thus, the seller's revenue
relies on those two players being willing to participate, without any breaks,
in an auction that might last for thousands of rounds.
",ali kakhbod,,2011.0,,arXiv,Kakhbod2011,True,,arXiv,Not available,Resource allocation with costly participation,572a491e2875b8bb2a1246cdc3ba2dae,http://arxiv.org/abs/1108.2018v6
17009," We develop a general duality-theory framework for revenue maximization in
additive Bayesian auctions. The framework extends linear programming duality
and complementarity to constraints with partial derivatives. The dual system
reveals the geometric nature of the problem and highlights its connection with
the theory of bipartite graph matchings. We demonstrate the power of the
framework by applying it to a multiple-good monopoly setting where the buyer
has uniformly distributed valuations for the items, the canonical long-standing
open problem in the area. We propose a deterministic selling mechanism called
Straight-Jacket Auction (SJA), which we prove to be exactly optimal for up to 6
items, and conjecture its optimality for any number of goods. The duality
framework is used not only for proving optimality, but perhaps more importantly
for deriving the optimal mechanism itself; as a result, SJA is defined by
natural geometric constraints.
",yiannis giannakopoulos,,2014.0,,arXiv,Giannakopoulos2014,True,,arXiv,Not available,Duality and Optimality of Auctions for Uniform Distributions,6ff74bf55d00aebc8c5390d70309be5f,http://arxiv.org/abs/1404.2329v4
17010," We develop a general duality-theory framework for revenue maximization in
additive Bayesian auctions. The framework extends linear programming duality
and complementarity to constraints with partial derivatives. The dual system
reveals the geometric nature of the problem and highlights its connection with
the theory of bipartite graph matchings. We demonstrate the power of the
framework by applying it to a multiple-good monopoly setting where the buyer
has uniformly distributed valuations for the items, the canonical long-standing
open problem in the area. We propose a deterministic selling mechanism called
Straight-Jacket Auction (SJA), which we prove to be exactly optimal for up to 6
items, and conjecture its optimality for any number of goods. The duality
framework is used not only for proving optimality, but perhaps more importantly
for deriving the optimal mechanism itself; as a result, SJA is defined by
natural geometric constraints.
",elias koutsoupias,,2014.0,,arXiv,Giannakopoulos2014,True,,arXiv,Not available,Duality and Optimality of Auctions for Uniform Distributions,6ff74bf55d00aebc8c5390d70309be5f,http://arxiv.org/abs/1404.2329v4
17011," The idea of this paper is an advanced game concept. This concept is expected
to model non-monetary bilateral cooperations between self-interested agents.
Such non-monetary cases are social cooperations like allocation of high level
jobs or sexual relationships among humans. In a barter double auction, there is
a large number of agents. Every agent has a vector of parameters which specifies
his demand and a vector which specifies his offer. Two agents can achieve a
commitment through barter exchange. The subjective satisfaction level (a number
between 0% and 100%) of an agent increases as the distance between his demand
and the accepted offer shrinks. This paper introduces some facets of this
complex game concept.
",rustam tagiew,,2009.0,,arXiv,Tagiew2009,True,,arXiv,Not available,Towards Barter Double Auction as Model for Bilateral Social Cooperations,c981e51ee87f644962b38c95e56cff71,http://arxiv.org/abs/0905.3709v1
17012," A fundamental result in mechanism design theory, the so-called revelation
principle, asserts that for many questions concerning the existence of
mechanisms with a given outcome one can restrict attention to truthful direct
revelation-mechanisms. In practice, however, many mechanisms use a restricted
message space. This motivates the study of the tradeoffs involved in choosing
simplified mechanisms, which can sometimes bring benefits in precluding bad or
promoting good equilibria, and other times impose costs on welfare and revenue.
We study the simplicity-expressiveness tradeoff in two representative settings,
sponsored search auctions and combinatorial auctions, each being a canonical
example for complete information and incomplete information analysis,
respectively. We observe that the amount of information available to the agents
plays an important role for the tradeoff between simplicity and expressiveness.
",paul dutting,,2011.0,,arXiv,Dütting2011,True,,arXiv,Not available,Simplicity-Expressiveness Tradeoffs in Mechanism Design,4d3cdb36bcd896bd8517ceddd06bfa0c,http://arxiv.org/abs/1102.3632v1
17013," A fundamental result in mechanism design theory, the so-called revelation
principle, asserts that for many questions concerning the existence of
mechanisms with a given outcome one can restrict attention to truthful direct
revelation-mechanisms. In practice, however, many mechanisms use a restricted
message space. This motivates the study of the tradeoffs involved in choosing
simplified mechanisms, which can sometimes bring benefits in precluding bad or
promoting good equilibria, and other times impose costs on welfare and revenue.
We study the simplicity-expressiveness tradeoff in two representative settings,
sponsored search auctions and combinatorial auctions, each being a canonical
example for complete information and incomplete information analysis,
respectively. We observe that the amount of information available to the agents
plays an important role for the tradeoff between simplicity and expressiveness.
",felix fischer,,2011.0,,arXiv,Dütting2011,True,,arXiv,Not available,Simplicity-Expressiveness Tradeoffs in Mechanism Design,4d3cdb36bcd896bd8517ceddd06bfa0c,http://arxiv.org/abs/1102.3632v1
17014," A fundamental result in mechanism design theory, the so-called revelation
principle, asserts that for many questions concerning the existence of
mechanisms with a given outcome one can restrict attention to truthful direct
revelation-mechanisms. In practice, however, many mechanisms use a restricted
message space. This motivates the study of the tradeoffs involved in choosing
simplified mechanisms, which can sometimes bring benefits in precluding bad or
promoting good equilibria, and other times impose costs on welfare and revenue.
We study the simplicity-expressiveness tradeoff in two representative settings,
sponsored search auctions and combinatorial auctions, each being a canonical
example for complete information and incomplete information analysis,
respectively. We observe that the amount of information available to the agents
plays an important role for the tradeoff between simplicity and expressiveness.
",david parkes,,2011.0,,arXiv,Dütting2011,True,,arXiv,Not available,Simplicity-Expressiveness Tradeoffs in Mechanism Design,4d3cdb36bcd896bd8517ceddd06bfa0c,http://arxiv.org/abs/1102.3632v1
17015," We present an original theorem in auction theory: it specifies general
conditions under which the sum of the payments of all bidders is necessarily
not identically zero, and more generally not constant. Moreover, it explicitly
supplies a construction for a finite minimal set of possible bids on which such
a sum is not constant. In particular, this theorem applies to the important
case of a second-price Vickrey auction, where it reduces to a basic result of
which a novel proof is given. To enhance the confidence in this new theorem, it
has been formalized in Isabelle/HOL: the main results and definitions of the
formal proof are reproduced here in common mathematical language, and are
accompanied by an informal discussion about the underlying ideas.
",marco caminati,,2014.0,,arXiv,Caminati2014,True,,arXiv,Not available,Budget Imbalance Criteria for Auctions: A Formalized Theorem,13e50c2afc4150b74b88760002978e9e,http://arxiv.org/abs/1412.0542v1
17016," Player ranking can be used to determine the quality of the contributions of a
player to a collaborative community. However, collaborative games with no
explicit objectives do not support player ranking, as there is no metric to
measure the quality of player contributions. An implicit objective of such
communities is not being disruptive towards other players. In this paper, we
propose a parameterizable approach for real-time player ranking in
collaborative games with no explicit objectives. Our method computes a ranking
by applying a simple heuristic community quality function. We also demonstrate
the capabilities of our approach by applying several parameterizations of it to
a case study and comparing the obtained results.
",pablo villacorta,,2012.0,,arXiv,Quesada2012,True,,arXiv,Not available,"Community-Quality-Based Player Ranking in Collaborative Games with no
Explicit Objectives",aa2bf60c15664bae59cbf092b4c21ac0,http://arxiv.org/abs/1205.3180v1
17017," We present an original theorem in auction theory: it specifies general
conditions under which the sum of the payments of all bidders is necessarily
not identically zero, and more generally not constant. Moreover, it explicitly
supplies a construction for a finite minimal set of possible bids on which such
a sum is not constant. In particular, this theorem applies to the important
case of a second-price Vickrey auction, where it reduces to a basic result of
which a novel proof is given. To enhance the confidence in this new theorem, it
has been formalized in Isabelle/HOL: the main results and definitions of the
formal proof are reproduced here in common mathematical language, and are
accompanied by an informal discussion about the underlying ideas.
",manfred kerber,,2014.0,,arXiv,Caminati2014,True,,arXiv,Not available,Budget Imbalance Criteria for Auctions: A Formalized Theorem,13e50c2afc4150b74b88760002978e9e,http://arxiv.org/abs/1412.0542v1
17018," We present an original theorem in auction theory: it specifies general
conditions under which the sum of the payments of all bidders is necessarily
not identically zero, and more generally not constant. Moreover, it explicitly
supplies a construction for a finite minimal set of possible bids on which such
a sum is not constant. In particular, this theorem applies to the important
case of a second-price Vickrey auction, where it reduces to a basic result of
which a novel proof is given. To enhance the confidence in this new theorem, it
has been formalized in Isabelle/HOL: the main results and definitions of the
formal proof are reproduced here in common mathematical language, and are
accompanied by an informal discussion about the underlying ideas.
",colin rowat,,2014.0,,arXiv,Caminati2014,True,,arXiv,Not available,Budget Imbalance Criteria for Auctions: A Formalized Theorem,13e50c2afc4150b74b88760002978e9e,http://arxiv.org/abs/1412.0542v1
17019," Real-time bidding (RTB) has become a new norm in display advertising where a
publisher uses auction models to sell online user's page view to advertisers.
In RTB, the ad with the highest bid price will be displayed to the user. This
ad displaying process is biased towards the publisher. In fact, the benefits of
the advertiser and the user have rarely been discussed. Towards global
optimization, we argue that all stakeholders' benefits should be considered. To
this end, we propose a novel computational framework in which multimedia
techniques and auction theory are integrated. This doctoral research mainly
focuses on 1)
figuring out the multimedia metrics that affect the effectiveness of online
advertising; 2) integrating the discovered metrics into the RTB framework. We
have presented some preliminary results and discussed the future directions.
",xiang chen,,2018.0,10.1145/3123266.3123966,arXiv,Chen2018,True,,arXiv,Not available,"Towards Global Optimization in Display Advertising by Integrating
Multimedia Metrics with Real-Time Bidding",f07a7b8e69e075d3414f0b9e2c29c019,http://arxiv.org/abs/1805.08632v1
17020," We provide a Polynomial Time Approximation Scheme (PTAS) for the Bayesian
optimal multi-item multi-bidder auction problem under two conditions. First,
bidders are independent, have additive valuations and are from the same
population. Second, every bidder's value distributions of items are independent
but not necessarily identical monotone hazard rate (MHR) distributions. For
non-i.i.d. bidders, we also provide a PTAS when the number of bidders is small.
Prior to our work, even for a single bidder, only constant factor
approximations were known.
Another appealing feature of our mechanism is the simple allocation rule.
Indeed, the mechanism we use is either the second-price auction with reserve
price on every item individually, or VCG allocation with a few outlying items
that require additional treatment. It is surprising that such simple
allocation rules suffice to obtain nearly optimal revenue.
",yang cai,,2012.0,,arXiv,Cai2012,True,,arXiv,Not available,Simple and Nearly Optimal Multi-Item Auctions,23a06439d9a36630ce3595f638453e8a,http://arxiv.org/abs/1210.3560v2
17021," We provide a Polynomial Time Approximation Scheme (PTAS) for the Bayesian
optimal multi-item multi-bidder auction problem under two conditions. First,
bidders are independent, have additive valuations and are from the same
population. Second, every bidder's value distributions of items are independent
but not necessarily identical monotone hazard rate (MHR) distributions. For
non-i.i.d. bidders, we also provide a PTAS when the number of bidders is small.
Prior to our work, even for a single bidder, only constant factor
approximations were known.
Another appealing feature of our mechanism is the simple allocation rule.
Indeed, the mechanism we use is either the second-price auction with reserve
price on every item individually, or VCG allocation with a few outlying items
that require additional treatment. It is surprising that such simple
allocation rules suffice to obtain nearly optimal revenue.
",zhiyi huang,,2012.0,,arXiv,Cai2012,True,,arXiv,Not available,Simple and Nearly Optimal Multi-Item Auctions,23a06439d9a36630ce3595f638453e8a,http://arxiv.org/abs/1210.3560v2
17022," Market-based mechanisms such as auctions are being studied as an appropriate
means for resource allocation in distributed and multiagent decision problems.
When agents value resources in combination rather than in isolation, they must
often deliberate about appropriate bidding strategies for a sequence of
auctions offering resources of interest. We briefly describe a discrete dynamic
programming model for constructing appropriate bidding policies for resources
exhibiting both complementarities and substitutability. We then introduce a
continuous approximation of this model, assuming that money (or the numeraire
good) is infinitely divisible. Though this has the potential to reduce the
computational cost of computing policies, value functions in the transformed
problem do not have a convenient closed-form representation. We develop a
grid-based approximation for such value functions, representing value
functions using piecewise linear approximations. We show that these methods can
offer significant computational savings with relatively small cost in solution
quality.
",craig boutilier,,2013.0,,arXiv,Boutilier2013,True,,arXiv,Not available,Continuous Value Function Approximation for Sequential Bidding Policies,a3a7d0aab9b19eceaed8c6be1f19819d,http://arxiv.org/abs/1301.6682v1
17023," Market-based mechanisms such as auctions are being studied as an appropriate
means for resource allocation in distributed and multiagent decision problems.
When agents value resources in combination rather than in isolation, they must
often deliberate about appropriate bidding strategies for a sequence of
auctions offering resources of interest. We briefly describe a discrete dynamic
programming model for constructing appropriate bidding policies for resources
exhibiting both complementarities and substitutability. We then introduce a
continuous approximation of this model, assuming that money (or the numeraire
good) is infinitely divisible. Though this has the potential to reduce the
computational cost of computing policies, value functions in the transformed
problem do not have a convenient closed-form representation. We develop a
grid-based approximation for such value functions, representing value
functions using piecewise linear approximations. We show that these methods can
offer significant computational savings with relatively small cost in solution
quality.
",moises goldszmidt,,2013.0,,arXiv,Boutilier2013,True,,arXiv,Not available,Continuous Value Function Approximation for Sequential Bidding Policies,a3a7d0aab9b19eceaed8c6be1f19819d,http://arxiv.org/abs/1301.6682v1
17024," Market-based mechanisms such as auctions are being studied as an appropriate
means for resource allocation in distributed and multiagent decision problems.
When agents value resources in combination rather than in isolation, they must
often deliberate about appropriate bidding strategies for a sequence of
auctions offering resources of interest. We briefly describe a discrete dynamic
programming model for constructing appropriate bidding policies for resources
exhibiting both complementarities and substitutability. We then introduce a
continuous approximation of this model, assuming that money (or the numeraire
good) is infinitely divisible. Though this has the potential to reduce the
computational cost of computing policies, value functions in the transformed
problem do not have a convenient closed-form representation. We develop a
grid-based approximation for such value functions, representing value
functions using piecewise linear approximations. We show that these methods can
offer significant computational savings with relatively small cost in solution
quality.
",bikash sabata,,2013.0,,arXiv,Boutilier2013,True,,arXiv,Not available,Continuous Value Function Approximation for Sequential Bidding Policies,a3a7d0aab9b19eceaed8c6be1f19819d,http://arxiv.org/abs/1301.6682v1
17025," We report the results of a computational study of repacking in the FCC
Incentive Auctions. Our interest lies in the structure and constraints of the
solution space of feasible repackings. Our analyses are ""mechanism-free"", in
the sense that they identify constraints that must hold regardless of the
reverse auction mechanism chosen or the prices offered for broadcaster
clearing. We examine topics such as the amount of spectrum that can be cleared
nationwide, the geographic distribution of broadcaster clearings required to
reach a clearing target, and the likelihood of reaching clearing targets under
various models for broadcaster participation. Our study uses FCC interference
data and a satisfiability-checking approach, and elucidates both the
unavoidable mathematical constraints on solutions imposed by interference, as
well as additional constraints imposed by assumptions on the participation
decisions of broadcasters.
",michael kearns,,2014.0,,arXiv,Kearns2014,True,,arXiv,Not available,"A Computational Study of Feasible Repackings in the FCC Incentive
Auctions",529ff1e694e352d693e588f76bbc109d,http://arxiv.org/abs/1406.4837v1
17026," We report the results of a computational study of repacking in the FCC
Incentive Auctions. Our interest lies in the structure and constraints of the
solution space of feasible repackings. Our analyses are ""mechanism-free"", in
the sense that they identify constraints that must hold regardless of the
reverse auction mechanism chosen or the prices offered for broadcaster
clearing. We examine topics such as the amount of spectrum that can be cleared
nationwide, the geographic distribution of broadcaster clearings required to
reach a clearing target, and the likelihood of reaching clearing targets under
various models for broadcaster participation. Our study uses FCC interference
data and a satisfiability-checking approach, and elucidates both the
unavoidable mathematical constraints on solutions imposed by interference, as
well as additional constraints imposed by assumptions on the participation
decisions of broadcasters.
",lili dworkin,,2014.0,,arXiv,Kearns2014,True,,arXiv,Not available,"A Computational Study of Feasible Repackings in the FCC Incentive
Auctions",529ff1e694e352d693e588f76bbc109d,http://arxiv.org/abs/1406.4837v1
17027," We introduce a general representation of large-population games in which each
player's influence on the others is centralized and limited, but may otherwise
be arbitrary. This representation significantly generalizes the class known as
congestion games in a natural way. Our main results are provably correct and
efficient algorithms for computing and learning approximate Nash equilibria in
this general framework.
",michael kearns,,2012.0,,arXiv,Kearns2012,True,,arXiv,Not available,"Efficient Nash Computation in Large Population Games with Bounded
Influence",31a77c697d61031a4cf69450b1e93b59,http://arxiv.org/abs/1301.0577v1
17028," We present an extensive analysis of the key problem of learning optimal
reserve prices for generalized second price auctions. We describe two
algorithms for this task: one based on density estimation, and a novel
algorithm benefiting from solid theoretical guarantees and with a very
favorable running-time complexity of $O(n S \log (n S))$, where $n$ is the
sample size and $S$ the number of slots. Our theoretical guarantees are more
favorable than those previously presented in the literature. Additionally, we
show that even if bidders do not play at an equilibrium, our second algorithm
is still well defined and minimizes a quantity of interest. To our knowledge,
this is the first attempt to apply learning algorithms to the problem of
reserve price optimization in GSP auctions. Finally, we present the first
convergence analysis of empirical equilibrium bidding functions to the unique
symmetric Bayesian-Nash equilibrium of a GSP.
",mehryar mohri,,2015.0,,arXiv,Mohri2015,True,,arXiv,Not available,"Non-parametric Revenue Optimization for Generalized Second Price
Auctions",7721fac6429813da80effd0c21197ff4,http://arxiv.org/abs/1506.02719v1
17029," We present an extensive analysis of the key problem of learning optimal
reserve prices for generalized second price auctions. We describe two
algorithms for this task: one based on density estimation, and a novel
algorithm benefiting from solid theoretical guarantees and with a very
favorable running-time complexity of $O(n S \log (n S))$, where $n$ is the
sample size and $S$ the number of slots. Our theoretical guarantees are more
favorable than those previously presented in the literature. Additionally, we
show that even if bidders do not play at an equilibrium, our second algorithm
is still well defined and minimizes a quantity of interest. To our knowledge,
this is the first attempt to apply learning algorithms to the problem of
reserve price optimization in GSP auctions. Finally, we present the first
convergence analysis of empirical equilibrium bidding functions to the unique
symmetric Bayesian-Nash equilibrium of a GSP.
",andres medina,,2015.0,,arXiv,Mohri2015,True,,arXiv,Not available,"Non-parametric Revenue Optimization for Generalized Second Price
Auctions",7721fac6429813da80effd0c21197ff4,http://arxiv.org/abs/1506.02719v1
17030," Group-buying auction has become a popular marketing strategy in the last
decade. In this paper, a stochastic model is developed for an inventory system
subject to demands from group-buying auctions. The model discussed here takes
into account the costs of inventory, transportation, dispatching and
re-ordering, as well as the penalty cost of unsuccessful auctions. Since a new
cycle begins whenever there is a replenishment of products, the long-run
average costs of the model can be obtained by using the renewal theory. A
closed form solution of the optimal replenishment quantity is also derived.
",allen tai,,2012.0,,arXiv,Tai2012,True,,arXiv,Not available,An inventory model for group-buying auction,ffe5bb0a189b9f28af95ff3a5c606568,http://arxiv.org/abs/1212.3541v1
17031," We describe human-subject laboratory experiments on probabilistic auctions
based on previously proposed auction protocols involving the simulated
manipulation and communication of quantum states. These auctions are
probabilistic in determining which bidder wins, or having no winner, rather
than always having the highest bidder win. Comparing two quantum protocols in
the context of first-price sealed bid auctions, we find the one predicted to be
superior by game theory also performs better experimentally. We also compare
with a conventional first price auction, which gives higher performance. Thus
to provide benefits, the quantum protocol requires more complex economic
scenarios such as maintaining privacy of bids over a series of related auctions
or involving allocative externalities.
",kay-yut chen,,2007.0,,Quantum Information Processing 7:139-152 (2008),Chen2007,True,,arXiv,Not available,Experiments with Probabilistic Quantum Auctions,c67ee9e4f3e50704e5ba370d8e405bc6,http://arxiv.org/abs/0707.4195v2
17032," We describe human-subject laboratory experiments on probabilistic auctions
based on previously proposed auction protocols involving the simulated
manipulation and communication of quantum states. These auctions are
probabilistic in determining which bidder wins, or having no winner, rather
than always having the highest bidder win. Comparing two quantum protocols in
the context of first-price sealed bid auctions, we find the one predicted to be
superior by game theory also performs better experimentally. We also compare
with a conventional first price auction, which gives higher performance. Thus
to provide benefits, the quantum protocol requires more complex economic
scenarios such as maintaining privacy of bids over a series of related auctions
or involving allocative externalities.
",tad hogg,,2007.0,,Quantum Information Processing 7:139-152 (2008),Chen2007,True,,arXiv,Not available,Experiments with Probabilistic Quantum Auctions,c67ee9e4f3e50704e5ba370d8e405bc6,http://arxiv.org/abs/0707.4195v2
17036," We show that the multiplicative weight update method provides a simple recipe
for designing and analyzing optimal Bayesian Incentive Compatible (BIC)
auctions, and reduces the time complexity of the problem to pseudo-polynomial
in parameters that depend on a single agent instead of on the size of
the joint type space. We use this framework to design computationally efficient
optimal auctions that satisfy ex-post Individual Rationality in the presence of
constraints such as (hard, private) budgets and envy-freeness. We also design
optimal auctions when buyers and a seller's utility functions are non-linear.
Scenarios with such functions include (a) auctions with ""quitting rights"", (b)
cost to borrow money beyond budget, (c) a seller's and buyers' risk aversion.
Finally, we show how our framework also yields optimal auctions for a variety
of auction settings considered in Cai et al., Alaei et al., albeit with
pseudo-polynomial running times.
",anand bhalgat,,2012.0,,arXiv,Bhalgat2012,True,,arXiv,Not available,Optimal Auctions via the Multiplicative Weight Method,e16b45ae03f0524fe1c821c46f6dcc2b,http://arxiv.org/abs/1211.1699v3
17037," We show that the multiplicative weight update method provides a simple recipe
for designing and analyzing optimal Bayesian Incentive Compatible (BIC)
auctions, and reduces the time complexity of the problem to pseudo-polynomial
in parameters that depend on a single agent instead of on the size of
the joint type space. We use this framework to design computationally efficient
optimal auctions that satisfy ex-post Individual Rationality in the presence of
constraints such as (hard, private) budgets and envy-freeness. We also design
optimal auctions when buyers and a seller's utility functions are non-linear.
Scenarios with such functions include (a) auctions with ""quitting rights"", (b)
cost to borrow money beyond budget, (c) a seller's and buyers' risk aversion.
Finally, we show how our framework also yields optimal auctions for a variety
of auction settings considered in Cai et al., Alaei et al., albeit with
pseudo-polynomial running times.
",sreenivas gollapudi,,2012.0,,arXiv,Bhalgat2012,True,,arXiv,Not available,Optimal Auctions via the Multiplicative Weight Method,e16b45ae03f0524fe1c821c46f6dcc2b,http://arxiv.org/abs/1211.1699v3
17038," We introduce a general representation of large-population games in which each
player's influence on the others is centralized and limited, but may otherwise
be arbitrary. This representation significantly generalizes the class known as
congestion games in a natural way. Our main results are provably correct and
efficient algorithms for computing and learning approximate Nash equilibria in
this general framework.
",yishay mansour,,2012.0,,arXiv,Kearns2012,True,,arXiv,Not available,"Efficient Nash Computation in Large Population Games with Bounded
Influence",31a77c697d61031a4cf69450b1e93b59,http://arxiv.org/abs/1301.0577v1
17039," We show that the multiplicative weight update method provides a simple recipe
for designing and analyzing optimal Bayesian Incentive Compatible (BIC)
auctions, and reduces the time complexity of the problem to pseudo-polynomial
in parameters that depend on a single agent instead of on the size of
the joint type space. We use this framework to design computationally efficient
optimal auctions that satisfy ex-post Individual Rationality in the presence of
constraints such as (hard, private) budgets and envy-freeness. We also design
optimal auctions when buyers and a seller's utility functions are non-linear.
Scenarios with such functions include (a) auctions with ""quitting rights"", (b)
cost to borrow money beyond budget, (c) a seller's and buyers' risk aversion.
Finally, we show how our framework also yields optimal auctions for a variety
of auction settings considered in Cai et al., Alaei et al., albeit with
pseudo-polynomial running times.
",kamesh munagala,,2012.0,,arXiv,Bhalgat2012,True,,arXiv,Not available,Optimal Auctions via the Multiplicative Weight Method,e16b45ae03f0524fe1c821c46f6dcc2b,http://arxiv.org/abs/1211.1699v3
17040," Calibration is a basic property for prediction systems, and algorithms for
achieving it are well-studied in both statistics and machine learning. In many
applications, however, the predictions are used to make decisions that select
which observations are made. This makes calibration difficult, as adjusting
predictions to achieve calibration changes future data. We focus on
click-through-rate (CTR) prediction for search ad auctions. Here, CTR
predictions are used by an auction that determines which ads are shown, and we
want to maximize the value generated by the auction.
We show that certain natural notions of calibration can be impossible to
achieve, depending on the details of the auction. We also show that it can be
impossible to maximize auction efficiency while using calibrated predictions.
Finally, we give conditions under which calibration is achievable and
simultaneously maximizes auction efficiency: roughly speaking, bids and queries
must not contain information about CTRs that is not already captured by the
predictions.
",h. mcmahan,,2012.0,,arXiv,McMahan2012,True,,arXiv,Not available,On Calibrated Predictions for Auction Selection Mechanisms,ec434173b4875a963a3f635f26281c3e,http://arxiv.org/abs/1211.3955v1
17041," Calibration is a basic property for prediction systems, and algorithms for
achieving it are well-studied in both statistics and machine learning. In many
applications, however, the predictions are used to make decisions that select
which observations are made. This makes calibration difficult, as adjusting
predictions to achieve calibration changes future data. We focus on
click-through-rate (CTR) prediction for search ad auctions. Here, CTR
predictions are used by an auction that determines which ads are shown, and we
want to maximize the value generated by the auction.
We show that certain natural notions of calibration can be impossible to
achieve, depending on the details of the auction. We also show that it can be
impossible to maximize auction efficiency while using calibrated predictions.
Finally, we give conditions under which calibration is achievable and
simultaneously maximizes auction efficiency: roughly speaking, bids and queries
must not contain information about CTRs that is not already captured by the
predictions.
",omkar muralidharan,,2012.0,,arXiv,McMahan2012,True,,arXiv,Not available,On Calibrated Predictions for Auction Selection Mechanisms,ec434173b4875a963a3f635f26281c3e,http://arxiv.org/abs/1211.3955v1
17042," Using mechanised reasoning we prove that combinatorial Vickrey auctions are
soundly specified in that they associate a unique outcome (allocation and
transfers) to any valid input (bids). Having done so, we auto-generate verified
executable code from the formally defined auction. This removes a source of
error in implementing the auction design. We intend to use formal methods to
verify new auction designs. Here, our contribution is to introduce and
demonstrate the use of formal methods for auction verification in the familiar
setting of a well-known auction.
",marco caminati,,2013.0,,arXiv,Caminati2013,True,,arXiv,Not available,"Proving soundness of combinatorial Vickrey auctions and generating
verified executable code",a28fedb138c9ba1bb520a12eebb3e724,http://arxiv.org/abs/1308.1779v2
17043," Using mechanised reasoning we prove that combinatorial Vickrey auctions are
soundly specified in that they associate a unique outcome (allocation and
transfers) to any valid input (bids). Having done so, we auto-generate verified
executable code from the formally defined auction. This removes a source of
error in implementing the auction design. We intend to use formal methods to
verify new auction designs. Here, our contribution is to introduce and
demonstrate the use of formal methods for auction verification in the familiar
setting of a well-known auction.
",manfred kerber,,2013.0,,arXiv,Caminati2013,True,,arXiv,Not available,"Proving soundness of combinatorial Vickrey auctions and generating
verified executable code",a28fedb138c9ba1bb520a12eebb3e724,http://arxiv.org/abs/1308.1779v2
17044," Using mechanised reasoning we prove that combinatorial Vickrey auctions are
soundly specified in that they associate a unique outcome (allocation and
transfers) to any valid input (bids). Having done so, we auto-generate verified
executable code from the formally defined auction. This removes a source of
error in implementing the auction design. We intend to use formal methods to
verify new auction designs. Here, our contribution is to introduce and
demonstrate the use of formal methods for auction verification in the familiar
setting of a well-known auction.
",christoph lange,,2013.0,,arXiv,Caminati2013,True,,arXiv,Not available,"Proving soundness of combinatorial Vickrey auctions and generating
verified executable code",a28fedb138c9ba1bb520a12eebb3e724,http://arxiv.org/abs/1308.1779v2
17045," Using mechanised reasoning we prove that combinatorial Vickrey auctions are
soundly specified in that they associate a unique outcome (allocation and
transfers) to any valid input (bids). Having done so, we auto-generate verified
executable code from the formally defined auction. This removes a source of
error in implementing the auction design. We intend to use formal methods to
verify new auction designs. Here, our contribution is to introduce and
demonstrate the use of formal methods for auction verification in the familiar
setting of a well-known auction.
",colin rowat,,2013.0,,arXiv,Caminati2013,True,,arXiv,Not available,"Proving soundness of combinatorial Vickrey auctions and generating
verified executable code",a28fedb138c9ba1bb520a12eebb3e724,http://arxiv.org/abs/1308.1779v2
17046," We derive optimal strategies for a bidding agent that participates in
multiple, simultaneous second-price auctions with perfect substitutes. We prove
that, if everyone else bids locally in a single auction, the global bidder
should always place non-zero bids in all available auctions, provided there are
no budget constraints. With a budget, however, the optimal strategy is to bid
locally if this budget is equal to or less than the valuation. Furthermore, for a
wide range of valuation distributions, we prove that the problem of finding the
optimal bids reduces to two dimensions if all auctions are identical. Finally,
we address markets with both sequential and simultaneous auctions,
non-identical auctions, and the allocative efficiency of the market.
",enrico gerding,,2014.0,10.1613/jair.2544,"Journal Of Artificial Intelligence Research, Volume 32, pages
939-982, 2008",Gerding2014,True,,arXiv,Not available,"Optimal Strategies for Simultaneous Vickrey Auctions with Perfect
Substitutes",aec809e941bbd718d64892a9d5afcf91,http://arxiv.org/abs/1401.3433v1
17047," We derive optimal strategies for a bidding agent that participates in
multiple, simultaneous second-price auctions with perfect substitutes. We prove
that, if everyone else bids locally in a single auction, the global bidder
should always place non-zero bids in all available auctions, provided there are
no budget constraints. With a budget, however, the optimal strategy is to bid
locally if this budget is equal to or less than the valuation. Furthermore, for a
wide range of valuation distributions, we prove that the problem of finding the
optimal bids reduces to two dimensions if all auctions are identical. Finally,
we address markets with both sequential and simultaneous auctions,
non-identical auctions, and the allocative efficiency of the market.
",rajdeep dash,,2014.0,10.1613/jair.2544,"Journal Of Artificial Intelligence Research, Volume 32, pages
939-982, 2008",Gerding2014,True,,arXiv,Not available,"Optimal Strategies for Simultaneous Vickrey Auctions with Perfect
Substitutes",aec809e941bbd718d64892a9d5afcf91,http://arxiv.org/abs/1401.3433v1
17048," We derive optimal strategies for a bidding agent that participates in
multiple, simultaneous second-price auctions with perfect substitutes. We prove
that, if everyone else bids locally in a single auction, the global bidder
should always place non-zero bids in all available auctions, provided there are
no budget constraints. With a budget, however, the optimal strategy is to bid
locally if this budget is equal to or less than the valuation. Furthermore, for a
wide range of valuation distributions, we prove that the problem of finding the
optimal bids reduces to two dimensions if all auctions are identical. Finally,
we address markets with both sequential and simultaneous auctions,
non-identical auctions, and the allocative efficiency of the market.
",andrew byde,,2014.0,10.1613/jair.2544,"Journal Of Artificial Intelligence Research, Volume 32, pages
939-982, 2008",Gerding2014,True,,arXiv,Not available,"Optimal Strategies for Simultaneous Vickrey Auctions with Perfect
Substitutes",aec809e941bbd718d64892a9d5afcf91,http://arxiv.org/abs/1401.3433v1
17049," In robot games on Z, two players add integers to a counter. Each player has a
finite set from which he picks the integer to add, and the objective of the
first player is to let the counter reach 0. We present an exponential-time
algorithm for deciding the winner of a robot game given the initial counter
value, and prove a matching lower bound.
",arjun arul,,2013.0,10.4204/EPTCS.117.9,"EPTCS 117, 2013, pp. 132-148",Arul2013,True,,arXiv,Not available,The Complexity of Robot Games on the Integer Line,8bc1a1591336fb6433657bba36f01b8f,http://arxiv.org/abs/1301.7700v4
17050," We derive optimal strategies for a bidding agent that participates in
multiple, simultaneous second-price auctions with perfect substitutes. We prove
that, if everyone else bids locally in a single auction, the global bidder
should always place non-zero bids in all available auctions, provided there are
no budget constraints. With a budget, however, the optimal strategy is to bid
locally if this budget is equal to or less than the valuation. Furthermore, for a
wide range of valuation distributions, we prove that the problem of finding the
optimal bids reduces to two dimensions if all auctions are identical. Finally,
we address markets with both sequential and simultaneous auctions,
non-identical auctions, and the allocative efficiency of the market.
",nicholas jennings,,2014.0,10.1613/jair.2544,"Journal Of Artificial Intelligence Research, Volume 32, pages
939-982, 2008",Gerding2014,True,,arXiv,Not available,"Optimal Strategies for Simultaneous Vickrey Auctions with Perfect
Substitutes",aec809e941bbd718d64892a9d5afcf91,http://arxiv.org/abs/1401.3433v1
17051," Many spectrum auction mechanisms have been proposed for the spectrum
allocation problem, but unfortunately few of them protect the bid privacy of
bidders while achieving good social efficiency. In this paper, we propose PPS,
a Privacy Preserving Strategyproof spectrum auction framework. We then design
two schemes based on PPS: 1) for the Single-Unit Auction model (SUA), where
only a single channel is sold in the spectrum market; and 2) for the
Multi-Unit Auction model (MUA), where the primary user subleases multiple
channels to the secondary users and each secondary user also wants to access
multiple channels. Since the social efficiency maximization problem is NP-hard
in both auction models, we present allocation mechanisms with approximation
factors of $(1+\epsilon)$ for SUA and 32 for MUA, and further judiciously
design privacy-preserving strategyproof auction mechanisms based on them. Our
extensive evaluations show that our mechanisms achieve good social efficiency
with low computation and communication overhead.
",he huang,,2013.0,,arXiv,Huang2013,True,,arXiv,Not available,"PPS: Privacy-Preserving Strategyproof Social-Efficient Spectrum Auction
Mechanisms",39b3fa78e7107dad7c9d6617065dae41,http://arxiv.org/abs/1307.7792v1
17052," Many spectrum auction mechanisms have been proposed for the spectrum
allocation problem, but unfortunately few of them protect the bid privacy of
bidders while achieving good social efficiency. In this paper, we propose PPS,
a Privacy Preserving Strategyproof spectrum auction framework. We then design
two schemes based on PPS: 1) for the Single-Unit Auction model (SUA), where
only a single channel is sold in the spectrum market; and 2) for the
Multi-Unit Auction model (MUA), where the primary user subleases multiple
channels to the secondary users and each secondary user also wants to access
multiple channels. Since the social efficiency maximization problem is NP-hard
in both auction models, we present allocation mechanisms with approximation
factors of $(1+\epsilon)$ for SUA and 32 for MUA, and further judiciously
design privacy-preserving strategyproof auction mechanisms based on them. Our
extensive evaluations show that our mechanisms achieve good social efficiency
with low computation and communication overhead.
",xiang-yang li,,2013.0,,arXiv,Huang2013,True,,arXiv,Not available,"PPS: Privacy-Preserving Strategyproof Social-Efficient Spectrum Auction
Mechanisms",39b3fa78e7107dad7c9d6617065dae41,http://arxiv.org/abs/1307.7792v1
17053," Many spectrum auction mechanisms have been proposed for the spectrum
allocation problem, but unfortunately few of them protect the bid privacy of
bidders while achieving good social efficiency. In this paper, we propose PPS,
a Privacy Preserving Strategyproof spectrum auction framework. We then design
two schemes based on PPS: 1) for the Single-Unit Auction model (SUA), where
only a single channel is sold in the spectrum market; and 2) for the
Multi-Unit Auction model (MUA), where the primary user subleases multiple
channels to the secondary users and each secondary user also wants to access
multiple channels. Since the social efficiency maximization problem is NP-hard
in both auction models, we present allocation mechanisms with approximation
factors of $(1+\epsilon)$ for SUA and 32 for MUA, and further judiciously
design privacy-preserving strategyproof auction mechanisms based on them. Our
extensive evaluations show that our mechanisms achieve good social efficiency
with low computation and communication overhead.
",yu-e sun,,2013.0,,arXiv,Huang2013,True,,arXiv,Not available,"PPS: Privacy-Preserving Strategyproof Social-Efficient Spectrum Auction
Mechanisms",39b3fa78e7107dad7c9d6617065dae41,http://arxiv.org/abs/1307.7792v1
17054," Many spectrum auction mechanisms have been proposed for the spectrum
allocation problem, but unfortunately few of them protect the bid privacy of
bidders while achieving good social efficiency. In this paper, we propose PPS,
a Privacy Preserving Strategyproof spectrum auction framework. We then design
two schemes based on PPS: 1) for the Single-Unit Auction model (SUA), where
only a single channel is sold in the spectrum market; and 2) for the
Multi-Unit Auction model (MUA), where the primary user subleases multiple
channels to the secondary users and each secondary user also wants to access
multiple channels. Since the social efficiency maximization problem is NP-hard
in both auction models, we present allocation mechanisms with approximation
factors of $(1+\epsilon)$ for SUA and 32 for MUA, and further judiciously
design privacy-preserving strategyproof auction mechanisms based on them. Our
extensive evaluations show that our mechanisms achieve good social efficiency
with low computation and communication overhead.
",hongli xu,,2013.0,,arXiv,Huang2013,True,,arXiv,Not available,"PPS: Privacy-Preserving Strategyproof Social-Efficient Spectrum Auction
Mechanisms",39b3fa78e7107dad7c9d6617065dae41,http://arxiv.org/abs/1307.7792v1
17055," Many spectrum auction mechanisms have been proposed for the spectrum
allocation problem, but unfortunately few of them protect the bid privacy of
bidders while achieving good social efficiency. In this paper, we propose PPS,
a Privacy Preserving Strategyproof spectrum auction framework. We then design
two schemes based on PPS: 1) for the Single-Unit Auction model (SUA), where
only a single channel is sold in the spectrum market; and 2) for the
Multi-Unit Auction model (MUA), where the primary user subleases multiple
channels to the secondary users and each secondary user also wants to access
multiple channels. Since the social efficiency maximization problem is NP-hard
in both auction models, we present allocation mechanisms with approximation
factors of $(1+\epsilon)$ for SUA and 32 for MUA, and further judiciously
design privacy-preserving strategyproof auction mechanisms based on them. Our
extensive evaluations show that our mechanisms achieve good social efficiency
with low computation and communication overhead.
",liusheng huang,,2013.0,,arXiv,Huang2013,True,,arXiv,Not available,"PPS: Privacy-Preserving Strategyproof Social-Efficient Spectrum Auction
Mechanisms",39b3fa78e7107dad7c9d6617065dae41,http://arxiv.org/abs/1307.7792v1
17056," In this paper, we study online double auctions, where multiple sellers and
multiple buyers arrive and depart dynamically to exchange one commodity. We
show that there is no deterministic online double auction that is truthful and
competitive for maximising social welfare in an adversarial model. However,
given the prior information that sellers are patient and the demand is not more
than the supply, a deterministic and truthful greedy mechanism is actually
2-competitive, i.e. it guarantees that the social welfare of its allocation is
at least half of the optimal one achievable offline. Moreover, if the number of
incoming buyers is predictable, we demonstrate that an online double auction
can be reduced to an online one-sided auction, and the truthfulness and
competitiveness of the reduced online double auction follow that of the online
one-sided auction. Notably, by using the reduction, we find a truthful
mechanism that is almost 1-competitive when buyers arrive randomly. Finally,
we argue that these mechanisms also have promising applicability in more
general settings without assuming that sellers are patient, by decomposing a
market into multiple sub-markets.
",dengji zhao,,2013.0,,arXiv,Zhao2013,True,,arXiv,Not available,Decomposing Truthful and Competitive Online Double Auctions,152bfc45e8a40537cc0bc683a0781adf,http://arxiv.org/abs/1311.0198v1
17057," In this paper, we study online double auctions, where multiple sellers and
multiple buyers arrive and depart dynamically to exchange one commodity. We
show that there is no deterministic online double auction that is truthful and
competitive for maximising social welfare in an adversarial model. However,
given the prior information that sellers are patient and the demand is not more
than the supply, a deterministic and truthful greedy mechanism is actually
2-competitive, i.e. it guarantees that the social welfare of its allocation is
at least half of the optimal one achievable offline. Moreover, if the number of
incoming buyers is predictable, we demonstrate that an online double auction
can be reduced to an online one-sided auction, and the truthfulness and
competitiveness of the reduced online double auction follow that of the online
one-sided auction. Notably, by using the reduction, we find a truthful
mechanism that is almost 1-competitive when buyers arrive randomly. Finally,
we argue that these mechanisms also have promising applicability in more
general settings without assuming that sellers are patient, by decomposing a
market into multiple sub-markets.
",dongmo zhang,,2013.0,,arXiv,Zhao2013,True,,arXiv,Not available,Decomposing Truthful and Competitive Online Double Auctions,152bfc45e8a40537cc0bc683a0781adf,http://arxiv.org/abs/1311.0198v1
17058," In this paper, we study online double auctions, where multiple sellers and
multiple buyers arrive and depart dynamically to exchange one commodity. We
show that there is no deterministic online double auction that is truthful and
competitive for maximising social welfare in an adversarial model. However,
given the prior information that sellers are patient and the demand is not more
than the supply, a deterministic and truthful greedy mechanism is actually
2-competitive, i.e. it guarantees that the social welfare of its allocation is
at least half of the optimal one achievable offline. Moreover, if the number of
incoming buyers is predictable, we demonstrate that an online double auction
can be reduced to an online one-sided auction, and the truthfulness and
competitiveness of the reduced online double auction follow that of the online
one-sided auction. Notably, by using the reduction, we find a truthful
mechanism that is almost 1-competitive when buyers arrive randomly. Finally,
we argue that these mechanisms also have promising applicability in more
general settings without assuming that sellers are patient, by decomposing a
market into multiple sub-markets.
",laurent perrussel,,2013.0,,arXiv,Zhao2013,True,,arXiv,Not available,Decomposing Truthful and Competitive Online Double Auctions,152bfc45e8a40537cc0bc683a0781adf,http://arxiv.org/abs/1311.0198v1
17059," We study the efficiency guarantees in the simple auction environment where
the auctioneer has one unit of divisible good to be distributed among a number
of budget constrained agents. With budget constraints, the social welfare
cannot be approximated by a better factor than the number of agents by any
truthful mechanism. Thus, we follow a recent work by Dobzinski and Leme (ICALP
2014) to approximate the liquid welfare, which is the welfare of the agents
each capped by her/his own budget. We design a new truthful auction with an
approximation ratio of $\frac{\sqrt{5}+1}{2} \approx 1.618$, improving the best
previous ratio of $2$ when the budgets for agents are public knowledge and
their valuation is linear (additive). In the private budget setting, we propose
the first constant approximation auction, with an approximation ratio of $34$.
Moreover, this auction works for any valuation function. Previously, only
$O(\log n)$ approximation was known for linear and decreasing marginal
(concave) valuations, and $O(\log^2 n)$ approximation was known for
sub-additive valuations.
",pinyan lu,,2014.0,,arXiv,Lu2014,True,,arXiv,Not available,Improved Efficiency Guarantees in Auctions with Budgets,a229d8cbe218f6b2e8bff76900396b6f,http://arxiv.org/abs/1407.8325v3
17060," In robot games on Z, two players add integers to a counter. Each player has a
finite set from which he picks the integer to add, and the objective of the
first player is to let the counter reach 0. We present an exponential-time
algorithm for deciding the winner of a robot game given the initial counter
value, and prove a matching lower bound.
",julien reichert,,2013.0,10.4204/EPTCS.117.9,"EPTCS 117, 2013, pp. 132-148",Arul2013,True,,arXiv,Not available,The Complexity of Robot Games on the Integer Line,8bc1a1591336fb6433657bba36f01b8f,http://arxiv.org/abs/1301.7700v4
17061," We study the efficiency guarantees in the simple auction environment where
the auctioneer has one unit of divisible good to be distributed among a number
of budget constrained agents. With budget constraints, the social welfare
cannot be approximated by a better factor than the number of agents by any
truthful mechanism. Thus, we follow a recent work by Dobzinski and Leme (ICALP
2014) to approximate the liquid welfare, which is the welfare of the agents
each capped by her/his own budget. We design a new truthful auction with an
approximation ratio of $\frac{\sqrt{5}+1}{2} \approx 1.618$, improving the best
previous ratio of $2$ when the budgets for agents are public knowledge and
their valuation is linear (additive). In the private budget setting, we propose
the first constant approximation auction, with an approximation ratio of $34$.
Moreover, this auction works for any valuation function. Previously, only
$O(\log n)$ approximation was known for linear and decreasing marginal
(concave) valuations, and $O(\log^2 n)$ approximation was known for
sub-additive valuations.
",tao xiao,,2014.0,,arXiv,Lu2014,True,,arXiv,Not available,Improved Efficiency Guarantees in Auctions with Budgets,a229d8cbe218f6b2e8bff76900396b6f,http://arxiv.org/abs/1407.8325v3
17062," Parsimonious games are a subset of constant sum homogeneous weighted majority
games unequivocally described by their free type representation vector. We show
that the minimal winning quota of parsimonious games satisfies a second order,
linear, homogeneous, finite difference equation with nonconstant coefficients
except for uniform games. We provide the solution of such an equation, which may
be thought of as a generalized version of the polynomial expansion of a proper
k-Fibonacci sequence. In addition we show that the minimal winning quota is a
symmetric function of the representation vector; exploiting this property it is
straightforward to prove that twin Parsimonious games, i.e. a pair of games
whose free type representations are mirror images of each other, share the same
minimal winning quota.
",flavio pressacco,,2014.0,,arXiv,Pressacco2014,True,,arXiv,Not available,K-Fibonacci sequences and minimal winning quota in Parsimonious game,8ee91110be0f5e3eaf001f536384be13,http://arxiv.org/abs/1402.5102v1
17063," Parsimonious games are a subset of constant sum homogeneous weighted majority
games unequivocally described by their free type representation vector. We show
that the minimal winning quota of parsimonious games satisfies a second order,
linear, homogeneous, finite difference equation with nonconstant coefficients
except for uniform games. We provide the solution of such an equation, which may
be thought of as a generalized version of the polynomial expansion of a proper
k-Fibonacci sequence. In addition we show that the minimal winning quota is a
symmetric function of the representation vector; exploiting this property it is
straightforward to prove that twin Parsimonious games, i.e. a pair of games
whose free type representations are mirror images of each other, share the same
minimal winning quota.
",giacomo plazzotta,,2014.0,,arXiv,Pressacco2014,True,,arXiv,Not available,K-Fibonacci sequences and minimal winning quota in Parsimonious game,8ee91110be0f5e3eaf001f536384be13,http://arxiv.org/abs/1402.5102v1
17064," Parsimonious games are a subset of constant sum homogeneous weighted majority
games unequivocally described by their free type representation vector. We show
that the minimal winning quota of parsimonious games satisfies a second order,
linear, homogeneous, finite difference equation with nonconstant coefficients
except for uniform games. We provide the solution of such an equation, which may
be thought of as a generalized version of the polynomial expansion of a proper
k-Fibonacci sequence. In addition we show that the minimal winning quota is a
symmetric function of the representation vector; exploiting this property it is
straightforward to prove that twin Parsimonious games, i.e. a pair of games
whose free type representations are mirror images of each other, share the same
minimal winning quota.
",laura ziani,,2014.0,,arXiv,Pressacco2014,True,,arXiv,Not available,K-Fibonacci sequences and minimal winning quota in Parsimonious game,8ee91110be0f5e3eaf001f536384be13,http://arxiv.org/abs/1402.5102v1
17065," Infinite games where several players seek to coordinate under imperfect
information are known to be intractable, unless the information flow is
severely restricted. Examples of undecidable cases typically feature a
situation where players become uncertain about the current state of the game,
and this uncertainty lasts forever. Here we consider games where the players
attain certainty about the current state over and over again along any play.
For finite-state games, we note that this kind of recurring certainty implies a
stronger condition of periodic certainty, that is, the events of state
certainty ultimately occur at uniform, regular intervals. We show that it is
decidable whether a given game presents recurring certainty, and that, if so,
the problem of synthesising coordination strategies under ω-regular winning
conditions is solvable.
",dietmar berwanger,,2014.0,10.4204/EPTCS.146.12,"EPTCS 146, 2014, pp. 91-96",Berwanger2014,True,,arXiv,Not available,Games with recurring certainty,1ece7ce2a1648b45c2d1641806c81533,http://arxiv.org/abs/1404.7770v1
17066," Infinite games where several players seek to coordinate under imperfect
information are known to be intractable, unless the information flow is
severely restricted. Examples of undecidable cases typically feature a
situation where players become uncertain about the current state of the game,
and this uncertainty lasts forever. Here we consider games where the players
attain certainty about the current state over and over again along any play.
For finite-state games, we note that this kind of recurring certainty implies a
stronger condition of periodic certainty, that is, the events of state
certainty ultimately occur at uniform, regular intervals. We show that it is
decidable whether a given game presents recurring certainty, and that, if so,
the problem of synthesising coordination strategies under ω-regular winning
conditions is solvable.
",anup mathew,,2014.0,10.4204/EPTCS.146.12,"EPTCS 146, 2014, pp. 91-96",Berwanger2014,True,,arXiv,Not available,Games with recurring certainty,1ece7ce2a1648b45c2d1641806c81533,http://arxiv.org/abs/1404.7770v1
17067," Secure equilibrium is a refinement of Nash equilibrium, which provides some
security to the players against deviations when a player changes his strategy
to another best response strategy. The concept of secure equilibrium is
specifically developed for assume-guarantee synthesis and has already been
applied in this context. Yet, not much is known about its existence in games
with more than two players. In this paper, we establish the existence of secure
equilibrium in two classes of multi-player perfect information turn-based
games: (1) in games with possibly probabilistic transitions, having countable
state and finite action spaces and bounded and continuous payoff functions, and
(2) in games with only deterministic transitions, having arbitrary state and
action spaces and Borel payoff functions with a finite range (in particular,
qualitative Borel payoff functions). We show that these results apply to
several types of games studied in the literature.
",julie pril,,2014.0,,arXiv,Pril2014,True,,arXiv,Not available,"Existence of Secure Equilibrium in Multi-Player Games with Perfect
Information",c226b1b98843430e5b1790ca8315dbe9,http://arxiv.org/abs/1405.1615v1
17068," Secure equilibrium is a refinement of Nash equilibrium, which provides some
security to the players against deviations when a player changes his strategy
to another best response strategy. The concept of secure equilibrium is
specifically developed for assume-guarantee synthesis and has already been
applied in this context. Yet, not much is known about its existence in games
with more than two players. In this paper, we establish the existence of secure
equilibrium in two classes of multi-player perfect information turn-based
games: (1) in games with possibly probabilistic transitions, having countable
state and finite action spaces and bounded and continuous payoff functions, and
(2) in games with only deterministic transitions, having arbitrary state and
action spaces and Borel payoff functions with a finite range (in particular,
qualitative Borel payoff functions). We show that these results apply to
several types of games studied in the literature.
",janos flesch,,2014.0,,arXiv,Pril2014,True,,arXiv,Not available,"Existence of Secure Equilibrium in Multi-Player Games with Perfect
Information",c226b1b98843430e5b1790ca8315dbe9,http://arxiv.org/abs/1405.1615v1
17069," Secure equilibrium is a refinement of Nash equilibrium, which provides some
security to the players against deviations when a player changes his strategy
to another best response strategy. The concept of secure equilibrium is
specifically developed for assume-guarantee synthesis and has already been
applied in this context. Yet, not much is known about its existence in games
with more than two players. In this paper, we establish the existence of secure
equilibrium in two classes of multi-player perfect information turn-based
games: (1) in games with possibly probabilistic transitions, having countable
state and finite action spaces and bounded and continuous payoff functions, and
(2) in games with only deterministic transitions, having arbitrary state and
action spaces and Borel payoff functions with a finite range (in particular,
qualitative Borel payoff functions). We show that these results apply to
several types of games studied in the literature.
",jeroen kuipers,,2014.0,,arXiv,Pril2014,True,,arXiv,Not available,"Existence of Secure Equilibrium in Multi-Player Games with Perfect
Information",c226b1b98843430e5b1790ca8315dbe9,http://arxiv.org/abs/1405.1615v1
17070," Secure equilibrium is a refinement of Nash equilibrium, which provides some
security to the players against deviations when a player changes his strategy
to another best response strategy. The concept of secure equilibrium is
specifically developed for assume-guarantee synthesis and has already been
applied in this context. Yet, not much is known about its existence in games
with more than two players. In this paper, we establish the existence of secure
equilibrium in two classes of multi-player perfect information turn-based
games: (1) in games with possibly probabilistic transitions, having countable
state and finite action spaces and bounded and continuous payoff functions, and
(2) in games with only deterministic transitions, having arbitrary state and
action spaces and Borel payoff functions with a finite range (in particular,
qualitative Borel payoff functions). We show that these results apply to
several types of games studied in the literature.
",gijs schoenmakers,,2014.0,,arXiv,Pril2014,True,,arXiv,Not available,"Existence of Secure Equilibrium in Multi-Player Games with Perfect
Information",c226b1b98843430e5b1790ca8315dbe9,http://arxiv.org/abs/1405.1615v1
17071," The Rock-Paper-Scissors (RPS) game is a widely used model system in game
theory. Evolutionary game theory predicts the existence of persistent cycles in
the evolutionary trajectories of the RPS game, but experimental evidence has
remained rather weak. In this work we performed laboratory experiments on
the RPS game and analyzed the social-state evolutionary trajectories of twelve
populations of N=6 players. We found strong evidence supporting the existence
of persistent cycles. The mean cycling frequency was measured to be $0.029 \pm
0.009$ period per experimental round. Our experimental observations can be
quantitatively explained by a simple non-equilibrium model, namely the
discrete-time logit dynamical process with a noise parameter. Our work
therefore favors evolutionary game theory over classical game theory
for describing the dynamical behavior of the RPS game.
",bin xu,,2013.0,10.1016/j.physa.2013.06.039,"Physica A: Statistical Mechanics and its Applications, Volume 392,
Issue 20, 15 October 2013, Pages 4997-5005",Xu2013,True,,arXiv,Not available,"Cycle frequency in standard Rock-Paper-Scissors games: Evidence from
experimental economics",2ca365565f2fef4334130062a868dbd3,http://arxiv.org/abs/1301.3238v3
17072," Secure equilibrium is a refinement of Nash equilibrium, which provides some
security to the players against deviations when a player changes his strategy
to another best response strategy. The concept of secure equilibrium is
specifically developed for assume-guarantee synthesis and has already been
applied in this context. Yet, not much is known about its existence in games
with more than two players. In this paper, we establish the existence of secure
equilibrium in two classes of multi-player perfect information turn-based
games: (1) in games with possibly probabilistic transitions, having countable
state and finite action spaces and bounded and continuous payoff functions, and
(2) in games with only deterministic transitions, having arbitrary state and
action spaces and Borel payoff functions with a finite range (in particular,
qualitative Borel payoff functions). We show that these results apply to
several types of games studied in the literature.
",koos vrieze,,2014.0,,arXiv,Pril2014,True,,arXiv,Not available,"Existence of Secure Equilibrium in Multi-Player Games with Perfect
Information",c226b1b98843430e5b1790ca8315dbe9,http://arxiv.org/abs/1405.1615v1
17073," We prove that finding an $\epsilon$-approximate Nash equilibrium is
PPAD-complete for constant $\epsilon$ and a particularly simple class of games:
polymatrix, degree 3 graphical games, in which each player has only two
actions.
As corollaries, we also prove similar inapproximability results for Bayesian
Nash equilibrium in a two-player incomplete information game with a constant
number of actions, for relative $\epsilon$-Well Supported Nash Equilibrium in a
two-player game, for market equilibrium in a non-monotone market, for the
generalized circuit problem defined by Chen, Deng, and Teng [CDT'09], and for
approximate competitive equilibrium from equal incomes with indivisible goods.
",aviad rubinstein,,2014.0,,arXiv,Rubinstein2014,True,,arXiv,Not available,Inapproximability of Nash Equilibrium,192eba199624170d96d743ddde9782f6,http://arxiv.org/abs/1405.3322v5
17074," Dynamic zero-sum games are an important class of problems with applications
ranging from evasion-pursuit and heads-up poker to certain adversarial versions
of control problems such as multi-armed bandit and multiclass queuing problems.
These games are generally very difficult to solve even when one player's
strategy is fixed, and so constructing and evaluating good sub-optimal policies
for each player is an important practical problem. In this paper, we propose
the use of information relaxations to construct dual lower and upper bounds on
the optimal value of the game. We note that the information relaxation
approach, which has been developed and applied successfully to many large-scale
dynamic programming problems, applies immediately to zero-sum game problems. We
provide some simple numerical examples and identify interesting issues and
complications that arise in the context of zero-sum games.
",martin haugh,,2014.0,,arXiv,Haugh2014,True,,arXiv,Not available,Information Relaxations and Dynamic Zero-Sum Games,e5d921a05e2efb95d47f46df0518c058,http://arxiv.org/abs/1405.4347v2
17075," Dynamic zero-sum games are an important class of problems with applications
ranging from evasion-pursuit and heads-up poker to certain adversarial versions
of control problems such as multi-armed bandit and multiclass queuing problems.
These games are generally very difficult to solve even when one player's
strategy is fixed, and so constructing and evaluating good sub-optimal policies
for each player is an important practical problem. In this paper, we propose
the use of information relaxations to construct dual lower and upper bounds on
the optimal value of the game. We note that the information relaxation
approach, which has been developed and applied successfully to many large-scale
dynamic programming problems, applies immediately to zero-sum game problems. We
provide some simple numerical examples and identify interesting issues and
complications that arise in the context of zero-sum games.
",chun wang,,2014.0,,arXiv,Haugh2014,True,,arXiv,Not available,Information Relaxations and Dynamic Zero-Sum Games,e5d921a05e2efb95d47f46df0518c058,http://arxiv.org/abs/1405.4347v2
17076," We present a new tool for verification of modal mu-calculus formulae for
process specifications, based on symbolic parity games. It enhances an existing
method that first encodes the problem into a Parameterised Boolean Equation
System (PBES) and then instantiates the PBES to a parity game. We improved the
translation from specification to PBES to preserve the structure of the
specification in the PBES, we extended LTSmin to instantiate PBESs to symbolic
parity games, and implemented the recursive parity game solving algorithm by
Zielonka for symbolic parity games. We use Multi-valued Decision Diagrams
(MDDs) to represent sets and relations, thus enabling the tools to deal with
very large systems. The transition relation is partitioned based on the
structure of the specification, which allows for efficient manipulation of the
MDDs. We performed two case studies on modular specifications, which demonstrate
that the new method has better time and memory performance than existing PBES
based tools and can be faster (but slightly less memory efficient) than the
symbolic model checker NuSMV.
",gijs kant,,2014.0,10.4204/EPTCS.159.2,"EPTCS 159, 2014, pp. 2-14",Kant2014,True,,arXiv,Not available,Generating and Solving Symbolic Parity Games,a94e2c4dc1dc0474d18d23bc43c96372,http://arxiv.org/abs/1407.7928v1
17077," We present a new tool for verification of modal mu-calculus formulae for
process specifications, based on symbolic parity games. It enhances an existing
method that first encodes the problem into a Parameterised Boolean Equation
System (PBES) and then instantiates the PBES to a parity game. We improved the
translation from specification to PBES to preserve the structure of the
specification in the PBES, we extended LTSmin to instantiate PBESs to symbolic
parity games, and implemented the recursive parity game solving algorithm by
Zielonka for symbolic parity games. We use Multi-valued Decision Diagrams
(MDDs) to represent sets and relations, thus enabling the tools to deal with
very large systems. The transition relation is partitioned based on the
structure of the specification, which allows for efficient manipulation of the
MDDs. We performed two case studies on modular specifications, which demonstrate
that the new method has better time and memory performance than existing PBES
based tools and can be faster (but slightly less memory efficient) than the
symbolic model checker NuSMV.
",jaco pol,,2014.0,10.4204/EPTCS.159.2,"EPTCS 159, 2014, pp. 2-14",Kant2014,True,,arXiv,Not available,Generating and Solving Symbolic Parity Games,a94e2c4dc1dc0474d18d23bc43c96372,http://arxiv.org/abs/1407.7928v1
17078," We study a game with \emph{strategic} vendors who own multiple items and a
single buyer with a submodular valuation function. The goal of the vendors is
to maximize their revenue via pricing of the items, given that the buyer will
buy the set of items that maximizes his net payoff.
We show this game may not always have a pure Nash equilibrium, in contrast to
previous results for the special case where each vendor owns a single item. We
do so by relating our game to an intermediate, discrete game in which the
vendors only choose the available items, and their prices are set exogenously
afterwards.
We further make use of the intermediate game to provide tight bounds on the
price of anarchy for the subset games that have pure Nash equilibria; we find
that the optimal PoA achieved in the previous special cases no longer holds;
only a logarithmic bound does.
Finally, we show that for a special case of submodular functions, efficient
pure Nash equilibria always exist.
",omer lev,,2014.0,,arXiv,Lev2014,True,,arXiv,Not available,The Pricing War Continues: On Competitive Multi-Item Pricing,792f641016203119a075a13a64709e13,http://arxiv.org/abs/1408.0258v1
17079," We study a game with \emph{strategic} vendors who own multiple items and a
single buyer with a submodular valuation function. The goal of the vendors is
to maximize their revenue via pricing of the items, given that the buyer will
buy the set of items that maximizes his net payoff.
We show this game may not always have a pure Nash equilibrium, in contrast to
previous results for the special case where each vendor owns a single item. We
do so by relating our game to an intermediate, discrete game in which the
vendors only choose the available items, and their prices are set exogenously
afterwards.
We further make use of the intermediate game to provide tight bounds on the
price of anarchy for the subset games that have pure Nash equilibria; we find
that the optimal PoA achieved in the previous special cases no longer holds;
only a logarithmic bound does.
Finally, we show that for a special case of submodular functions, efficient
pure Nash equilibria always exist.
",joel oren,,2014.0,,arXiv,Lev2014,True,,arXiv,Not available,The Pricing War Continues: On Competitive Multi-Item Pricing,792f641016203119a075a13a64709e13,http://arxiv.org/abs/1408.0258v1
17080," We study a game with \emph{strategic} vendors who own multiple items and a
single buyer with a submodular valuation function. The goal of the vendors is
to maximize their revenue via pricing of the items, given that the buyer will
buy the set of items that maximizes his net payoff.
We show this game may not always have a pure Nash equilibrium, in contrast to
previous results for the special case where each vendor owns a single item. We
do so by relating our game to an intermediate, discrete game in which the
vendors only choose the available items, and their prices are set exogenously
afterwards.
We further make use of the intermediate game to provide tight bounds on the
price of anarchy for the subset games that have pure Nash equilibria; we find
that the optimal PoA achieved in the previous special cases no longer holds;
only a logarithmic bound does.
Finally, we show that for a special case of submodular functions, efficient
pure Nash equilibria always exist.
",craig boutilier,,2014.0,,arXiv,Lev2014,True,,arXiv,Not available,The Pricing War Continues: On Competitive Multi-Item Pricing,792f641016203119a075a13a64709e13,http://arxiv.org/abs/1408.0258v1
17081," We study a game with \emph{strategic} vendors who own multiple items and a
single buyer with a submodular valuation function. The goal of the vendors is
to maximize their revenue via pricing of the items, given that the buyer will
buy the set of items that maximizes his net payoff.
We show this game may not always have a pure Nash equilibrium, in contrast to
previous results for the special case where each vendor owns a single item. We
do so by relating our game to an intermediate, discrete game in which the
vendors only choose the available items, and their prices are set exogenously
afterwards.
We further make use of the intermediate game to provide tight bounds on the
price of anarchy for the subset games that have pure Nash equilibria; we find
that the optimal PoA achieved in the previous special cases no longer holds;
only a logarithmic bound does.
Finally, we show that for a special case of submodular functions, efficient
pure Nash equilibria always exist.
",jeffery rosenschein,,2014.0,,arXiv,Lev2014,True,,arXiv,Not available,The Pricing War Continues: On Competitive Multi-Item Pricing,792f641016203119a075a13a64709e13,http://arxiv.org/abs/1408.0258v1
17082," We study an ensemble of individuals playing the two games of the so-called
Parrondo paradox. In our study, players are allowed to choose the game to be
played by the whole ensemble in each turn. The choice cannot conform to the
preferences of all the players and, consequently, they face a simple
frustration phenomenon that requires some strategy to make a collective
decision. We consider several such strategies and analyze how fluctuations can
be used to improve the performance of the system.
",e. garcia-torano,,2014.0,10.1140/epjst/e2007-00068-0,"Eur. Phys. J. Special Topics 143, 39 (2007)",Parrondo2014,True,,arXiv,Not available,Collective decision making and paradoxical games,c655f281edf34ee886edc2b09cf69f10,http://arxiv.org/abs/1410.0241v1
17083," The Rock-Paper-Scissors (RPS) game is a widely used model system in game
theory. Evolutionary game theory predicts the existence of persistent cycles in
the evolutionary trajectories of the RPS game, but experimental evidence has
remained rather weak. In this work we performed laboratory experiments on
the RPS game and analyzed the social-state evolutionary trajectories of twelve
populations of N=6 players. We found strong evidence supporting the existence
of persistent cycles. The mean cycling frequency was measured to be $0.029 \pm
0.009$ period per experimental round. Our experimental observations can be
quantitatively explained by a simple non-equilibrium model, namely the
discrete-time logit dynamical process with a noise parameter. Our work
therefore favors the evolutionary game theory over the classical game theory
for describing the dynamical behavior of the RPS game.
",hai-jun zhou,,2013.0,10.1016/j.physa.2013.06.039,"Physica A: Statistical Mechanics and its Applications, Volume 392,
Issue 20, 15 October 2013, Pages 4997-5005",Xu2013,True,,arXiv,Not available,"Cycle frequency in standard Rock-Paper-Scissors games: Evidence from
experimental economics",2ca365565f2fef4334130062a868dbd3,http://arxiv.org/abs/1301.3238v3
17084," Nonlocality enables two parties to win specific games with probabilities
strictly higher than allowed by any classical theory. Nevertheless, all known
such examples consider games where the two parties have a common interest,
since they jointly win or lose the game. The main question we ask here is
whether the nonlocal feature of quantum mechanics can offer an advantage in a
scenario where the two parties have conflicting interests. We answer this in
the affirmative by presenting a simple conflicting interest game, where quantum
strategies outperform classical ones. Moreover, we show that our game has a
fair quantum equilibrium with higher payoffs for both players than in any fair
classical equilibrium. Finally, we play the game using a commercial entangled
photon source and demonstrate experimentally the quantum advantage.
",anna pappa,,2014.0,10.1103/PhysRevLett.114.020401,"Phys. Rev. Lett. 114, 020401 (2015)",Pappa2014,True,,arXiv,Not available,Nonlocality and conflicting interest games,03d69aa3049628b271a8a8dfb4ebba7d,http://arxiv.org/abs/1408.3281v3
17085," Nonlocality enables two parties to win specific games with probabilities
strictly higher than allowed by any classical theory. Nevertheless, all known
such examples consider games where the two parties have a common interest,
since they jointly win or lose the game. The main question we ask here is
whether the nonlocal feature of quantum mechanics can offer an advantage in a
scenario where the two parties have conflicting interests. We answer this in
the affirmative by presenting a simple conflicting interest game, where quantum
strategies outperform classical ones. Moreover, we show that our game has a
fair quantum equilibrium with higher payoffs for both players than in any fair
classical equilibrium. Finally, we play the game using a commercial entangled
photon source and demonstrate experimentally the quantum advantage.
",niraj kumar,,2014.0,10.1103/PhysRevLett.114.020401,"Phys. Rev. Lett. 114, 020401 (2015)",Pappa2014,True,,arXiv,Not available,Nonlocality and conflicting interest games,03d69aa3049628b271a8a8dfb4ebba7d,http://arxiv.org/abs/1408.3281v3
17086," Nonlocality enables two parties to win specific games with probabilities
strictly higher than allowed by any classical theory. Nevertheless, all known
such examples consider games where the two parties have a common interest,
since they jointly win or lose the game. The main question we ask here is
whether the nonlocal feature of quantum mechanics can offer an advantage in a
scenario where the two parties have conflicting interests. We answer this in
the affirmative by presenting a simple conflicting interest game, where quantum
strategies outperform classical ones. Moreover, we show that our game has a
fair quantum equilibrium with higher payoffs for both players than in any fair
classical equilibrium. Finally, we play the game using a commercial entangled
photon source and demonstrate experimentally the quantum advantage.
",thomas lawson,,2014.0,10.1103/PhysRevLett.114.020401,"Phys. Rev. Lett. 114, 020401 (2015)",Pappa2014,True,,arXiv,Not available,Nonlocality and conflicting interest games,03d69aa3049628b271a8a8dfb4ebba7d,http://arxiv.org/abs/1408.3281v3
17087," Nonlocality enables two parties to win specific games with probabilities
strictly higher than allowed by any classical theory. Nevertheless, all known
such examples consider games where the two parties have a common interest,
since they jointly win or lose the game. The main question we ask here is
whether the nonlocal feature of quantum mechanics can offer an advantage in a
scenario where the two parties have conflicting interests. We answer this in
the affirmative by presenting a simple conflicting interest game, where quantum
strategies outperform classical ones. Moreover, we show that our game has a
fair quantum equilibrium with higher payoffs for both players than in any fair
classical equilibrium. Finally, we play the game using a commercial entangled
photon source and demonstrate experimentally the quantum advantage.
",miklos santha,,2014.0,10.1103/PhysRevLett.114.020401,"Phys. Rev. Lett. 114, 020401 (2015)",Pappa2014,True,,arXiv,Not available,Nonlocality and conflicting interest games,03d69aa3049628b271a8a8dfb4ebba7d,http://arxiv.org/abs/1408.3281v3
17088," Nonlocality enables two parties to win specific games with probabilities
strictly higher than allowed by any classical theory. Nevertheless, all known
such examples consider games where the two parties have a common interest,
since they jointly win or lose the game. The main question we ask here is
whether the nonlocal feature of quantum mechanics can offer an advantage in a
scenario where the two parties have conflicting interests. We answer this in
the affirmative by presenting a simple conflicting interest game, where quantum
strategies outperform classical ones. Moreover, we show that our game has a
fair quantum equilibrium with higher payoffs for both players than in any fair
classical equilibrium. Finally, we play the game using a commercial entangled
photon source and demonstrate experimentally the quantum advantage.
",shengyu zhang,,2014.0,10.1103/PhysRevLett.114.020401,"Phys. Rev. Lett. 114, 020401 (2015)",Pappa2014,True,,arXiv,Not available,Nonlocality and conflicting interest games,03d69aa3049628b271a8a8dfb4ebba7d,http://arxiv.org/abs/1408.3281v3
17089," Nonlocality enables two parties to win specific games with probabilities
strictly higher than allowed by any classical theory. Nevertheless, all known
such examples consider games where the two parties have a common interest,
since they jointly win or lose the game. The main question we ask here is
whether the nonlocal feature of quantum mechanics can offer an advantage in a
scenario where the two parties have conflicting interests. We answer this in
the affirmative by presenting a simple conflicting interest game, where quantum
strategies outperform classical ones. Moreover, we show that our game has a
fair quantum equilibrium with higher payoffs for both players than in any fair
classical equilibrium. Finally, we play the game using a commercial entangled
photon source and demonstrate experimentally the quantum advantage.
",eleni diamanti,,2014.0,10.1103/PhysRevLett.114.020401,"Phys. Rev. Lett. 114, 020401 (2015)",Pappa2014,True,,arXiv,Not available,Nonlocality and conflicting interest games,03d69aa3049628b271a8a8dfb4ebba7d,http://arxiv.org/abs/1408.3281v3
17090," Nonlocality enables two parties to win specific games with probabilities
strictly higher than allowed by any classical theory. Nevertheless, all known
such examples consider games where the two parties have a common interest,
since they jointly win or lose the game. The main question we ask here is
whether the nonlocal feature of quantum mechanics can offer an advantage in a
scenario where the two parties have conflicting interests. We answer this in
the affirmative by presenting a simple conflicting interest game, where quantum
strategies outperform classical ones. Moreover, we show that our game has a
fair quantum equilibrium with higher payoffs for both players than in any fair
classical equilibrium. Finally, we play the game using a commercial entangled
photon source and demonstrate experimentally the quantum advantage.
",iordanis kerenidis,,2014.0,10.1103/PhysRevLett.114.020401,"Phys. Rev. Lett. 114, 020401 (2015)",Pappa2014,True,,arXiv,Not available,Nonlocality and conflicting interest games,03d69aa3049628b271a8a8dfb4ebba7d,http://arxiv.org/abs/1408.3281v3
17091," We revisit in this paper the relation between evolution of species and the
mathematical tool of evolutionary games, which has been used to model and
predict it. We point out a known shortcoming of this model that restricts the
capacity of evolutionary games to model groups of individuals that share a
common gene or a common fitness function. In this paper we provide a new
concept to remedy this shortcoming in standard evolutionary games in order
to cover this kind of behavior. Further, we explore the relationship between
this new concept and Nash equilibrium or ESS. We show, through the study of
examples from biology such as the Hawk-Dove game, the Stag Hunt game and the
Prisoner's Dilemma, that when taking into account a utility that is common to
a group of individuals, the equilibrium structure may change dramatically. We
also study multiple access control in slotted Aloha based wireless networks. We
analyze the impact of altruistic behavior on the performance at the
equilibrium.
",ilaria brunetti,,2014.0,,arXiv,Brunetti2014,True,,arXiv,Not available,Altruism in groups: an evolutionary games approach,81fbe9038ff8a8c4e5bc83d2049535f4,http://arxiv.org/abs/1409.7288v2
17092," We revisit in this paper the relation between evolution of species and the
mathematical tool of evolutionary games, which has been used to model and
predict it. We point out a known shortcoming of this model that restricts the
capacity of evolutionary games to model groups of individuals that share a
common gene or a common fitness function. In this paper we provide a new
concept to remedy this shortcoming in standard evolutionary games in order
to cover this kind of behavior. Further, we explore the relationship between
this new concept and Nash equilibrium or ESS. We show, through the study of
examples from biology such as the Hawk-Dove game, the Stag Hunt game and the
Prisoner's Dilemma, that when taking into account a utility that is common to
a group of individuals, the equilibrium structure may change dramatically. We
also study multiple access control in slotted Aloha based wireless networks. We
analyze the impact of altruistic behavior on the performance at the
equilibrium.
",rachid el-azouzi,,2014.0,,arXiv,Brunetti2014,True,,arXiv,Not available,Altruism in groups: an evolutionary games approach,81fbe9038ff8a8c4e5bc83d2049535f4,http://arxiv.org/abs/1409.7288v2
17093," We revisit in this paper the relation between evolution of species and the
mathematical tool of evolutionary games, which has been used to model and
predict it. We point out a known shortcoming of this model that restricts the
capacity of evolutionary games to model groups of individuals that share a
common gene or a common fitness function. In this paper we provide a new
concept to remedy this shortcoming in standard evolutionary games in order
to cover this kind of behavior. Further, we explore the relationship between
this new concept and Nash equilibrium or ESS. We show, through the study of
examples from biology such as the Hawk-Dove game, the Stag Hunt game and the
Prisoner's Dilemma, that when taking into account a utility that is common to
a group of individuals, the equilibrium structure may change dramatically. We
also study multiple access control in slotted Aloha based wireless networks. We
analyze the impact of altruistic behavior on the performance at the
equilibrium.
",eitan altman,,2014.0,,arXiv,Brunetti2014,True,,arXiv,Not available,Altruism in groups: an evolutionary games approach,81fbe9038ff8a8c4e5bc83d2049535f4,http://arxiv.org/abs/1409.7288v2
17094," The Rock-Paper-Scissors (RPS) game is a widely used model system in game
theory. Evolutionary game theory predicts the existence of persistent cycles in
the evolutionary trajectories of the RPS game, but experimental evidence has
remained rather weak. In this work we performed laboratory experiments on
the RPS game and analyzed the social-state evolutionary trajectories of twelve
populations of N=6 players. We found strong evidence supporting the existence
of persistent cycles. The mean cycling frequency was measured to be $0.029 \pm
0.009$ period per experimental round. Our experimental observations can be
quantitatively explained by a simple non-equilibrium model, namely the
discrete-time logit dynamical process with a noise parameter. Our work
therefore favors the evolutionary game theory over the classical game theory
for describing the dynamical behavior of the RPS game.
",zhijian wang,,2013.0,10.1016/j.physa.2013.06.039,"Physica A: Statistical Mechanics and its Applications, Volume 392,
Issue 20, 15 October 2013, Pages 4997-5005",Xu2013,True,,arXiv,Not available,"Cycle frequency in standard Rock-Paper-Scissors games: Evidence from
experimental economics",2ca365565f2fef4334130062a868dbd3,http://arxiv.org/abs/1301.3238v3
17095," We study strong equilibria in symmetric capacitated cost-sharing games. In
these games, a graph with designated source $s$ and sink $t$ is given, and each
edge is associated with some cost. Each agent chooses strategically an $s$-$t$
path, knowing that the cost of each edge is shared equally between all agents
using it. Two variants of cost-sharing games have been previously studied: (i)
games where coalitions can form, and (ii) games where edges are associated with
capacities; both variants are inspired by real-life scenarios. In this work we
combine these variants and analyze strong equilibria (profiles where no
coalition can deviate) in capacitated games. This combination gives rise to new
phenomena that do not occur in the previous variants. Our contribution is
two-fold. First, we provide a topological characterization of networks that
always admit a strong equilibrium. Second, we establish tight bounds on the
efficiency loss that may be incurred due to strategic behavior, as quantified
by the strong price of anarchy (and stability) measures. Interestingly, our
results are qualitatively different than those obtained in the analysis of each
variant alone, and the combination of coalitions and capacities entails the
introduction of more refined topology classes than previously studied.
",michal feldman,,2014.0,,arXiv,Feldman2014,True,,arXiv,Not available,Do Capacity Constraints Constrain Coalitions?,2fcb0978720f53779aed16dd1933550c,http://arxiv.org/abs/1411.5712v3
17096," We study strong equilibria in symmetric capacitated cost-sharing games. In
these games, a graph with designated source $s$ and sink $t$ is given, and each
edge is associated with some cost. Each agent chooses strategically an $s$-$t$
path, knowing that the cost of each edge is shared equally between all agents
using it. Two variants of cost-sharing games have been previously studied: (i)
games where coalitions can form, and (ii) games where edges are associated with
capacities; both variants are inspired by real-life scenarios. In this work we
combine these variants and analyze strong equilibria (profiles where no
coalition can deviate) in capacitated games. This combination gives rise to new
phenomena that do not occur in the previous variants. Our contribution is
two-fold. First, we provide a topological characterization of networks that
always admit a strong equilibrium. Second, we establish tight bounds on the
efficiency loss that may be incurred due to strategic behavior, as quantified
by the strong price of anarchy (and stability) measures. Interestingly, our
results are qualitatively different than those obtained in the analysis of each
variant alone, and the combination of coalitions and capacities entails the
introduction of more refined topology classes than previously studied.
",ofir geri,,2014.0,,arXiv,Feldman2014,True,,arXiv,Not available,Do Capacity Constraints Constrain Coalitions?,2fcb0978720f53779aed16dd1933550c,http://arxiv.org/abs/1411.5712v3
17097," Emek et al. presented a model of probabilistic single-item second price
auctions where an auctioneer who is informed about the type of an item for
sale, broadcasts a signal about this type to uninformed bidders. They proved
that finding the optimal (for the purpose of generating revenue) {\em pure}
signaling scheme is strongly NP-hard. In contrast, we prove that finding the
optimal {\em mixed} signaling scheme can be done in polynomial time using
linear programming. For the proof, we show that the problem is strongly related
to a problem of optimally bundling divisible goods for auctioning. We also
prove that a mixed signaling scheme can in some cases generate twice as much
revenue as the best pure signaling scheme and we prove a generally applicable
lower bound on the revenue generated by the best mixed signaling scheme.
",peter miltersen,,2012.0,,arXiv,Miltersen2012,True,,arXiv,Not available,"Send Mixed Signals -- Earn More, Work Less",7b313c269e5871f1b8c7943eb1f6cd69,http://arxiv.org/abs/1202.1483v1
17098," Emek et al. presented a model of probabilistic single-item second price
auctions where an auctioneer who is informed about the type of an item for
sale broadcasts a signal about this type to uninformed bidders. They proved
that finding the optimal (for the purpose of generating revenue) {\em pure}
signaling scheme is strongly NP-hard. In contrast, we prove that finding the
optimal {\em mixed} signaling scheme can be done in polynomial time using
linear programming. For the proof, we show that the problem is strongly related
to a problem of optimally bundling divisible goods for auctioning. We also
prove that a mixed signaling scheme can in some cases generate twice as much
revenue as the best pure signaling scheme and we prove a generally applicable
lower bound on the revenue generated by the best mixed signaling scheme.
",or sheffet,,2012.0,,arXiv,Miltersen2012,True,,arXiv,Not available,"Send Mixed Signals -- Earn More, Work Less",7b313c269e5871f1b8c7943eb1f6cd69,http://arxiv.org/abs/1202.1483v1
17099," We consider the unit-demand envy-free pricing problem, which is a unit-demand
auction where each bidder receives an item that maximizes his utility, and the
goal is to maximize the auctioneer's profit. This problem is NP-hard and
unlikely to be in APX. We present four new MIP formulations for it and
experimentally compare them to a previous one due to Shioda, Tun\c{c}el, and
Myklebust. We describe three models to generate different random instances for
general unit-demand auctions, which we designed for the computational
experiments. Each model has a nice economic interpretation. Aiming at
approximation results, we consider the variant of the problem where the item
prices are restricted to be chosen from a geometric series, and prove that an
optimal solution for this variant has value that is a fraction (depending on
the series used) of the optimal value of the original problem. So this variant
is also unlikely to be in APX.
",cristina fernandes,,2013.0,,arXiv,Fernandes2013,True,,arXiv,Not available,The Unit-Demand Envy-Free Pricing Problem,845cbfcbe8e5bfc6a4e881b59c240328,http://arxiv.org/abs/1310.0038v1
17100," We consider the unit-demand envy-free pricing problem, which is a unit-demand
auction where each bidder receives an item that maximizes his utility, and the
goal is to maximize the auctioneer's profit. This problem is NP-hard and
unlikely to be in APX. We present four new MIP formulations for it and
experimentally compare them to a previous one due to Shioda, Tun\c{c}el, and
Myklebust. We describe three models to generate different random instances for
general unit-demand auctions, which we designed for the computational
experiments. Each model has a nice economic interpretation. Aiming at
approximation results, we consider the variant of the problem where the item
prices are restricted to be chosen from a geometric series, and prove that an
optimal solution for this variant has value that is a fraction (depending on
the series used) of the optimal value of the original problem. So this variant
is also unlikely to be in APX.
",carlos ferreira,,2013.0,,arXiv,Fernandes2013,True,,arXiv,Not available,The Unit-Demand Envy-Free Pricing Problem,845cbfcbe8e5bfc6a4e881b59c240328,http://arxiv.org/abs/1310.0038v1
17101," We consider the unit-demand envy-free pricing problem, which is a unit-demand
auction where each bidder receives an item that maximizes his utility, and the
goal is to maximize the auctioneer's profit. This problem is NP-hard and
unlikely to be in APX. We present four new MIP formulations for it and
experimentally compare them to a previous one due to Shioda, Tun\c{c}el, and
Myklebust. We describe three models to generate different random instances for
general unit-demand auctions, which we designed for the computational
experiments. Each model has a nice economic interpretation. Aiming at
approximation results, we consider the variant of the problem where the item
prices are restricted to be chosen from a geometric series, and prove that an
optimal solution for this variant has value that is a fraction (depending on
the series used) of the optimal value of the original problem. So this variant
is also unlikely to be in APX.
",alvaro franco,,2013.0,,arXiv,Fernandes2013,True,,arXiv,Not available,The Unit-Demand Envy-Free Pricing Problem,845cbfcbe8e5bfc6a4e881b59c240328,http://arxiv.org/abs/1310.0038v1
17102," We consider the unit-demand envy-free pricing problem, which is a unit-demand
auction where each bidder receives an item that maximizes his utility, and the
goal is to maximize the auctioneer's profit. This problem is NP-hard and
unlikely to be in APX. We present four new MIP formulations for it and
experimentally compare them to a previous one due to Shioda, Tun\c{c}el, and
Myklebust. We describe three models to generate different random instances for
general unit-demand auctions that we designed for the computational
experiments. Each model has a nice economic interpretation. Aiming at
approximation results, we consider the variant of the problem where the item
prices are restricted to be chosen from a geometric series, and prove that an
optimal solution for this variant has a value that is a fraction (depending on
the series used) of the optimal value of the original problem. So this variant
is also unlikely to be in APX.
",rafael schouery,,2013.0,,arXiv,Fernandes2013,True,,arXiv,Not available,The Unit-Demand Envy-Free Pricing Problem,845cbfcbe8e5bfc6a4e881b59c240328,http://arxiv.org/abs/1310.0038v1
17103," In this paper we propose a two-stage protocol for resource management in a
hierarchically organized cloud. The first stage exploits spatial locality for
the formation of coalitions of supply agents; the second stage, a combinatorial
auction, is based on a modified proxy-based clock algorithm and has two phases,
a clock phase and a proxy phase. The clock phase supports price discovery; in
the second phase a proxy conducts multiple rounds of a combinatorial auction
for the package of services requested by each client. The protocol strikes a
balance between low-cost services for cloud clients and a decent profit for the
service providers. We also report the results of an empirical investigation of
the combinatorial auction stage of the protocol.
",dan marinescu,,2014.0,,arXiv,Marinescu2014,True,,arXiv,Not available,"Coalition Formation and Combinatorial Auctions; Applications to
Self-organization and Self-management in Utility Computing",6b78f2f71b122eb52dc94de14258cc96,http://arxiv.org/abs/1406.7487v3
17104," In this paper we propose a two-stage protocol for resource management in a
hierarchically organized cloud. The first stage exploits spatial locality for
the formation of coalitions of supply agents; the second stage, a combinatorial
auction, is based on a modified proxy-based clock algorithm and has two phases,
a clock phase and a proxy phase. The clock phase supports price discovery; in
the second phase a proxy conducts multiple rounds of a combinatorial auction
for the package of services requested by each client. The protocol strikes a
balance between low-cost services for cloud clients and a decent profit for the
service providers. We also report the results of an empirical investigation of
the combinatorial auction stage of the protocol.
",ashkan paya,,2014.0,,arXiv,Marinescu2014,True,,arXiv,Not available,"Coalition Formation and Combinatorial Auctions; Applications to
Self-organization and Self-management in Utility Computing",6b78f2f71b122eb52dc94de14258cc96,http://arxiv.org/abs/1406.7487v3
17105," The game-theoretic risk management framework put forth in the precursor work
""Towards a Theory of Games with Payoffs that are Probability-Distributions""
(arXiv:1506.07368 [q-fin.EC]) is herein extended by algorithmic details on how
to compute equilibria in games where the payoffs are probability distributions.
Our approach is ""data driven"" in the sense that we assume empirical data
(measurements, simulation, etc.) to be available that can be compiled into
distribution models, which are suitable for efficient decisions about
preferences, and setting up and solving games using these as payoffs. While
preferences among distributions turn out to be quite simple if nonparametric
methods (kernel density estimates) are used, computing Nash-equilibria in games
using such models turns out to be inefficient (if not impossible). In fact, we
give a counterexample in which fictitious play fails to converge for the
(specifically unfortunate) choice of payoff distributions in the game, and
introduce a suitable tail approximation of the payoff densities to tackle the
issue. The overall procedure is essentially a modified version of fictitious
play, and is herein described for standard and multicriteria games, to
iteratively deliver an (approximate) Nash-equilibrium.
",stefan rass,,2015.0,,arXiv,Rass2015,True,,arXiv,Not available,"On Game-Theoretic Risk Management (Part Two) - Algorithms to Compute
Nash-Equilibria in Games with Distributions as Payoffs",ef019ca1ec22eefe3c5d9313ac0f5c84,http://arxiv.org/abs/1511.08591v1
17106," In this paper we propose a two-stage protocol for resource management in a
hierarchically organized cloud. The first stage exploits spatial locality for
the formation of coalitions of supply agents; the second stage, a combinatorial
auction, is based on a modified proxy-based clock algorithm and has two phases,
a clock phase and a proxy phase. The clock phase supports price discovery; in
the second phase a proxy conducts multiple rounds of a combinatorial auction
for the package of services requested by each client. The protocol strikes a
balance between low-cost services for cloud clients and a decent profit for the
service providers. We also report the results of an empirical investigation of
the combinatorial auction stage of the protocol.
",john morrison,,2014.0,,arXiv,Marinescu2014,True,,arXiv,Not available,"Coalition Formation and Combinatorial Auctions; Applications to
Self-organization and Self-management in Utility Computing",6b78f2f71b122eb52dc94de14258cc96,http://arxiv.org/abs/1406.7487v3
17107," We provide a constructive proof of Border's theorem [Bor91, HR15a] and its
generalization to reduced-form auctions with asymmetric bidders [Bor07, MV10,
CKM13]. Given a reduced form, we identify a subset of Border constraints that
are necessary and sufficient to determine its feasibility. Importantly, the
number of these constraints is linear in the total number of bidder types. In
addition, we provide a characterization result showing that every feasible
reduced form can be induced by an ex-post allocation rule that is a
distribution over ironings of the same total ordering of the union of all
bidders' types.
We show how to leverage our results for single-item reduced forms to design
auctions with heterogeneous items and asymmetric bidders with valuations that
are additive over items. Appealing to our constructive Border's theorem, we
obtain polynomial-time algorithms for computing the revenue-optimal auction.
Appealing to our characterization of feasible reduced forms, we characterize
feasible multi-item allocation rules.
",yang cai,,2011.0,,arXiv,Cai2011,True,,arXiv,Not available,"A Constructive Approach to Reduced-Form Auctions with Applications to
Multi-Item Mechanism Design",c41e508484554972f471c2a58031ce6b,http://arxiv.org/abs/1112.4572v2
17108," We provide a constructive proof of Border's theorem [Bor91, HR15a] and its
generalization to reduced-form auctions with asymmetric bidders [Bor07, MV10,
CKM13]. Given a reduced form, we identify a subset of Border constraints that
are necessary and sufficient to determine its feasibility. Importantly, the
number of these constraints is linear in the total number of bidder types. In
addition, we provide a characterization result showing that every feasible
reduced form can be induced by an ex-post allocation rule that is a
distribution over ironings of the same total ordering of the union of all
bidders' types.
We show how to leverage our results for single-item reduced forms to design
auctions with heterogeneous items and asymmetric bidders with valuations that
are additive over items. Appealing to our constructive Border's theorem, we
obtain polynomial-time algorithms for computing the revenue-optimal auction.
Appealing to our characterization of feasible reduced forms, we characterize
feasible multi-item allocation rules.
",constantinos daskalakis,,2011.0,,arXiv,Cai2011,True,,arXiv,Not available,"A Constructive Approach to Reduced-Form Auctions with Applications to
Multi-Item Mechanism Design",c41e508484554972f471c2a58031ce6b,http://arxiv.org/abs/1112.4572v2
17109," We provide a constructive proof of Border's theorem [Bor91, HR15a] and its
generalization to reduced-form auctions with asymmetric bidders [Bor07, MV10,
CKM13]. Given a reduced form, we identify a subset of Border constraints that
are necessary and sufficient to determine its feasibility. Importantly, the
number of these constraints is linear in the total number of bidder types. In
addition, we provide a characterization result showing that every feasible
reduced form can be induced by an ex-post allocation rule that is a
distribution over ironings of the same total ordering of the union of all
bidders' types.
We show how to leverage our results for single-item reduced forms to design
auctions with heterogeneous items and asymmetric bidders with valuations that
are additive over items. Appealing to our constructive Border's theorem, we
obtain polynomial-time algorithms for computing the revenue-optimal auction.
Appealing to our characterization of feasible reduced forms, we characterize
feasible multi-item allocation rules.
",s. weinberg,,2011.0,,arXiv,Cai2011,True,,arXiv,Not available,"A Constructive Approach to Reduced-Form Auctions with Applications to
Multi-Item Mechanism Design",c41e508484554972f471c2a58031ce6b,http://arxiv.org/abs/1112.4572v2
17110," We study a class of iterative combinatorial auctions which can be viewed as
subgradient descent methods for the problem of pricing bundles to balance
supply and demand. We provide concrete convergence rates for auctions in this
class, bounding the number of auction rounds needed to reach clearing prices.
Our analysis allows for a variety of pricing schemes, including item, bundle,
and polynomial pricing, and the respective convergence rates confirm that more
expressive pricing schemes come at the cost of slower convergence. We consider
two models of bidder behavior. In the first model, bidders behave
stochastically according to a random utility model, which includes standard
best-response bidding as a special case. In the second model, bidders behave
arbitrarily (even adversarially), and meaningful convergence relies on properly
designed activity rules.
",jacob abernethy,,2015.0,,arXiv,Abernethy2015,True,,arXiv,Not available,Rate of Price Discovery in Iterative Combinatorial Auctions,c7334ae1c56eee1f13158373e466fc31,http://arxiv.org/abs/1511.06017v2
17111," We study a class of iterative combinatorial auctions which can be viewed as
subgradient descent methods for the problem of pricing bundles to balance
supply and demand. We provide concrete convergence rates for auctions in this
class, bounding the number of auction rounds needed to reach clearing prices.
Our analysis allows for a variety of pricing schemes, including item, bundle,
and polynomial pricing, and the respective convergence rates confirm that more
expressive pricing schemes come at the cost of slower convergence. We consider
two models of bidder behavior. In the first model, bidders behave
stochastically according to a random utility model, which includes standard
best-response bidding as a special case. In the second model, bidders behave
arbitrarily (even adversarially), and meaningful convergence relies on properly
designed activity rules.
",sebastien lahaie,,2015.0,,arXiv,Abernethy2015,True,,arXiv,Not available,Rate of Price Discovery in Iterative Combinatorial Auctions,c7334ae1c56eee1f13158373e466fc31,http://arxiv.org/abs/1511.06017v2
17112," We study a class of iterative combinatorial auctions which can be viewed as
subgradient descent methods for the problem of pricing bundles to balance
supply and demand. We provide concrete convergence rates for auctions in this
class, bounding the number of auction rounds needed to reach clearing prices.
Our analysis allows for a variety of pricing schemes, including item, bundle,
and polynomial pricing, and the respective convergence rates confirm that more
expressive pricing schemes come at the cost of slower convergence. We consider
two models of bidder behavior. In the first model, bidders behave
stochastically according to a random utility model, which includes standard
best-response bidding as a special case. In the second model, bidders behave
arbitrarily (even adversarially), and meaningful convergence relies on properly
designed activity rules.
",matus telgarsky,,2015.0,,arXiv,Abernethy2015,True,,arXiv,Not available,Rate of Price Discovery in Iterative Combinatorial Auctions,c7334ae1c56eee1f13158373e466fc31,http://arxiv.org/abs/1511.06017v2
17113," Investigating potential purchases is often a substantial investment under
uncertainty. Standard market designs, such as simultaneous or English auctions,
compound this with uncertainty about the price a bidder will have to pay in
order to win. As a result they tend to confuse the process of search both by
leading to wasteful information acquisition on goods that have already found a
good purchaser and by discouraging needed investigations of objects,
potentially eliminating all gains from trade. In contrast, we show that the
Dutch auction preserves all of its properties from a standard setting without
information costs because it guarantees, at the time of information
acquisition, a price at which the good can be purchased. Calibrations to
start-up acquisition and timber auctions suggest that in practice the social
losses through poor search coordination in standard formats are an order of
magnitude or two larger than the (negligible) inefficiencies arising from
ex-ante bidder asymmetries.
",robert kleinberg,,2016.0,,arXiv,Kleinberg2016,True,,arXiv,Not available,Descending Price Optimally Coordinates Search,e3da2b7ced1e9886b2705741790555d3,http://arxiv.org/abs/1603.07682v3
17114," Investigating potential purchases is often a substantial investment under
uncertainty. Standard market designs, such as simultaneous or English auctions,
compound this with uncertainty about the price a bidder will have to pay in
order to win. As a result they tend to confuse the process of search both by
leading to wasteful information acquisition on goods that have already found a
good purchaser and by discouraging needed investigations of objects,
potentially eliminating all gains from trade. In contrast, we show that the
Dutch auction preserves all of its properties from a standard setting without
information costs because it guarantees, at the time of information
acquisition, a price at which the good can be purchased. Calibrations to
start-up acquisition and timber auctions suggest that in practice the social
losses through poor search coordination in standard formats are an order of
magnitude or two larger than the (negligible) inefficiencies arising from
ex-ante bidder asymmetries.
",bo waggoner,,2016.0,,arXiv,Kleinberg2016,True,,arXiv,Not available,Descending Price Optimally Coordinates Search,e3da2b7ced1e9886b2705741790555d3,http://arxiv.org/abs/1603.07682v3
17115," Investigating potential purchases is often a substantial investment under
uncertainty. Standard market designs, such as simultaneous or English auctions,
compound this with uncertainty about the price a bidder will have to pay in
order to win. As a result they tend to confuse the process of search both by
leading to wasteful information acquisition on goods that have already found a
good purchaser and by discouraging needed investigations of objects,
potentially eliminating all gains from trade. In contrast, we show that the
Dutch auction preserves all of its properties from a standard setting without
information costs because it guarantees, at the time of information
acquisition, a price at which the good can be purchased. Calibrations to
start-up acquisition and timber auctions suggest that in practice the social
losses through poor search coordination in standard formats are an order of
magnitude or two larger than the (negligible) inefficiencies arising from
ex-ante bidder asymmetries.
",e. weyl,,2016.0,,arXiv,Kleinberg2016,True,,arXiv,Not available,Descending Price Optimally Coordinates Search,e3da2b7ced1e9886b2705741790555d3,http://arxiv.org/abs/1603.07682v3
17116," Recent developments in quantum computation and quantum information
theory allow the scope of game theory to be extended to the quantum world. The
authors have recently proposed a quantum description of financial markets in
terms of quantum game theory. The paper contains an analysis of such markets
showing that there would be an advantage in using quantum computers and
quantum strategies.
",edward piotrowski,,2003.0,,arXiv,Piotrowski2003,True,,arXiv,Not available,Quantum computer: an appliance for playing market games,53e1f2dea8335a2a322383370a54f493,http://arxiv.org/abs/quant-ph/0305017v1
17117," Buyers (e.g., advertisers) often have limited financial and processing
resources, and so their participation in auctions is throttled. Changes to
auctions may affect bids or throttling, and any change may affect what winners
pay. This paper shows that if an A/B experiment affects only bids, then the
observed treatment effect is unbiased when all the bidders in an auction are
randomly assigned to A or B but it can be severely biased otherwise, even in
the absence of throttling. Experiments that affect throttling algorithms can
also be badly biased, but the bias can be substantially reduced if the budget
for each advertiser in the experiment is allocated to separate pots for the A
and B arms of the experiment.
",guillaume basse,,2016.0,,arXiv,Basse2016,True,,arXiv,Not available,"Randomization and The Pernicious Effects of Limited Budgets on Auction
Experiments",769890534f4756b08d18512f5114bd90,http://arxiv.org/abs/1605.09171v1
17118," Buyers (e.g., advertisers) often have limited financial and processing
resources, and so their participation in auctions is throttled. Changes to
auctions may affect bids or throttling, and any change may affect what winners
pay. This paper shows that if an A/B experiment affects only bids, then the
observed treatment effect is unbiased when all the bidders in an auction are
randomly assigned to A or B but it can be severely biased otherwise, even in
the absence of throttling. Experiments that affect throttling algorithms can
also be badly biased, but the bias can be substantially reduced if the budget
for each advertiser in the experiment is allocated to separate pots for the A
and B arms of the experiment.
",hossein soufiani,,2016.0,,arXiv,Basse2016,True,,arXiv,Not available,"Randomization and The Pernicious Effects of Limited Budgets on Auction
Experiments",769890534f4756b08d18512f5114bd90,http://arxiv.org/abs/1605.09171v1
17119," Buyers (e.g., advertisers) often have limited financial and processing
resources, and so their participation in auctions is throttled. Changes to
auctions may affect bids or throttling, and any change may affect what winners
pay. This paper shows that if an A/B experiment affects only bids, then the
observed treatment effect is unbiased when all the bidders in an auction are
randomly assigned to A or B but it can be severely biased otherwise, even in
the absence of throttling. Experiments that affect throttling algorithms can
also be badly biased, but the bias can be substantially reduced if the budget
for each advertiser in the experiment is allocated to separate pots for the A
and B arms of the experiment.
",diane lambert,,2016.0,,arXiv,Basse2016,True,,arXiv,Not available,"Randomization and The Pernicious Effects of Limited Budgets on Auction
Experiments",769890534f4756b08d18512f5114bd90,http://arxiv.org/abs/1605.09171v1
17120," We study the problem of designing revenue-maximizing auctions for allocating
multiple goods to flexible consumers. In our model, each consumer is interested
in a subset of goods known as its flexibility set and wants to consume one good
from this set. A consumer's flexibility set and its utility from consuming a
good from its flexibility set are its private information. We focus on the case
of nested flexibility sets --- each consumer's flexibility set can be one of
$k$ nested sets. We provide several examples where such nested flexibility sets
may arise. We characterize the allocation rule for an incentive compatible,
individually rational and revenue-maximizing auction as the solution to an
integer program. The corresponding payment rule is described by an integral
equation. We then leverage the nestedness of flexibility sets to simplify the
optimal auction and provide a complete characterization of allocations and
payments in terms of simple thresholds.
",shiva navabi,,2016.0,,arXiv,Navabi2016,True,,arXiv,Not available,Optimal Auction Design for Flexible Consumers,241873d1871430910bdb5c2fdb9e9f0a,http://arxiv.org/abs/1607.02526v4
17121," We study the problem of designing revenue-maximizing auctions for allocating
multiple goods to flexible consumers. In our model, each consumer is interested
in a subset of goods known as its flexibility set and wants to consume one good
from this set. A consumer's flexibility set and its utility from consuming a
good from its flexibility set are its private information. We focus on the case
of nested flexibility sets --- each consumer's flexibility set can be one of
$k$ nested sets. We provide several examples where such nested flexibility sets
may arise. We characterize the allocation rule for an incentive compatible,
individually rational and revenue-maximizing auction as the solution to an
integer program. The corresponding payment rule is described by an integral
equation. We then leverage the nestedness of flexibility sets to simplify the
optimal auction and provide a complete characterization of allocations and
payments in terms of simple thresholds.
",ashutosh nayyar,,2016.0,,arXiv,Navabi2016,True,,arXiv,Not available,Optimal Auction Design for Flexible Consumers,241873d1871430910bdb5c2fdb9e9f0a,http://arxiv.org/abs/1607.02526v4
17122," In this work, we propose a multi-layer market for vehicle-to-grid energy
trading. In the macro layer, we consider a double auction mechanism, under
which the utility company acts as the auctioneer and energy buyers and sellers
interact. This double auction mechanism is strategy-proof and converges
asymptotically. In the micro layer, the aggregators, which are the sellers in
the macro layer, are paid with commissions to sell the energy of plug-in hybrid
electric vehicles (PHEVs) and to maximize their utilities. We analyze the
interaction between the macro and micro layers and study some simplified cases.
Depending on the elasticity of supply and demand, the utility is analyzed under
different scenarios. Simulation results show that our approach can
significantly increase the utility of PHEVs.
",albert lam,,2016.0,10.1109/INFCOMW.2012.6193525,"IEEE INFOCOM Workshop on Green Networking and Smart Grids (CCSES),
Mar 2012, Orlando, Florida, United States. pp.85 - 90",Lam2016,True,,arXiv,Not available,"A multi-layer market for vehicle-to-grid energy trading in the smart
grid",4b313c31ee1a0701844226cd0863f8c2,http://arxiv.org/abs/1609.01437v1
17123," In this work, we propose a multi-layer market for vehicle-to-grid energy
trading. In the macro layer, we consider a double auction mechanism, under
which the utility company acts as the auctioneer and energy buyers and sellers
interact. This double auction mechanism is strategy-proof and converges
asymptotically. In the micro layer, the aggregators, which are the sellers in
the macro layer, are paid with commissions to sell the energy of plug-in hybrid
electric vehicles (PHEVs) and to maximize their utilities. We analyze the
interaction between the macro and micro layers and study some simplified cases.
Depending on the elasticity of supply and demand, the utility is analyzed under
different scenarios. Simulation results show that our approach can
significantly increase the utility of PHEVs.
",longbo huang,,2016.0,10.1109/INFCOMW.2012.6193525,"IEEE INFOCOM Workshop on Green Networking and Smart Grids (CCSES),
Mar 2012, Orlando, Florida, United States. pp.85 - 90",Lam2016,True,,arXiv,Not available,"A multi-layer market for vehicle-to-grid energy trading in the smart
grid",4b313c31ee1a0701844226cd0863f8c2,http://arxiv.org/abs/1609.01437v1
17124," In this work, we propose a multi-layer market for vehicle-to-grid energy
trading. In the macro layer, we consider a double auction mechanism, under
which the utility company acts as the auctioneer and energy buyers and sellers
interact. This double auction mechanism is strategy-proof and converges
asymptotically. In the micro layer, the aggregators, which are the sellers in
the macro layer, are paid with commissions to sell the energy of plug-in hybrid
electric vehicles (PHEVs) and to maximize their utilities. We analyze the
interaction between the macro and micro layers and study some simplified cases.
Depending on the elasticity of supply and demand, the utility is analyzed under
different scenarios. Simulation results show that our approach can
significantly increase the utility of PHEVs.
",alonso silva,,2016.0,10.1109/INFCOMW.2012.6193525,"IEEE INFOCOM Workshop on Green Networking and Smart Grids (CCSES),
Mar 2012, Orlando, Florida, United States. pp.85 - 90",Lam2016,True,,arXiv,Not available,"A multi-layer market for vehicle-to-grid energy trading in the smart
grid",4b313c31ee1a0701844226cd0863f8c2,http://arxiv.org/abs/1609.01437v1
17125," In this work, we propose a multi-layer market for vehicle-to-grid energy
trading. In the macro layer, we consider a double auction mechanism, under
which the utility company acts as the auctioneer and energy buyers and sellers
interact. This double auction mechanism is strategy-proof and converges
asymptotically. In the micro layer, the aggregators, which are the sellers in
the macro layer, are paid with commissions to sell the energy of plug-in hybrid
electric vehicles (PHEVs) and to maximize their utilities. We analyze the
interaction between the macro and micro layers and study some simplified cases.
Depending on the elasticity of supply and demand, the utility is analyzed under
different scenarios. Simulation results show that our approach can
significantly increase the utility of PHEVs.
",walid saad,,2016.0,10.1109/INFCOMW.2012.6193525,"IEEE INFOCOM Workshop on Green Networking and Smart Grids (CCSES),
Mar 2012, Orlando, Florida, United States. pp.85 - 90",Lam2016,True,,arXiv,Not available,"A multi-layer market for vehicle-to-grid energy trading in the smart
grid",4b313c31ee1a0701844226cd0863f8c2,http://arxiv.org/abs/1609.01437v1
17127," Recent developments in quantum computation and quantum information
theory allow the scope of game theory to be extended to the quantum world. The
authors have recently proposed a quantum description of financial markets in
terms of quantum game theory. The paper contains an analysis of such markets
showing that there would be an advantage in using quantum computers and
quantum strategies.
",jan sladkowski,,2003.0,,arXiv,Piotrowski2003,True,,arXiv,Not available,Quantum computer: an appliance for playing market games,53e1f2dea8335a2a322383370a54f493,http://arxiv.org/abs/quant-ph/0305017v1
17129," Blockchain has recently been applied in many applications such as bitcoin,
smart grid, and Internet of Things (IoT) as a public ledger of transactions.
However, the use of blockchain in mobile environments is still limited because
the mining process consumes too much computing and energy resources on mobile
devices. Edge computing offered by the Edge Computing Service Provider can be
adopted as a viable solution for offloading the mining tasks from the mobile
devices, i.e., the miners, in the mobile blockchain environment. However, how
to design a mechanism for edge resource allocation that maximizes the revenue
for the Edge Computing Service Provider while ensuring incentive
compatibility and individual rationality remains an open problem. In this paper, we
develop an optimal auction based on deep learning for the edge resource
allocation. Specifically, we construct a multi-layer neural network
architecture based on an analytical solution of the optimal auction. The neural
networks first perform monotone transformations of the miners' bids. Then, they
calculate allocation and conditional payment rules for the miners. We use
the miners' valuations as training data to adjust the parameters of the
neural networks so as to optimize the loss function, which is the expected,
negated revenue of the Edge Computing Service Provider. We present
experimental results confirming the benefits of using deep learning to derive
the optimal auction for mobile blockchain, achieving high revenue.
",nguyen luong,,2017.0,,arXiv,Luong2017,True,,arXiv,Not available,"Optimal Auction For Edge Computing Resource Management in Mobile
Blockchain Networks: A Deep Learning Approach",a6cbb15dbd9c5cc471ceac74173011c9,http://arxiv.org/abs/1711.02844v2
17130," Blockchain has recently been applied in many applications such as bitcoin,
smart grid, and Internet of Things (IoT) as a public ledger of transactions.
However, the use of blockchain in mobile environments is still limited because
the mining process consumes too much computing and energy resources on mobile
devices. Edge computing offered by the Edge Computing Service Provider can be
adopted as a viable solution for offloading the mining tasks from the mobile
devices, i.e., the miners, in the mobile blockchain environment. However, how
to design a mechanism for edge resource allocation that maximizes the revenue
for the Edge Computing Service Provider while ensuring incentive
compatibility and individual rationality remains an open problem. In this paper, we
develop an optimal auction based on deep learning for the edge resource
allocation. Specifically, we construct a multi-layer neural network
architecture based on an analytical solution of the optimal auction. The neural
networks first perform monotone transformations of the miners' bids. Then, they
calculate allocation and conditional payment rules for the miners. We use
the miners' valuations as training data to adjust the parameters of the
neural networks so as to optimize the loss function, which is the expected,
negated revenue of the Edge Computing Service Provider. We present
experimental results confirming the benefits of using deep learning to derive
the optimal auction for mobile blockchain, achieving high revenue.
",zehui xiong,,2017.0,,arXiv,Luong2017,True,,arXiv,Not available,"Optimal Auction For Edge Computing Resource Management in Mobile
Blockchain Networks: A Deep Learning Approach",a6cbb15dbd9c5cc471ceac74173011c9,http://arxiv.org/abs/1711.02844v2
17131," Blockchain has recently been applied in many applications such as bitcoin,
smart grid, and Internet of Things (IoT) as a public ledger of transactions.
However, the use of blockchain in mobile environments is still limited because
the mining process consumes too much computing and energy resources on mobile
devices. Edge computing offered by the Edge Computing Service Provider can be
adopted as a viable solution for offloading the mining tasks from the mobile
devices, i.e., the miners, in the mobile blockchain environment. However, how
to design a mechanism for edge resource allocation that maximizes the revenue
for the Edge Computing Service Provider while ensuring incentive
compatibility and individual rationality remains an open problem. In this paper, we
develop an optimal auction based on deep learning for the edge resource
allocation. Specifically, we construct a multi-layer neural network
architecture based on an analytical solution of the optimal auction. The neural
networks first perform monotone transformations of the miners' bids. Then, they
calculate allocation and conditional payment rules for the miners. We use
the miners' valuations as training data to adjust the parameters of the
neural networks so as to optimize the loss function, which is the expected,
negated revenue of the Edge Computing Service Provider. We present
experimental results confirming the benefits of using deep learning to derive
the optimal auction for mobile blockchain, achieving high revenue.
",ping wang,,2017.0,,arXiv,Luong2017,True,,arXiv,Not available,"Optimal Auction For Edge Computing Resource Management in Mobile
Blockchain Networks: A Deep Learning Approach",a6cbb15dbd9c5cc471ceac74173011c9,http://arxiv.org/abs/1711.02844v2
17132," Blockchain has recently been applied in many applications such as bitcoin,
smart grid, and Internet of Things (IoT) as a public ledger of transactions.
However, the use of blockchain in mobile environments is still limited because
the mining process consumes excessive computing and energy resources on mobile
devices. Edge computing offered by the Edge Computing Service Provider can be
adopted as a viable solution for offloading the mining tasks from the mobile
devices, i.e., miners, in the mobile blockchain environment. However, designing
a mechanism for edge resource allocation that maximizes the revenue for the
Edge Computing Service Provider while ensuring incentive compatibility and
individual rationality is still an open problem. In this paper, we
develop an optimal auction based on deep learning for the edge resource
allocation. Specifically, we construct a multi-layer neural network
architecture based on an analytical solution of the optimal auction. The neural
networks first perform monotone transformations of the miners' bids. Then, they
calculate allocation and conditional payment rules for the miners. We use
valuations of the miners as the training data to adjust the parameters of the
neural networks so as to optimize the loss function, which is the expected
negated revenue of the Edge Computing Service Provider. We show
experimental results to confirm the benefits of using deep learning for
deriving the optimal auction for mobile blockchain with high revenue.
",dusit niyato,,2017.0,,arXiv,Luong2017,True,,arXiv,Not available,"Optimal Auction For Edge Computing Resource Management in Mobile
Blockchain Networks: A Deep Learning Approach",a6cbb15dbd9c5cc471ceac74173011c9,http://arxiv.org/abs/1711.02844v2
17133," We consider the problem of a single seller repeatedly selling a single item
to a single buyer (specifically, the buyer has a value drawn fresh from known
distribution $D$ in every round). Prior work assumes that the buyer is fully
rational and will perfectly reason about how their bids today affect the
seller's decisions tomorrow. In this work we initiate a different direction:
the buyer simply runs a no-regret learning algorithm over possible bids. We
provide a fairly complete characterization of optimal auctions for the seller
in this domain. Specifically:
- If the buyer bids according to EXP3 (or any ""mean-based"" learning
algorithm), then the seller can extract expected revenue arbitrarily close to
the expected welfare. This auction is independent of the buyer's valuation $D$,
but somewhat unnatural as it is sometimes in the buyer's interest to overbid.
- There exists a learning algorithm $\mathcal{A}$ such that if the buyer bids
according to $\mathcal{A}$, then the optimal strategy for the seller is simply
to post the Myerson reserve for $D$ every round.
- If the buyer bids according to EXP3 (or any ""mean-based"" learning
algorithm), but the seller is restricted to ""natural"" auction formats where
overbidding is dominated (e.g., Generalized First-Price or Generalized
Second-Price), then the optimal strategy for the seller is a pay-your-bid
format with decreasing reserves over time. Moreover,
the seller's optimal achievable revenue is characterized by a linear program,
and can be unboundedly better than the best truthful auction yet simultaneously
unboundedly worse than the expected welfare.
",mark braverman,,2017.0,,arXiv,Braverman2017,True,,arXiv,Not available,Selling to a No-Regret Buyer,afa9025a209f827cc164357e25036e37,http://arxiv.org/abs/1711.09176v1
17134," We consider the problem of a single seller repeatedly selling a single item
to a single buyer (specifically, the buyer has a value drawn fresh from known
distribution $D$ in every round). Prior work assumes that the buyer is fully
rational and will perfectly reason about how their bids today affect the
seller's decisions tomorrow. In this work we initiate a different direction:
the buyer simply runs a no-regret learning algorithm over possible bids. We
provide a fairly complete characterization of optimal auctions for the seller
in this domain. Specifically:
- If the buyer bids according to EXP3 (or any ""mean-based"" learning
algorithm), then the seller can extract expected revenue arbitrarily close to
the expected welfare. This auction is independent of the buyer's valuation $D$,
but somewhat unnatural as it is sometimes in the buyer's interest to overbid.
- There exists a learning algorithm $\mathcal{A}$ such that if the buyer bids
according to $\mathcal{A}$, then the optimal strategy for the seller is simply
to post the Myerson reserve for $D$ every round.
- If the buyer bids according to EXP3 (or any ""mean-based"" learning
algorithm), but the seller is restricted to ""natural"" auction formats where
overbidding is dominated (e.g., Generalized First-Price or Generalized
Second-Price), then the optimal strategy for the seller is a pay-your-bid
format with decreasing reserves over time. Moreover,
the seller's optimal achievable revenue is characterized by a linear program,
and can be unboundedly better than the best truthful auction yet simultaneously
unboundedly worse than the expected welfare.
",jieming mao,,2017.0,,arXiv,Braverman2017,True,,arXiv,Not available,Selling to a No-Regret Buyer,afa9025a209f827cc164357e25036e37,http://arxiv.org/abs/1711.09176v1
17135," We consider the problem of a single seller repeatedly selling a single item
to a single buyer (specifically, the buyer has a value drawn fresh from known
distribution $D$ in every round). Prior work assumes that the buyer is fully
rational and will perfectly reason about how their bids today affect the
seller's decisions tomorrow. In this work we initiate a different direction:
the buyer simply runs a no-regret learning algorithm over possible bids. We
provide a fairly complete characterization of optimal auctions for the seller
in this domain. Specifically:
- If the buyer bids according to EXP3 (or any ""mean-based"" learning
algorithm), then the seller can extract expected revenue arbitrarily close to
the expected welfare. This auction is independent of the buyer's valuation $D$,
but somewhat unnatural as it is sometimes in the buyer's interest to overbid.
- There exists a learning algorithm $\mathcal{A}$ such that if the buyer bids
according to $\mathcal{A}$, then the optimal strategy for the seller is simply
to post the Myerson reserve for $D$ every round.
- If the buyer bids according to EXP3 (or any ""mean-based"" learning
algorithm), but the seller is restricted to ""natural"" auction formats where
overbidding is dominated (e.g., Generalized First-Price or Generalized
Second-Price), then the optimal strategy for the seller is a pay-your-bid
format with decreasing reserves over time. Moreover,
the seller's optimal achievable revenue is characterized by a linear program,
and can be unboundedly better than the best truthful auction yet simultaneously
unboundedly worse than the expected welfare.
",jon schneider,,2017.0,,arXiv,Braverman2017,True,,arXiv,Not available,Selling to a No-Regret Buyer,afa9025a209f827cc164357e25036e37,http://arxiv.org/abs/1711.09176v1
17136," We consider the problem of a single seller repeatedly selling a single item
to a single buyer (specifically, the buyer has a value drawn fresh from known
distribution $D$ in every round). Prior work assumes that the buyer is fully
rational and will perfectly reason about how their bids today affect the
seller's decisions tomorrow. In this work we initiate a different direction:
the buyer simply runs a no-regret learning algorithm over possible bids. We
provide a fairly complete characterization of optimal auctions for the seller
in this domain. Specifically:
- If the buyer bids according to EXP3 (or any ""mean-based"" learning
algorithm), then the seller can extract expected revenue arbitrarily close to
the expected welfare. This auction is independent of the buyer's valuation $D$,
but somewhat unnatural as it is sometimes in the buyer's interest to overbid.
- There exists a learning algorithm $\mathcal{A}$ such that if the buyer bids
according to $\mathcal{A}$, then the optimal strategy for the seller is simply
to post the Myerson reserve for $D$ every round.
- If the buyer bids according to EXP3 (or any ""mean-based"" learning
algorithm), but the seller is restricted to ""natural"" auction formats where
overbidding is dominated (e.g., Generalized First-Price or Generalized
Second-Price), then the optimal strategy for the seller is a pay-your-bid
format with decreasing reserves over time. Moreover,
the seller's optimal achievable revenue is characterized by a linear program,
and can be unboundedly better than the best truthful auction yet simultaneously
unboundedly worse than the expected welfare.
",s. weinberg,,2017.0,,arXiv,Braverman2017,True,,arXiv,Not available,Selling to a No-Regret Buyer,afa9025a209f827cc164357e25036e37,http://arxiv.org/abs/1711.09176v1
17137," Mean-field games have been studied under the assumption of a very large
number of players. For such large systems, the basic idea is to approximate
large games by a stylized game model with a continuum of players. The approach
has been shown to be useful in some applications. However, the stylized game
model with a continuum of decision-makers is rarely observed in practice, and
the approximation proposed in the asymptotic regime is meaningless for networks
with few entities. In this paper we propose a mean-field framework that is
suitable not only for large systems but also for a small world with a small
number of entities. The applicability of the proposed framework is illustrated
through various examples, including a dynamic auction with asymmetric valuation
distributions and spiteful bidders.
",hamidou tembine,,2014.0,,arXiv,Tembine2014,True,,arXiv,Not available,Non-Asymptotic Mean-Field Games,a974311a1f2e668db7edfb68e9a175f0,http://arxiv.org/abs/1404.1449v1
17138," Supermodular games find significant applications in a variety of models,
especially in operations research and economic applications of noncooperative
game theory, and feature pure strategy Nash equilibria characterized as fixed
points of multivalued functions on complete lattices. Pure strategy Nash
equilibria of supermodular games are here approximated by resorting to the
theory of abstract interpretation, a well-established framework used
for designing static analyses of programming languages. This is obtained by
extending the theory of abstract interpretation in order to handle
approximations of multivalued functions and by providing some methods for
abstracting supermodular games, in order to obtain approximate Nash equilibria
which are shown to be correct within the abstract interpretation framework.
",francesco ranzato,,2015.0,,arXiv,Ranzato2015,True,,arXiv,Not available,Abstract Interpretation of Supermodular Games,fc25012572717a267b790712b1ddd2e2,http://arxiv.org/abs/1507.01423v1
17139," We present a quantum auction protocol using superpositions to represent bids
and distributed search to identify the winner(s). Measuring the final quantum
state gives the auction outcome while simultaneously destroying the
superposition. Thus non-winning bids are never revealed. Participants can use
entanglement to arrange for correlations among their bids, with the assurance
that this entanglement is not observable by others. The protocol is useful for
information hiding applications, such as partnership bidding with allocative
externality or concerns about revealing bidding preferences. The protocol
applies to a variety of auction types, e.g., first or second price, and to
auctions involving either a single item or arbitrary bundles of items (i.e.,
combinatorial auctions). We analyze the game-theoretical behavior of the
quantum protocol for the simple case of a sealed-bid quantum auction, and show
how a suitably designed adiabatic search reduces the possibilities for bidders
to game the auction. This design illustrates how incentive rather than
computational constraints affect quantum algorithm choices.
",tad hogg,,2007.0,,Intl. J. of Quantum Information 5:751-780 (2007),Hogg2007,True,,arXiv,Not available,Quantum Auctions,89ba49c524e6ae40e2d8a1de5b28991d,http://arxiv.org/abs/0704.0800v1
17140," We present a quantum auction protocol using superpositions to represent bids
and distributed search to identify the winner(s). Measuring the final quantum
state gives the auction outcome while simultaneously destroying the
superposition. Thus non-winning bids are never revealed. Participants can use
entanglement to arrange for correlations among their bids, with the assurance
that this entanglement is not observable by others. The protocol is useful for
information hiding applications, such as partnership bidding with allocative
externality or concerns about revealing bidding preferences. The protocol
applies to a variety of auction types, e.g., first or second price, and to
auctions involving either a single item or arbitrary bundles of items (i.e.,
combinatorial auctions). We analyze the game-theoretical behavior of the
quantum protocol for the simple case of a sealed-bid quantum auction, and show
how a suitably designed adiabatic search reduces the possibilities for bidders
to game the auction. This design illustrates how incentive rather than
computational constraints affect quantum algorithm choices.
",pavithra harsha,,2007.0,,Intl. J. of Quantum Information 5:751-780 (2007),Hogg2007,True,,arXiv,Not available,Quantum Auctions,89ba49c524e6ae40e2d8a1de5b28991d,http://arxiv.org/abs/0704.0800v1
17141," We present a quantum auction protocol using superpositions to represent bids
and distributed search to identify the winner(s). Measuring the final quantum
state gives the auction outcome while simultaneously destroying the
superposition. Thus non-winning bids are never revealed. Participants can use
entanglement to arrange for correlations among their bids, with the assurance
that this entanglement is not observable by others. The protocol is useful for
information hiding applications, such as partnership bidding with allocative
externality or concerns about revealing bidding preferences. The protocol
applies to a variety of auction types, e.g., first or second price, and to
auctions involving either a single item or arbitrary bundles of items (i.e.,
combinatorial auctions). We analyze the game-theoretical behavior of the
quantum protocol for the simple case of a sealed-bid quantum auction, and show
how a suitably designed adiabatic search reduces the possibilities for bidders
to game the auction. This design illustrates how incentive rather than
computational constraints affect quantum algorithm choices.
",kay-yut chen,,2007.0,,Intl. J. of Quantum Information 5:751-780 (2007),Hogg2007,True,,arXiv,Not available,Quantum Auctions,89ba49c524e6ae40e2d8a1de5b28991d,http://arxiv.org/abs/0704.0800v1
17142," Device-to-Device (D2D) communication is offering smart phone users a choice
to share files with each other without communicating with the cellular network.
In this paper, we discuss the behaviors of the two types of participants in
the D2D data transaction model from an economic point of view: the data buyers
who wish to buy a certain quantity of data, and the data sellers who wish to
sell data through the D2D network. The optimal price and purchasing strategies
are analyzed and derived based on game theory.
",jingjing wang,,2017.0,,arXiv,Wang2017,True,,arXiv,Not available,"Mobile Data Transactions in Device-to-Device Communication Networks:
Pricing and Auction",e9abb205121db3550aa2823e2d19656c,http://arxiv.org/abs/1701.00237v1
17143," Device-to-Device (D2D) communication is offering smart phone users a choice
to share files with each other without communicating with the cellular network.
In this paper, we discuss the behaviors of the two types of participants in
the D2D data transaction model from an economic point of view: the data buyers
who wish to buy a certain quantity of data, and the data sellers who wish to
sell data through the D2D network. The optimal price and purchasing strategies
are analyzed and derived based on game theory.
",chunxiao jiang,,2017.0,,arXiv,Wang2017,True,,arXiv,Not available,"Mobile Data Transactions in Device-to-Device Communication Networks:
Pricing and Auction",e9abb205121db3550aa2823e2d19656c,http://arxiv.org/abs/1701.00237v1
17144," Device-to-Device (D2D) communication is offering smart phone users a choice
to share files with each other without communicating with the cellular network.
In this paper, we discuss the behaviors of the two types of participants in
the D2D data transaction model from an economic point of view: the data buyers
who wish to buy a certain quantity of data, and the data sellers who wish to
sell data through the D2D network. The optimal price and purchasing strategies
are analyzed and derived based on game theory.
",zhi bie,,2017.0,,arXiv,Wang2017,True,,arXiv,Not available,"Mobile Data Transactions in Device-to-Device Communication Networks:
Pricing and Auction",e9abb205121db3550aa2823e2d19656c,http://arxiv.org/abs/1701.00237v1
17145," Device-to-Device (D2D) communication is offering smart phone users a choice
to share files with each other without communicating with the cellular network.
In this paper, we discuss the behaviors of the two types of participants in
the D2D data transaction model from an economic point of view: the data buyers
who wish to buy a certain quantity of data, and the data sellers who wish to
sell data through the D2D network. The optimal price and purchasing strategies
are analyzed and derived based on game theory.
",tony quek,,2017.0,,arXiv,Wang2017,True,,arXiv,Not available,"Mobile Data Transactions in Device-to-Device Communication Networks:
Pricing and Auction",e9abb205121db3550aa2823e2d19656c,http://arxiv.org/abs/1701.00237v1
17146," Device-to-Device (D2D) communication is offering smart phone users a choice
to share files with each other without communicating with the cellular network.
In this paper, we discuss the behaviors of the two types of participants in
the D2D data transaction model from an economic point of view: the data buyers
who wish to buy a certain quantity of data, and the data sellers who wish to
sell data through the D2D network. The optimal price and purchasing strategies
are analyzed and derived based on game theory.
",yong ren,,2017.0,,arXiv,Wang2017,True,,arXiv,Not available,"Mobile Data Transactions in Device-to-Device Communication Networks:
Pricing and Auction",e9abb205121db3550aa2823e2d19656c,http://arxiv.org/abs/1701.00237v1
17147," Myerson's seminal work provides a computationally efficient revenue-optimal
auction for selling one item to multiple bidders. Generalizing this work to
selling multiple items at once has been a central question in economics and
algorithmic game theory, but its complexity has remained poorly understood. We
answer this question by showing that a revenue-optimal auction in multi-item
settings cannot be found and implemented computationally efficiently, unless
ZPP contains P^#P. This is true even for a single additive bidder whose values
for the items are independently distributed on two rational numbers with
rational probabilities. Our result is very general: we show that it is hard to
compute any encoding of an optimal auction of any format (direct or indirect,
truthful or non-truthful) that can be implemented in expected polynomial time.
In particular, under widely believed complexity-theoretic assumptions,
revenue-optimization in very simple multi-item settings can only be tractably
approximated.
We note that our hardness result applies to randomized mechanisms in a very
simple setting, and is not an artifact of introducing combinatorial structure
to the problem by allowing correlation among item values, introducing
combinatorial valuations, or requiring the mechanism to be deterministic (whose
structure is readily combinatorial). Our proof is enabled by a
flow-interpretation of the solutions of an exponential-size linear program for
revenue maximization with an additional supermodularity constraint.
",constantinos daskalakis,,2012.0,,arXiv,Daskalakis2012,True,,arXiv,Not available,The Complexity of Optimal Mechanism Design,207ed970e98a0f23cde74338c6f537b6,http://arxiv.org/abs/1211.1703v2
17148," Myerson's seminal work provides a computationally efficient revenue-optimal
auction for selling one item to multiple bidders. Generalizing this work to
selling multiple items at once has been a central question in economics and
algorithmic game theory, but its complexity has remained poorly understood. We
answer this question by showing that a revenue-optimal auction in multi-item
settings cannot be found and implemented computationally efficiently, unless
ZPP contains P^#P. This is true even for a single additive bidder whose values
for the items are independently distributed on two rational numbers with
rational probabilities. Our result is very general: we show that it is hard to
compute any encoding of an optimal auction of any format (direct or indirect,
truthful or non-truthful) that can be implemented in expected polynomial time.
In particular, under widely believed complexity-theoretic assumptions,
revenue-optimization in very simple multi-item settings can only be tractably
approximated.
We note that our hardness result applies to randomized mechanisms in a very
simple setting, and is not an artifact of introducing combinatorial structure
to the problem by allowing correlation among item values, introducing
combinatorial valuations, or requiring the mechanism to be deterministic (whose
structure is readily combinatorial). Our proof is enabled by a
flow-interpretation of the solutions of an exponential-size linear program for
revenue maximization with an additional supermodularity constraint.
",alan deckelbaum,,2012.0,,arXiv,Daskalakis2012,True,,arXiv,Not available,The Complexity of Optimal Mechanism Design,207ed970e98a0f23cde74338c6f537b6,http://arxiv.org/abs/1211.1703v2
17149," We introduce string diagrams as a formal mathematical, graphical language to
represent, compose, program and reason about games. The language is well
established in quantum physics, quantum computing and quantum linguistics, with
the semantics given by category theory. We apply this language to the
game-theoretic setting and show examples of how to use it for some economic games
where we highlight the compositional nature of our higher-order game theory.
",jules hedges,,2016.0,,arXiv,Hedges2016,True,,arXiv,Not available,Compositionality and String Diagrams for Game Theory,a757b8414bb8a2c0b093952af97c8b6c,http://arxiv.org/abs/1604.06061v1
17150," Myerson's seminal work provides a computationally efficient revenue-optimal
auction for selling one item to multiple bidders. Generalizing this work to
selling multiple items at once has been a central question in economics and
algorithmic game theory, but its complexity has remained poorly understood. We
answer this question by showing that a revenue-optimal auction in multi-item
settings cannot be found and implemented computationally efficiently, unless
ZPP contains P^#P. This is true even for a single additive bidder whose values
for the items are independently distributed on two rational numbers with
rational probabilities. Our result is very general: we show that it is hard to
compute any encoding of an optimal auction of any format (direct or indirect,
truthful or non-truthful) that can be implemented in expected polynomial time.
In particular, under widely believed complexity-theoretic assumptions,
revenue-optimization in very simple multi-item settings can only be tractably
approximated.
We note that our hardness result applies to randomized mechanisms in a very
simple setting, and is not an artifact of introducing combinatorial structure
to the problem by allowing correlation among item values, introducing
combinatorial valuations, or requiring the mechanism to be deterministic (whose
structure is readily combinatorial). Our proof is enabled by a
flow-interpretation of the solutions of an exponential-size linear program for
revenue maximization with an additional supermodularity constraint.
",christos tzamos,,2012.0,,arXiv,Daskalakis2012,True,,arXiv,Not available,The Complexity of Optimal Mechanism Design,207ed970e98a0f23cde74338c6f537b6,http://arxiv.org/abs/1211.1703v2
17151," We study combinatorial auctions with bidders that exhibit the endowment effect.
In most of the previous work on cognitive biases in algorithmic game theory
(e.g., [Kleinberg and Oren, EC'14] and its follow-ups) the focus was on
analyzing the implications and mitigating their negative consequences. In
contrast, in this paper we show how in some cases cognitive biases can be
harnessed to obtain better outcomes.
Specifically, we study Walrasian equilibria in combinatorial markets. It is
well known that Walrasian equilibria exist only in limited settings, e.g., when
all valuations are gross substitutes, but fail to exist in more general
settings, e.g., when the valuations are submodular. We consider combinatorial
settings in which bidders exhibit the endowment effect, that is, their value
for items increases with ownership.
Our main result shows that when the valuations are submodular, even a mild
degree of endowment effect is sufficient to guarantee the existence of
Walrasian equilibria. In fact, we show that in contrast to Walrasian equilibria
with standard utility maximizing bidders -- in which the equilibrium allocation
must be efficient -- when bidders exhibit the endowment effect, any local optimum
can be an equilibrium allocation.
Our techniques reveal interesting connections between the LP relaxation of
combinatorial auctions and local maxima. We also provide lower bounds on the
intensity of the endowment effect that the bidders must have in order to
guarantee the existence of a Walrasian equilibrium in various settings.
",moshe babaioff,,2018.0,,arXiv,Babaioff2018,True,,arXiv,Not available,Combinatorial Auctions with Endowment Effect,0e3e75629390173277867257d71c0939,http://arxiv.org/abs/1805.10913v1
17152," We study combinatorial auctions with bidders that exhibit the endowment effect.
In most of the previous work on cognitive biases in algorithmic game theory
(e.g., [Kleinberg and Oren, EC'14] and its follow-ups) the focus was on
analyzing the implications and mitigating their negative consequences. In
contrast, in this paper we show how in some cases cognitive biases can be
harnessed to obtain better outcomes.
Specifically, we study Walrasian equilibria in combinatorial markets. It is
well known that Walrasian equilibria exist only in limited settings, e.g., when
all valuations are gross substitutes, but fail to exist in more general
settings, e.g., when the valuations are submodular. We consider combinatorial
settings in which bidders exhibit the endowment effect, that is, their value
for items increases with ownership.
Our main result shows that when the valuations are submodular, even a mild
degree of endowment effect is sufficient to guarantee the existence of
Walrasian equilibria. In fact, we show that in contrast to Walrasian equilibria
with standard utility maximizing bidders -- in which the equilibrium allocation
must be efficient -- when bidders exhibit the endowment effect, any local optimum
can be an equilibrium allocation.
Our techniques reveal interesting connections between the LP relaxation of
combinatorial auctions and local maxima. We also provide lower bounds on the
intensity of the endowment effect that the bidders must have in order to
guarantee the existence of a Walrasian equilibrium in various settings.
",shahar dobzinski,,2018.0,,arXiv,Babaioff2018,True,,arXiv,Not available,Combinatorial Auctions with Endowment Effect,0e3e75629390173277867257d71c0939,http://arxiv.org/abs/1805.10913v1
17153," We study combinatorial auctions with bidders that exhibit the endowment effect.
In most of the previous work on cognitive biases in algorithmic game theory
(e.g., [Kleinberg and Oren, EC'14] and its follow-ups) the focus was on
analyzing the implications and mitigating their negative consequences. In
contrast, in this paper we show how in some cases cognitive biases can be
harnessed to obtain better outcomes.
Specifically, we study Walrasian equilibria in combinatorial markets. It is
well known that Walrasian equilibria exist only in limited settings, e.g., when
all valuations are gross substitutes, but fail to exist in more general
settings, e.g., when the valuations are submodular. We consider combinatorial
settings in which bidders exhibit the endowment effect, that is, their value
for items increases with ownership.
Our main result shows that when the valuations are submodular, even a mild
degree of endowment effect is sufficient to guarantee the existence of
Walrasian equilibria. In fact, we show that in contrast to Walrasian equilibria
with standard utility maximizing bidders -- in which the equilibrium allocation
must be efficient -- when bidders exhibit the endowment effect, any local optimum
can be an equilibrium allocation.
Our techniques reveal interesting connections between the LP relaxation of
combinatorial auctions and local maxima. We also provide lower bounds on the
intensity of the endowment effect that the bidders must have in order to
guarantee the existence of a Walrasian equilibrium in various settings.
",sigal oren,,2018.0,,arXiv,Babaioff2018,True,,arXiv,Not available,Combinatorial Auctions with Endowment Effect,0e3e75629390173277867257d71c0939,http://arxiv.org/abs/1805.10913v1
17154," In this paper, spectrum access in cognitive radio networks is modeled as a
repeated auction game subject to monitoring and entry costs. For secondary
users, sensing costs are incurred as the result of primary users' activity.
Furthermore, each secondary user pays the cost of transmissions upon successful
bidding for a channel. Knowledge regarding other secondary users' activity is
limited due to the distributed nature of the network. The resulting formulation
is thus a dynamic game with incomplete information. In this paper, an efficient
bidding learning algorithm is proposed based on the outcome of past
transactions. As demonstrated through extensive simulations, the proposed
distributed scheme outperforms a myopic one-stage algorithm, and can achieve a
good balance between efficiency and fairness.
",zhu han,,2009.0,,arXiv,Han2009,True,,arXiv,Not available,"Repeated Auctions with Learning for Spectrum Access in Cognitive Radio
Networks",d4ffa0a1028bbe70d7035f6120451364,http://arxiv.org/abs/0910.2240v1
17155," In this paper, spectrum access in cognitive radio networks is modeled as a
repeated auction game subject to monitoring and entry costs. For secondary
users, sensing costs are incurred as the result of primary users' activity.
Furthermore, each secondary user pays the cost of transmissions upon successful
bidding for a channel. Knowledge regarding other secondary users' activity is
limited due to the distributed nature of the network. The resulting formulation
is thus a dynamic game with incomplete information. In this paper, an efficient
bidding learning algorithm is proposed based on the outcome of past
transactions. As demonstrated through extensive simulations, the proposed
distributed scheme outperforms a myopic one-stage algorithm, and can achieve a
good balance between efficiency and fairness.
",rong zheng,,2009.0,,arXiv,Han2009,True,,arXiv,Not available,"Repeated Auctions with Learning for Spectrum Access in Cognitive Radio
Networks",d4ffa0a1028bbe70d7035f6120451364,http://arxiv.org/abs/0910.2240v1
17156," In this paper, spectrum access in cognitive radio networks is modeled as a
repeated auction game subject to monitoring and entry costs. For secondary
users, sensing costs are incurred as the result of primary users' activity.
Furthermore, each secondary user pays the cost of transmissions upon successful
bidding for a channel. Knowledge regarding other secondary users' activity is
limited due to the distributed nature of the network. The resulting formulation
is thus a dynamic game with incomplete information. In this paper, an efficient
bidding learning algorithm is proposed based on the outcome of past
transactions. As demonstrated through extensive simulations, the proposed
distributed scheme outperforms a myopic one-stage algorithm, and can achieve a
good balance between efficiency and fairness.
",vincent poor,,2009.0,,arXiv,Han2009,True,,arXiv,Not available,"Repeated Auctions with Learning for Spectrum Access in Cognitive Radio
Networks",d4ffa0a1028bbe70d7035f6120451364,http://arxiv.org/abs/0910.2240v1
17157," Resource allocation is considered for cooperative transmissions in
multiple-relay wireless networks. Two auction mechanisms, SNR auctions and
power auctions, are proposed to distributively coordinate the allocation of
power among multiple relays. In the SNR auction, a user chooses the relay with
the lowest weighted price. In the power auction, a user may choose to use
multiple relays simultaneously, depending on the network topology and the
relays' prices. Sufficient conditions for the existence (in both auctions) and
uniqueness (in the SNR auction) of the Nash equilibrium are given. The fairness
of the SNR auction and efficiency of the power auction are further discussed.
It is also proven that users can achieve the unique Nash equilibrium
distributively via best response updates in a completely asynchronous manner.
",jianwei huang,,2008.0,10.1109/ICASSP.2008.4518870,arXiv,Huang2008,True,,arXiv,Not available,"Auction-based Resource Allocation for Multi-relay Asynchronous
Cooperative Networks",1256db38f0614260bb1861ed0911bdf4,http://arxiv.org/abs/0801.3097v1
17158," Resource allocation is considered for cooperative transmissions in
multiple-relay wireless networks. Two auction mechanisms, SNR auctions and
power auctions, are proposed to distributively coordinate the allocation of
power among multiple relays. In the SNR auction, a user chooses the relay with
the lowest weighted price. In the power auction, a user may choose to use
multiple relays simultaneously, depending on the network topology and the
relays' prices. Sufficient conditions for the existence (in both auctions) and
uniqueness (in the SNR auction) of the Nash equilibrium are given. The fairness
of the SNR auction and efficiency of the power auction are further discussed.
It is also proven that users can achieve the unique Nash equilibrium
distributively via best response updates in a completely asynchronous manner.
",zhu han,,2008.0,10.1109/ICASSP.2008.4518870,arXiv,Huang2008,True,,arXiv,Not available,"Auction-based Resource Allocation for Multi-relay Asynchronous
Cooperative Networks",1256db38f0614260bb1861ed0911bdf4,http://arxiv.org/abs/0801.3097v1
17159," Resource allocation is considered for cooperative transmissions in
multiple-relay wireless networks. Two auction mechanisms, SNR auctions and
power auctions, are proposed to distributively coordinate the allocation of
power among multiple relays. In the SNR auction, a user chooses the relay with
the lowest weighted price. In the power auction, a user may choose to use
multiple relays simultaneously, depending on the network topology and the
relays' prices. Sufficient conditions for the existence (in both auctions) and
uniqueness (in the SNR auction) of the Nash equilibrium are given. The fairness
of the SNR auction and efficiency of the power auction are further discussed.
It is also proven that users can achieve the unique Nash equilibrium
distributively via best response updates in a completely asynchronous manner.
",mung chiang,,2008.0,10.1109/ICASSP.2008.4518870,arXiv,Huang2008,True,,arXiv,Not available,"Auction-based Resource Allocation for Multi-relay Asynchronous
Cooperative Networks",1256db38f0614260bb1861ed0911bdf4,http://arxiv.org/abs/0801.3097v1
17160," We introduce string diagrams as a formal mathematical, graphical language to
represent, compose, program and reason about games. The language is well
established in quantum physics, quantum computing and quantum linguistics, with
the semantics given by category theory. We apply this language to the
game-theoretic setting and show examples of how to use it for some economic games
where we highlight the compositional nature of our higher-order game theory.
",evguenia shprits,,2016.0,,arXiv,Hedges2016,True,,arXiv,Not available,Compositionality and String Diagrams for Game Theory,a757b8414bb8a2c0b093952af97c8b6c,http://arxiv.org/abs/1604.06061v1
17161," Resource allocation is considered for cooperative transmissions in
multiple-relay wireless networks. Two auction mechanisms, SNR auctions and
power auctions, are proposed to distributively coordinate the allocation of
power among multiple relays. In the SNR auction, a user chooses the relay with
the lowest weighted price. In the power auction, a user may choose to use
multiple relays simultaneously, depending on the network topology and the
relays' prices. Sufficient conditions for the existence (in both auctions) and
uniqueness (in the SNR auction) of the Nash equilibrium are given. The fairness
of the SNR auction and efficiency of the power auction are further discussed.
It is also proven that users can achieve the unique Nash equilibrium
distributively via best response updates in a completely asynchronous manner.
",h. poor,,2008.0,10.1109/ICASSP.2008.4518870,arXiv,Huang2008,True,,arXiv,Not available,"Auction-based Resource Allocation for Multi-relay Asynchronous
Cooperative Networks",1256db38f0614260bb1861ed0911bdf4,http://arxiv.org/abs/0801.3097v1
17162," Consider an abstract social choice setting with incomplete information, where
the number of alternatives is large. Albeit natural, implementing VCG
mechanisms may not be feasible due to the prohibitive communication
constraints. However, if players restrict attention to a subset of the
alternatives, feasibility may be recovered.
This paper characterizes the class of subsets which induce an ex-post
equilibrium in the original game. It turns out that a crucial condition for
such subsets to exist is the existence of a type-independent optimal social
alternative, for each player. We further analyze the welfare implications of
these restrictions.
This work follows work by Holzman, Kfir-Dahav, Monderer and Tennenholtz
(2004) and Holzman and Monderer (2004) where similar analysis is done for
combinatorial auctions.
",rakefet rozen,,2012.0,,arXiv,Rozen2012,True,,arXiv,Not available,Ex-Post Equilibrium and VCG Mechanisms,7cc9f8615d32634f22fac2aa8c185de7,http://arxiv.org/abs/1211.3293v1
17163," Consider an abstract social choice setting with incomplete information, where
the number of alternatives is large. Albeit natural, implementing VCG
mechanisms may not be feasible due to the prohibitive communication
constraints. However, if players restrict attention to a subset of the
alternatives, feasibility may be recovered.
This paper characterizes the class of subsets which induce an ex-post
equilibrium in the original game. It turns out that a crucial condition for
such subsets to exist is the existence of a type-independent optimal social
alternative, for each player. We further analyze the welfare implications of
these restrictions.
This work follows work by Holzman, Kfir-Dahav, Monderer and Tennenholtz
(2004) and Holzman and Monderer (2004) where similar analysis is done for
combinatorial auctions.
",rann smorodinsky,,2012.0,,arXiv,Rozen2012,True,,arXiv,Not available,Ex-Post Equilibrium and VCG Mechanisms,7cc9f8615d32634f22fac2aa8c185de7,http://arxiv.org/abs/1211.3293v1
17164," This paper presents recent results from Mean Field Game theory underlying the
introduction of common noise that imposes to incorporate the distribution of
the agents as a state variable. Starting from the usual mean field games
equations introduced by J.M. Lasry and P.L. Lions and adapting them to games on
graphs, we introduce a partial differential equation, often referred to as the
Master equation, from which the MFG equations can be deduced. Then, this Master
equation can be reinterpreted using a global control problem inducing the same
behaviors as in the non-cooperative initial mean field game.
",olivier gueant,,2011.0,,arXiv,Guéant2011,True,,arXiv,Not available,"From infinity to one: The reduction of some mean field games to a global
control problem",be29370f31b04559b1b04d520a3e0a7b,http://arxiv.org/abs/1110.3441v2
17165," In this paper, we study turn-based quantitative multiplayer non zero-sum
games played on finite graphs with both reachability and safety objectives. In
this framework a player with a reachability objective aims at reaching his own
goal as soon as possible, whereas a player with a safety objective aims at
avoiding his bad set or, if impossible, delaying its visit as long as possible.
We prove the existence of Nash equilibria with finite memory in quantitative
multiplayer reachability/safety games. Moreover, we prove the existence of
finite-memory secure equilibria for quantitative two-player reachability games.
",thomas brihaye,,2012.0,,arXiv,Brihaye2012,True,,arXiv,Not available,On Equilibria in Quantitative Games with Reachability/Safety Objectives,1d360bdd1f27a06d72866220d086bd96,http://arxiv.org/abs/1205.4889v1
17166," In this paper, we study turn-based quantitative multiplayer non zero-sum
games played on finite graphs with both reachability and safety objectives. In
this framework a player with a reachability objective aims at reaching his own
goal as soon as possible, whereas a player with a safety objective aims at
avoiding his bad set or, if impossible, delaying its visit as long as possible.
We prove the existence of Nash equilibria with finite memory in quantitative
multiplayer reachability/safety games. Moreover, we prove the existence of
finite-memory secure equilibria for quantitative two-player reachability games.
",veronique bruyere,,2012.0,,arXiv,Brihaye2012,True,,arXiv,Not available,On Equilibria in Quantitative Games with Reachability/Safety Objectives,1d360bdd1f27a06d72866220d086bd96,http://arxiv.org/abs/1205.4889v1
17167," In this paper, we study turn-based quantitative multiplayer non zero-sum
games played on finite graphs with both reachability and safety objectives. In
this framework a player with a reachability objective aims at reaching his own
goal as soon as possible, whereas a player with a safety objective aims at
avoiding his bad set or, if impossible, delaying its visit as long as possible.
We prove the existence of Nash equilibria with finite memory in quantitative
multiplayer reachability/safety games. Moreover, we prove the existence of
finite-memory secure equilibria for quantitative two-player reachability games.
",julie pril,,2012.0,,arXiv,Brihaye2012,True,,arXiv,Not available,On Equilibria in Quantitative Games with Reachability/Safety Objectives,1d360bdd1f27a06d72866220d086bd96,http://arxiv.org/abs/1205.4889v1
17168," The ordinary game of Nim has a long history and is well-known in the area of
combinatorial game theory. The solution to the ordinary game of Nim has been
known for many years and lends itself to numerous other solutions to
combinatorial games. Nim was extended to graphs by taking a fixed graph with a
playing piece on a given vertex and assigning positive integer weight to the
edges that correspond to a pile of stones in the ordinary game of Nim. Players
move alternately from the playing piece across incident edges, removing weight
from edges as they move. This paper solves Nim on hypercubes in the unit weight
case completely. We briefly discuss the arbitrary weight case and its ties to
known results.
",lindsay erickson,,2012.0,,arXiv,Erickson2012,True,,arXiv,Not available,Nim on hypercubes,75074079658354ad7a4a62e7c3f53f96,http://arxiv.org/abs/1208.5496v1
17169," The ordinary game of Nim has a long history and is well-known in the area of
combinatorial game theory. The solution to the ordinary game of Nim has been
known for many years and lends itself to numerous other solutions to
combinatorial games. Nim was extended to graphs by taking a fixed graph with a
playing piece on a given vertex and assigning positive integer weight to the
edges that correspond to a pile of stones in the ordinary game of Nim. Players
move alternately from the playing piece across incident edges, removing weight
from edges as they move. This paper solves Nim on hypercubes in the unit weight
case completely. We briefly discuss the arbitrary weight case and its ties to
known results.
",warren shreve,,2012.0,,arXiv,Erickson2012,True,,arXiv,Not available,Nim on hypercubes,75074079658354ad7a4a62e7c3f53f96,http://arxiv.org/abs/1208.5496v1
17170," We extend the study of the iterated elimination of strictly dominated
strategies (IESDS) from Nash strategic games to a class of qualitative games.
Also in this case, the IESDS process leads us to a kind of 'rationalizable'
result. We define several types of dominance relation and game reduction and
establish conditions under which a unique and nonempty maximal reduction
exists. We generalize, in this way, some results due to Dufwenberg and Stegeman
(2002) and Apt (2007).
",monica patriche,,2013.0,,arXiv,Patriche2013,True,,arXiv,Not available,The reduction of qualitative games,af3378ce93e914522405b35de06a576d,http://arxiv.org/abs/1303.6976v1
17171," We introduce string diagrams as a formal mathematical, graphical language to
represent, compose, program and reason about games. The language is well
established in quantum physics, quantum computing and quantum linguistics, with
the semantics given by category theory. We apply this language to the
game-theoretic setting and show examples of how to use it for some economic games
where we highlight the compositional nature of our higher-order game theory.
",viktor winschel,,2016.0,,arXiv,Hedges2016,True,,arXiv,Not available,Compositionality and String Diagrams for Game Theory,a757b8414bb8a2c0b093952af97c8b6c,http://arxiv.org/abs/1604.06061v1
17172," We consider fractional linear programming production games for the
single-objective and multiobjective cases. We use the method of Chakraborty and
Gupta (2002) in order to transform the fractional linear programming problems
into linear programming problems. A cooperative game is attached and we prove
the non-emptiness of the core by using the duality theory from the linear
programming. In the multiobjective case, we give a characterization of the
Stable outcome of the associate cooperative game, which is balanced. We also
consider the cooperative game associated to an exchange economy with a finite
number of agents.
",monica patriche,,2013.0,,arXiv,Patriche2013,True,,arXiv,Not available,The core of the games with fractional linear utility functions,fc2e0ec9e309b123c2d54e0c10ca645a,http://arxiv.org/abs/1303.7041v1
17173," Two qubit quantum computations are viewed as two player, strictly competitive
games and a game-theoretic measure of optimality of these computations is
developed. To this end, the geometry of Hilbert space of quantum computations
is used to establish the equivalence of game-theoretic solution concepts of
Nash equilibrium and mini-max outcomes in games of this type, and quantum
mechanisms are designed for realizing these mini-max outcomes.
",faisal khan,,2013.0,10.1007/s11128-013-0640-7,arXiv,Khan2013,True,,arXiv,Not available,Mini-maximizing two qubit quantum computations,aee023740477da75c1fa5262b03b625d,http://arxiv.org/abs/1304.0748v2
17174," Two qubit quantum computations are viewed as two player, strictly competitive
games and a game-theoretic measure of optimality of these computations is
developed. To this end, the geometry of Hilbert space of quantum computations
is used to establish the equivalence of game-theoretic solution concepts of
Nash equilibrium and mini-max outcomes in games of this type, and quantum
mechanisms are designed for realizing these mini-max outcomes.
",simon phoenix,,2013.0,10.1007/s11128-013-0640-7,arXiv,Khan2013,True,,arXiv,Not available,Mini-maximizing two qubit quantum computations,aee023740477da75c1fa5262b03b625d,http://arxiv.org/abs/1304.0748v2
17175," In cooperative games, the core is the most popular solution concept, and its
properties are well known. In the classical setting of cooperative games, it is
generally assumed that all coalitions can form, i.e., they are all feasible. In
many situations, this assumption is too strong and one has to deal with some
unfeasible coalitions. Defining a game on a subcollection of the power set of
the set of players has many implications on the mathematical structure of the
core, depending on the precise structure of the subcollection of feasible
coalitions. Many authors have contributed to this topic, and we give a unified
view of these different results.
",michel grabisch,,2013.0,,Annals of Operations Research (2013) 33-64,Grabisch2013,True,,arXiv,Not available,The core of games on ordered structures and graphs,77e0b0509125c0f329c887d7902ea04b,http://arxiv.org/abs/1304.1075v1
17176," We extend the potential-based shaping method from Markov decision processes
to multi-player general-sum stochastic games. We prove that the Nash equilibria
in a stochastic game remain unchanged after potential-based shaping is applied
to the environment. The property of policy invariance provides a possible way
of speeding convergence when learning to play a stochastic game.
",xiaosong lu,,2014.0,10.1613/jair.3384,"Journal Of Artificial Intelligence Research, Volume 41, pages
397-406, 2011",Lu2014,True,,arXiv,Not available,"Policy Invariance under Reward Transformations for General-Sum
Stochastic Games",413837ab528e9f7fe5a8e466671bd557,http://arxiv.org/abs/1401.3907v1
17177," We extend the potential-based shaping method from Markov decision processes
to multi-player general-sum stochastic games. We prove that the Nash equilibria
in a stochastic game remain unchanged after potential-based shaping is applied
to the environment. The property of policy invariance provides a possible way
of speeding convergence when learning to play a stochastic game.
",howard schwartz,,2014.0,10.1613/jair.3384,"Journal Of Artificial Intelligence Research, Volume 41, pages
397-406, 2011",Lu2014,True,,arXiv,Not available,"Policy Invariance under Reward Transformations for General-Sum
Stochastic Games",413837ab528e9f7fe5a8e466671bd557,http://arxiv.org/abs/1401.3907v1
17178," We extend the potential-based shaping method from Markov decision processes
to multi-player general-sum stochastic games. We prove that the Nash equilibria
in a stochastic game remain unchanged after potential-based shaping is applied
to the environment. The property of policy invariance provides a possible way
of speeding convergence when learning to play a stochastic game.
",sidney jr,,2014.0,10.1613/jair.3384,"Journal Of Artificial Intelligence Research, Volume 41, pages
397-406, 2011",Lu2014,True,,arXiv,Not available,"Policy Invariance under Reward Transformations for General-Sum
Stochastic Games",413837ab528e9f7fe5a8e466671bd557,http://arxiv.org/abs/1401.3907v1
17179," We consider zero-sum stochastic games with perfect information and finitely
many states and actions. The payoff is computed by a payoff function which
associates to each infinite sequence of states and actions a real number. We
prove that if the payoff function is both shift-invariant and submixing,
then the game is half-positional, i.e. the first player has an optimal strategy
which is both deterministic and stationary. This result relies on the existence
of $\epsilon$-subgame-perfect equilibria in shift-invariant games, a second
contribution of the paper.
",hugo gimbert,,2014.0,,arXiv,Gimbert2014,True,,arXiv,Not available,"Two-Player Perfect-Information Shift-Invariant Submixing Stochastic
Games Are Half-Positional",d026d39b6058ae74f210b0b766b1f111,http://arxiv.org/abs/1401.6575v2
17180," We consider zero-sum stochastic games with perfect information and finitely
many states and actions. The payoff is computed by a payoff function which
associates to each infinite sequence of states and actions a real number. We
prove that if the payoff function is both shift-invariant and submixing,
then the game is half-positional, i.e. the first player has an optimal strategy
which is both deterministic and stationary. This result relies on the existence
of $\epsilon$-subgame-perfect equilibria in shift-invariant games, a second
contribution of the paper.
",edon kelmendi,,2014.0,,arXiv,Gimbert2014,True,,arXiv,Not available,"Two-Player Perfect-Information Shift-Invariant Submixing Stochastic
Games Are Half-Positional",d026d39b6058ae74f210b0b766b1f111,http://arxiv.org/abs/1401.6575v2
17181," In the early 1950s Lloyd Shapley proposed an ordinal and set-valued solution
concept for zero-sum games called \emph{weak saddle}. We show that all weak
saddles of a given zero-sum game are interchangeable and equivalent. As a
consequence, every such game possesses a unique set-based value.
",felix brandt,,2014.0,10.1016/j.geb.2015.12.010,Games and Economic Behavior (2016) 95:107-112,Brandt2014,True,,arXiv,Not available,An Ordinal Minimax Theorem,4b5f55a444b10659fcf4586ef0ef6156,http://arxiv.org/abs/1412.4198v5
17182," We introduce string diagrams as a formal mathematical, graphical language to
represent, compose, program and reason about games. The language is well
established in quantum physics, quantum computing and quantum linguistics, with
the semantics given by category theory. We apply this language to the
game-theoretic setting and show examples of how to use it for some economic games
where we highlight the compositional nature of our higher-order game theory.
",philipp zahn,,2016.0,,arXiv,Hedges2016,True,,arXiv,Not available,Compositionality and String Diagrams for Game Theory,a757b8414bb8a2c0b093952af97c8b6c,http://arxiv.org/abs/1604.06061v1
17183," In the early 1950s Lloyd Shapley proposed an ordinal and set-valued solution
concept for zero-sum games called \emph{weak saddle}. We show that all weak
saddles of a given zero-sum game are interchangeable and equivalent. As a
consequence, every such game possesses a unique set-based value.
",markus brill,,2014.0,10.1016/j.geb.2015.12.010,Games and Economic Behavior (2016) 95:107-112,Brandt2014,True,,arXiv,Not available,An Ordinal Minimax Theorem,4b5f55a444b10659fcf4586ef0ef6156,http://arxiv.org/abs/1412.4198v5
17184," In the early 1950s Lloyd Shapley proposed an ordinal and set-valued solution
concept for zero-sum games called \emph{weak saddle}. We show that all weak
saddles of a given zero-sum game are interchangeable and equivalent. As a
consequence, every such game possesses a unique set-based value.
",warut suksompong,,2014.0,10.1016/j.geb.2015.12.010,Games and Economic Behavior (2016) 95:107-112,Brandt2014,True,,arXiv,Not available,An Ordinal Minimax Theorem,4b5f55a444b10659fcf4586ef0ef6156,http://arxiv.org/abs/1412.4198v5
17185," We study hedonic coalition formation games in which cooperation among the
players is restricted by a graph structure: a subset of players can form a
coalition if and only if they are connected in the given graph. We investigate
the complexity of finding stable outcomes in such games, for several notions of
stability. In particular, we provide an efficient algorithm that finds an
individually stable partition for an arbitrary hedonic game on an acyclic
graph. We also introduce a new stability concept -in-neighbor stability- which
is tailored for our setting. We show that the problem of finding an in-neighbor
stable outcome admits a polynomial-time algorithm if the underlying graph is a
path, but is NP-hard for arbitrary trees even for additively separable hedonic
games; for symmetric additively separable games we obtain a PLS-hardness
result.
",ayumi igarashi,,2016.0,,arXiv,Igarashi2016,True,,arXiv,Not available,Hedonic Games with Graph-restricted Communication,dae2819904074acf0f517c81f1e6c47a,http://arxiv.org/abs/1602.05342v2
17186," We study hedonic coalition formation games in which cooperation among the
players is restricted by a graph structure: a subset of players can form a
coalition if and only if they are connected in the given graph. We investigate
the complexity of finding stable outcomes in such games, for several notions of
stability. In particular, we provide an efficient algorithm that finds an
individually stable partition for an arbitrary hedonic game on an acyclic
graph. We also introduce a new stability concept -in-neighbor stability- which
is tailored for our setting. We show that the problem of finding an in-neighbor
stable outcome admits a polynomial-time algorithm if the underlying graph is a
path, but is NP-hard for arbitrary trees even for additively separable hedonic
games; for symmetric additively separable games we obtain a PLS-hardness
result.
",edith elkind,,2016.0,,arXiv,Igarashi2016,True,,arXiv,Not available,Hedonic Games with Graph-restricted Communication,dae2819904074acf0f517c81f1e6c47a,http://arxiv.org/abs/1602.05342v2
17187," We model parking in urban centers as a set of parallel queues and overlay a
game theoretic structure that allows us to compare the user-selected (Nash)
equilibrium to the socially optimal equilibrium. We model arriving drivers as
utility maximizers and consider the game in which observing the queue length is
free as well as the game in which drivers must pay to observe the queue length.
In both games, drivers must decide between balking and joining. We compare the
Nash induced welfare to the socially optimal welfare. We find that gains to
welfare do not require full information penetration---meaning, for social
welfare to increase, not everyone needs to pay to observe. Through simulation,
we explore a more complex scenario where drivers decide, based on the queueing game,
whether or not to enter a collection of queues over a network. We examine the
occupancy-congestion relationship, an important relationship for determining
the impact of parking resources on overall traffic congestion. Our simulated
models use parameters informed by real-world data collected by the Seattle
Department of Transportation.
",lillian ratliff,,2016.0,,arXiv,Ratliff2016,True,,arXiv,Not available,To Observe or Not to Observe: Queuing Game Framework for Urban Parking,5faa743bc8c828896fb2bd558a5f851c,http://arxiv.org/abs/1603.08995v1
17188," We model parking in urban centers as a set of parallel queues and overlay a
game theoretic structure that allows us to compare the user-selected (Nash)
equilibrium to the socially optimal equilibrium. We model arriving drivers as
utility maximizers and consider the game in which observing the queue length is
free as well as the game in which drivers must pay to observe the queue length.
In both games, drivers must decide between balking and joining. We compare the
Nash induced welfare to the socially optimal welfare. We find that gains to
welfare do not require full information penetration---meaning, for social
welfare to increase, not everyone needs to pay to observe. Through simulation,
we explore a more complex scenario where drivers decide, based on the queueing game,
whether or not to enter a collection of queues over a network. We examine the
occupancy-congestion relationship, an important relationship for determining
the impact of parking resources on overall traffic congestion. Our simulated
models use parameters informed by real-world data collected by the Seattle
Department of Transportation.
",chase dowling,,2016.0,,arXiv,Ratliff2016,True,,arXiv,Not available,To Observe or Not to Observe: Queuing Game Framework for Urban Parking,5faa743bc8c828896fb2bd558a5f851c,http://arxiv.org/abs/1603.08995v1
17189," We model parking in urban centers as a set of parallel queues and overlay a
game theoretic structure that allows us to compare the user-selected (Nash)
equilibrium to the socially optimal equilibrium. We model arriving drivers as
utility maximizers and consider the game in which observing the queue length is
free as well as the game in which drivers must pay to observe the queue length.
In both games, drivers must decide between balking and joining. We compare the
Nash induced welfare to the socially optimal welfare. We find that gains to
welfare do not require full information penetration---meaning, for social
welfare to increase, not everyone needs to pay to observe. Through simulation,
we explore a more complex scenario where drivers decide, based on the queueing game,
whether or not to enter a collection of queues over a network. We examine the
occupancy-congestion relationship, an important relationship for determining
the impact of parking resources on overall traffic congestion. Our simulated
models use parameters informed by real-world data collected by the Seattle
Department of Transportation.
",eric mazumdar,,2016.0,,arXiv,Ratliff2016,True,,arXiv,Not available,To Observe or Not to Observe: Queuing Game Framework for Urban Parking,5faa743bc8c828896fb2bd558a5f851c,http://arxiv.org/abs/1603.08995v1
17190," We model parking in urban centers as a set of parallel queues and overlay a
game theoretic structure that allows us to compare the user-selected (Nash)
equilibrium to the socially optimal equilibrium. We model arriving drivers as
utility maximizers and consider the game in which observing the queue length is
free as well as the game in which drivers must pay to observe the queue length.
In both games, drivers must decide between balking and joining. We compare the
Nash induced welfare to the socially optimal welfare. We find that gains to
welfare do not require full information penetration---meaning, for social
welfare to increase, not everyone needs to pay to observe. Through simulation,
we explore a more complex scenario where drivers decide, based on the queueing game,
whether or not to enter a collection of queues over a network. We examine the
occupancy-congestion relationship, an important relationship for determining
the impact of parking resources on overall traffic congestion. Our simulated
models use parameters informed by real-world data collected by the Seattle
Department of Transportation.
",baosen zhang,,2016.0,,arXiv,Ratliff2016,True,,arXiv,Not available,To Observe or Not to Observe: Queuing Game Framework for Urban Parking,5faa743bc8c828896fb2bd558a5f851c,http://arxiv.org/abs/1603.08995v1
17191," We study stochastic two-player turn-based games in which the objective of one
player is to ensure several infinite-horizon total reward objectives, while the
other player attempts to spoil at least one of the objectives. The games have
previously been shown not to be determined, and an approximation algorithm for
computing a Pareto curve has been given. The major drawback of the existing
algorithm is that it needs to compute Pareto curves for finite horizon
objectives (for increasing length of the horizon), and the size of these Pareto
curves can grow unboundedly, even when the infinite-horizon Pareto curve is
small. By adapting existing results, we first give an algorithm that computes
the Pareto curve for determined games. Then, as the main result of the paper,
we show that for the natural class of stopping games and when there are two
reward objectives, the problem of deciding whether a player can ensure
satisfaction of the objectives with given thresholds is decidable. The result
relies on an intricate and novel proof which shows that the Pareto curves contain
only finitely many points. As a consequence, we get that the two-objective
discounted-reward problem for an unrestricted class of stochastic games is
decidable.
",romain brenguier,,2016.0,,arXiv,Brenguier2016,True,,arXiv,Not available,Decidability Results for Multi-objective Stochastic Games,1d6b613c02c0a3e7e0f1b6c368717fff,http://arxiv.org/abs/1605.03811v1
17192," We study stochastic two-player turn-based games in which the objective of one
player is to ensure several infinite-horizon total reward objectives, while the
other player attempts to spoil at least one of the objectives. The games have
previously been shown not to be determined, and an approximation algorithm for
computing a Pareto curve has been given. The major drawback of the existing
algorithm is that it needs to compute Pareto curves for finite horizon
objectives (for increasing length of the horizon), and the size of these Pareto
curves can grow unboundedly, even when the infinite-horizon Pareto curve is
small. By adapting existing results, we first give an algorithm that computes
the Pareto curve for determined games. Then, as the main result of the paper,
we show that for the natural class of stopping games and when there are two
reward objectives, the problem of deciding whether a player can ensure
satisfaction of the objectives with given thresholds is decidable. The result
relies on an intricate and novel proof which shows that the Pareto curves contain
only finitely many points. As a consequence, we get that the two-objective
discounted-reward problem for an unrestricted class of stochastic games is
decidable.
",vojtech forejt,,2016.0,,arXiv,Brenguier2016,True,,arXiv,Not available,Decidability Results for Multi-objective Stochastic Games,1d6b613c02c0a3e7e0f1b6c368717fff,http://arxiv.org/abs/1605.03811v1
17193,"I and my brand names, as in iPhone and MySpace, are popular but poorly understood. Although there is an intuitive appeal to using pronouns to reference the consumer, little is known about why the naming tactic works and, in turn, the conditions under which each pronoun should be used. We propose a framework for consumer processing of such brand names, predicting that both I and my influence consumer preference under divergent conditions and psychological mechanisms. In referencing the self as an actor, I should induce narrative self-referencing, wherein one imagines oneself actively using the product. By contrast, my references the self as an owner, so we expect it to give rise to the more inert feeling of subjective ownership. These hypotheses were tested in an online experiment using a representative sample of US consumers. Findings indicate that I produces favorable consumer response via narrative self-referencing, but only when the root word of the brand is a verb (for example, iRead). Meanwhile, my produces favorable consumer response via feelings of subjective ownership, but only when the brand root word is a noun (for example, myReader). Mediation analyses support the proposed divergent psychological processes. Practical implications for branding are discussed, as are theoretical implications.
",luke kachersky,,2013.0,10.1057/bm.2012.61,Journal of Brand Management,Kachersky2013,Not available,,Nature,Not available,How personal pronouns influence brand name preference,eba40137a8ad09d7f4f86547395e2ee8,http://dx.doi.org/10.1057/bm.2012.61
17194,"I and my brand names, as in iPhone and MySpace, are popular but poorly understood. Although there is an intuitive appeal to using pronouns to reference the consumer, little is known about why the naming tactic works and, in turn, the conditions under which each pronoun should be used. We propose a framework for consumer processing of such brand names, predicting that both I and my influence consumer preference under divergent conditions and psychological mechanisms. In referencing the self as an actor, I should induce narrative self-referencing, wherein one imagines oneself actively using the product. By contrast, my references the self as an owner, so we expect it to give rise to the more inert feeling of subjective ownership. These hypotheses were tested in an online experiment using a representative sample of US consumers. Findings indicate that I produces favorable consumer response via narrative self-referencing, but only when the root word of the brand is a verb (for example, iRead). Meanwhile, my produces favorable consumer response via feelings of subjective ownership, but only when the brand root word is a noun (for example, myReader). Mediation analyses support the proposed divergent psychological processes. Practical implications for branding are discussed, as are theoretical implications.
",nicole palermo,,2013.0,10.1057/bm.2012.61,Journal of Brand Management,Kachersky2013,Not available,,Nature,Not available,How personal pronouns influence brand name preference,eba40137a8ad09d7f4f86547395e2ee8,http://dx.doi.org/10.1057/bm.2012.61
17195,"An effective Decision Support System (DSS) should help its users improve decision making in complex, information-rich environments. We present a feature gap analysis that shows that current decision support technologies lack important qualities for a new generation of agile business models that require easy, temporary integration across organisational boundaries. We enumerate these qualities as DSS Desiderata, properties that can contribute both effectiveness and flexibility to users in such environments. To address this gap, we describe a new design approach that enables users to compose decision behaviours from separate, configurable components, and allows dynamic construction of analysis and modelling tools from small, single-purpose evaluator services. The result is what we call an ‘evaluator service network’ that can easily be configured to test hypotheses and analyse the impact of various choices for elements of decision processes. We have implemented and tested this design in an interactive version of the MinneTAC trading agent, an agent designed for the Trading Agent Competition for Supply Chain Management.
",maria gini,,2010.0,10.1057/ejis.2010.24,European Journal of Information Systems,Collins2010,Not available,,Nature,Not available,Flexible decision support in dynamic inter-organisational networks,02fa88455c665ac55bd2b00324e9a0a2,http://dx.doi.org/10.1057/ejis.2010.24
17196,"This paper presents an analytical discussion of IMF conditionality based on the theory of special interest politics. We outline a simple political–economy model of special interest group politics, extended to include the interaction of the IMF with the government of a country making use of IMF resources. Conditional lending turns the IMF into a benevolent lobby that can exert beneficial impacts on the government's policy choices. In addition to addressing the international spillover effects of national economic policies, conditionality can help reduce policy inefficiencies generated by domestic conflicts of interest and limited ownership.
",alex mourmouras,,2004.0,10.1057/palgrave.ces.8100064,Comparative Economic Studies,Mayer2004,Not available,,Nature,Not available,IMF Conditionality and the Theory of Special Interest Politics1,327692dc30b5b89eb7bf157bad371fe3,http://dx.doi.org/10.1057/palgrave.ces.8100064
17197,"Philanthropic decision-making is important both for its potential to provide insight into human behaviour and for its economic significance. In recent years, investigations of charitable-giving behaviour have expanded substantially, including explorations from a variety of disciplinary perspectives such as economics, marketing, sociology, public administration, anthropology, evolutionary biology, political science and psychology. These investigations have resulted in a wealth of experimental results with each investigation accompanied by a discussion of potential theoretical implications. Most commonly, the various theories employed are helpful with regard to the narrow result of the investigation, but are not always useful in explaining the wider universe of results. Taking a comprehensive view of charitable-giving behaviour is thus limited to either employing a wide assortment of overlapping theoretical models, selectively applying each to fit individual phenomena, or merely referencing an ad hoc assortment of potential motivations. This circumstance suggests the value of a more unified, comprehensive approach to understanding the complete range of experimental and empirical results in charitable giving. This article proposes a comprehensive framework for philanthropic decision-making using a simple evolutionary approach incorporating interrelated fitness-enhancing strategies. The framework is then used in an extensive review of experimental and other empirical results in philanthropic decision-making. This review supports the framework proposition that giving depends on the tangibility of a gift’s impact on altruism (direct or code), reciprocity (transactional or friendship) and possessions relative to its alternatives. 
Five example principles of fundraising practice demonstrate the practical applicability of this proposition: advance the donor hero story (tangibility of direct or code altruism); make the charity like family (friendship reciprocity); provide compatible publicity and benefits (transactional reciprocity); minimize perceived loss (possessions); and manage decision avoidance (relative to its alternatives). Understanding philanthropic behaviour from this perspective provides explanation and guidance for a wide range of charitable-giving behaviours and fundraising practices even in areas less amenable to traditional experimental investigation, such as charitable bequests and major gifts.
",russell iii,Business and management,2017.0,10.1057/palcomms.2017.50,Palgrave Communications,III2017,Not available,,Nature,Not available,Natural philanthropy: a new evolutionary framework explaining diverse experimental results and informing fundraising practice,af14e0b32759a9e4d89ccd16b3ac2a2b,http://dx.doi.org/10.1057/palcomms.2017.50
17198,"This article considers the relationship between policy design and the pattern of interests attracted to the political arena. It examines legislation crafted by a large coalition of diverse interests that designed policy favorable to problem solving. This is the kind of policymaking that regime theorists identify with social production—but also one considered a rare circumstance. Previous attempts at passing similar legislation failed because the problem was defined narrowly and the political arena contained only two stakeholders, offering no opportunity to introduce a change that benefited one without harming the other. Success required redefining the problem and changing the nature of the political arena in a manner similar to that described by Schattschneider. By doing so, diverse interests discovered a way to benefit collectively. The present case therefore demonstrates the advantage of coupling the strategic insight of Schattschneider with the normative goals of regime theorists.
",b reno,,2007.0,10.1057/palgrave.polity.2300078,Polity,Reno2007,Not available,,Nature,Not available,A Floor Without a Ceiling: Balancing Normative and Strategic Goals in Policy Design*,1616409fe1ebb8efb4106d99af40c56e,http://dx.doi.org/10.1057/palgrave.polity.2300078
17199,"Thomas Schelling's two influential books, The Strategy of Conflict and Arms and Influence, remain foundational works for that thriving branch of realism that explores strategic bargaining. They illustrate the pitfalls of deduction in a political, cultural and ethical vacuum. In the real world, signals and reference points are only recognized and understood in context, and that context is a function of the history, culture and the prior experience of actors with one other. Schelling's works on bargaining — and many of the studies in the research program to which he contributed — are unwitting prisoners of a particular language and context: microeconomics and a parochial American Cold War view of the world. They lead Schelling to misrepresent the actual dynamics of the bargaining encounters (Cuba and Vietnam) that he uses to illustrate and justify his approach. Schelling's writing on bargaining is emblematic of a more general and still dominant American approach to the world that seeks, when possible, to substitute a combination of technical fixes and military muscle for political insight and diplomatic finesse.
",richard lebow,,2006.0,10.1057/palgrave.ip.8800164,International Politics,Lebow2006,Not available,,Nature,Not available,Reason Divorced from Reality: Thomas Schelling and Strategic Bargaining,3c37e649ee9de321c86272c17e5594a8,http://dx.doi.org/10.1057/palgrave.ip.8800164
17200,"A QUICK HISTORY OF THE USE OF COMPUTATION IN ECONOMICS The first digital computers were primarily used by scientists and engineers to solve mathematical equations numerically, that is, to approximate analytical solutions, most commonly for difficult-to-solve differential equations. The economics profession was also an early adopter of digital computing, and many of the first uses of computation by economists involved numerical solution of economic equations that were hard or impossible to solve analytically.
",rob axtell,,2008.0,10.1057/eej.2008.37,Eastern Economic Journal,Axtell2008,Not available,,Nature,Not available,The Rise of Computationally Enabled Economics: Introduction to the Special Issue of the Eastern Economic Journal on Agent-Based Modeling,fe1b4ee15931ca9acf9f956f09dd5d26,http://dx.doi.org/10.1057/eej.2008.37
17201,"This paper analyses the main consequences for the seaport efficiency of an access regime recently introduced by the Peruvian regulator for the public transportation infrastructure (OSITRAN). Its objective is to make competition viable for services that use, as input, transport infrastructure controlled by a monopolist. It is based on two theoretical contributions, the ‘Coase theorem’ and the ‘Demsetz approach’, and minimises the government intervention risk. Both port operators and providers of port services now have incentives to negotiate conditions of access, which permit competition, or to compete for an exclusivity right when this is desirable. If the parties do not reach an agreement within a reasonable time, the Regulator can enact an access mandate that may punish any of the parties, creating incentives for them to reach a Nash Equilibrium. The model seems to be generating productive and allocative efficiencies in port services, thus contributing to a potential reduction in Peru's maritime transport costs.
",lincoln flor,,2003.0,10.1057/palgrave.mel.9100075,Maritime Economics & Logistics,Flor2003,Not available,,Nature,Not available,Port Infrastructure: An Access Model for the Essential Facility,165f182b452ca6f76e60a9e6e2bb0b02,http://dx.doi.org/10.1057/palgrave.mel.9100075
17202,"This paper analyses the main consequences for the seaport efficiency of an access regime recently introduced by the Peruvian regulator for the public transportation infrastructure (OSITRAN). Its objective is to make competition viable for services that use, as input, transport infrastructure controlled by a monopolist. It is based on two theoretical contributions, the ‘Coase theorem’ and the ‘Demsetz approach’, and minimises the government intervention risk. Both port operators and providers of port services now have incentives to negotiate conditions of access, which permit competition, or to compete for an exclusivity right when this is desirable. If the parties do not reach an agreement within a reasonable time, the Regulator can enact an access mandate that may punish any of the parties, creating incentives for them to reach a Nash Equilibrium. The model seems to be generating productive and allocative efficiencies in port services, thus contributing to a potential reduction in Peru's maritime transport costs.
",enzo defilippi,,2003.0,10.1057/palgrave.mel.9100075,Maritime Economics & Logistics,Flor2003,Not available,,Nature,Not available,Port Infrastructure: An Access Model for the Essential Facility,165f182b452ca6f76e60a9e6e2bb0b02,http://dx.doi.org/10.1057/palgrave.mel.9100075
17203,"Subnational governments' access to credit is essential for smoothing out shocks to their revenue and expenditures, including those associated with large infrastructure projects. However, governments might pursue an unsustainable borrowing path unless they face appropriate incentives. Theoretically, credit markets can discourage excessive borrowing by charging risk premia rising with the level of indebtedness. We examine the robustness of this market mechanism under the evolving institutions of decentralised governance in a transitional country. Russia presents a perfect case for such analysis, for the market discipline was the only constraint on subnational borrowing there throughout the 1990s.
",andrey timofeev,,2007.0,10.1057/palgrave.ces.8100188,Comparative Economic Studies,Timofeev2007,Not available,,Nature,Not available,Market-Based Fiscal Discipline Under Evolving Decentralisation: The Case of Russian Regions1,64e7bc1c276be1c2a1a197f777beff9f,http://dx.doi.org/10.1057/palgrave.ces.8100188
17204,"Participants in international bargaining include different types (nation states, MNEs, NGOs, and multilateral organizations) and different numbers of these actors. Our theoretical contribution is to extend the bargaining power paradigm with a framework that models bargaining in this complex environment as a network. The configuration of supports and constraints among all participating actors in the bargaining environment is captured in the structure of the network. Antecedents of an actor's bargaining influence in the network include the actor's basis of power, network position, bargaining outcome preferences, and motivation to influence bargaining. The network bargaining power (NBP) model uses network theory to build upon and integrate insights from previous literature in a way that allows us to simultaneously apply these different insights to explain bargaining outcomes. These insights include effects of coalitions, strategies of less powerful actors leveraging more powerful allies, integration of international and domestic politics, and applicability to MNE-related issues beyond FDI. Finally, we illustrate NBP in a scenario of privatized utilities in the Dominican Republic, in which the bargaining power outcome predicted by NBP differs from that of the canonical bargaining power perspective.
",james nebus,,2009.0,10.1057/jibs.2009.43,Journal of International Business Studies,Nebus2009,Not available,,Nature,Not available,"Extending the bargaining power model: Explaining bargaining outcomes among nations, MNEs, and NGOs",888df2346689c9d799492df5299c5823,http://dx.doi.org/10.1057/jibs.2009.43
17205,"Participants in international bargaining include different types (nation states, MNEs, NGOs, and multilateral organizations) and different numbers of these actors. Our theoretical contribution is to extend the bargaining power paradigm with a framework that models bargaining in this complex environment as a network. The configuration of supports and constraints among all participating actors in the bargaining environment is captured in the structure of the network. Antecedents of an actor's bargaining influence in the network include the actor's basis of power, network position, bargaining outcome preferences, and motivation to influence bargaining. The network bargaining power (NBP) model uses network theory to build upon and integrate insights from previous literature in a way that allows us to simultaneously apply these different insights to explain bargaining outcomes. These insights include effects of coalitions, strategies of less powerful actors leveraging more powerful allies, integration of international and domestic politics, and applicability to MNE-related issues beyond FDI. Finally, we illustrate NBP in a scenario of privatized utilities in the Dominican Republic, in which the bargaining power outcome predicted by NBP differs from that of the canonical bargaining power perspective.
",carlos rufin,,2009.0,10.1057/jibs.2009.43,Journal of International Business Studies,Nebus2009,Not available,,Nature,Not available,"Extending the bargaining power model: Explaining bargaining outcomes among nations, MNEs, and NGOs",888df2346689c9d799492df5299c5823,http://dx.doi.org/10.1057/jibs.2009.43
17206,"We analyse a decentralized supply chain consisting of a supplier and a retailer, each with a satisficing objective, that is, to maximize the probability of achieving a predetermined target profit. The supply chain is examined under two types of commonly used contracts: linear tariff contracts (including wholesale price contracts as special cases) and buy-back contracts. First, we identify the Pareto-optimal contract(s) for each contractual form. In particular, it is shown that there is a unique wholesale price that is Pareto optimal for both contractual types. Second, we evaluate the performance of the Pareto-optimal contracts. In contrast to the well-known results for a supply chain with the traditional expected profit objectives, we show that wholesale price contracts can coordinate the supply chain whereas buy-back contracts cannot. This provides an additional justification for the popularity of wholesale price contracts besides their simplicities and lower administration costs.
",c shi,,2006.0,10.1057/palgrave.jors.2602186,Journal of the Operational Research Society,Shi2006,Not available,,Nature,Not available,Pareto-optimal contracts for a supply chain with satisficing objectives,e6b97a065eee369e735e71757ffa0947,http://dx.doi.org/10.1057/palgrave.jors.2602186
17207,"We study a sourcing problem where a buyer reserves capacity from a set of suppliers. The suppliers have finite capacity and their unit production cost is a decreasing function of their capacity, implying scale economies. The capacity of each supplier, and therefore the cost, is his private information. The buyer and other suppliers only know the probability distribution of the supplier’s capacity. The buyer’s demand is random and she has to decide how much capacity to reserve in advance from a subset of suppliers and how much to source from the marketplace. In this study we determine the buyer’s optimum reservation quantity and the size of the supply base. We find that the presence of such negative capacity-cost correlation leads to supply base reduction.
",tarun jain,,2015.0,10.1057/jors.2015.70,Journal of the Operational Research Society,Jain2015,Not available,,Nature,Not available,Sourcing under incomplete information and negative capacity-cost correlation,6a34c033c059e802cf82622972239129,http://dx.doi.org/10.1057/jors.2015.70
17208,"We study a sourcing problem where a buyer reserves capacity from a set of suppliers. The suppliers have finite capacity and their unit production cost is a decreasing function of their capacity, implying scale economies. The capacity of each supplier, and therefore the cost, is his private information. The buyer and other suppliers only know the probability distribution of the supplier’s capacity. The buyer’s demand is random and she has to decide how much capacity to reserve in advance from a subset of suppliers and how much to source from the marketplace. In this study we determine the buyer’s optimum reservation quantity and the size of the supply base. We find that the presence of such negative capacity-cost correlation leads to supply base reduction.
",jishnu hazra,,2015.0,10.1057/jors.2015.70,Journal of the Operational Research Society,Jain2015,Not available,,Nature,Not available,Sourcing under incomplete information and negative capacity-cost correlation,6a34c033c059e802cf82622972239129,http://dx.doi.org/10.1057/jors.2015.70
17209,"Small- and medium-sized enterprises (SMEs) play an important role in the European economy. A critical challenge faced by SME leaders, as a consequence of the continuing digital technology revolution, is how to optimally align business strategy with digital technology to fully leverage the potential offered by these technologies in pursuit of longevity and growth. There is a paucity of empirical research examining how e-leadership in SMEs drives successful alignment between business strategy and digital technology fostering longevity and growth. To address this gap, in this paper we develop an empirically derived e-leadership model. Initially we develop a theoretical model of e-leadership drawing on strategic alignment theory. This provides a theoretical foundation on how SMEs can harness digital technology in support of their business strategy enabling sustainable growth. An in-depth empirical study was undertaken interviewing 42 successful European SME leaders to validate, advance and substantiate our theoretically driven model. The outcome of the two stage process – inductive development of a theoretically driven e-leadership model and deductive advancement to develop a complete model through in-depth interviews with successful European SME leaders – is an e-leadership model with specific constructs fostering effective strategic alignment. The resulting diagnostic model enables SME decision makers to exercise effective e-leadership by creating productive alignment between business strategy and digital technology improving longevity and growth prospects.
",weizi li,,2016.0,10.1057/jit.2016.10,Journal of Information Technology,Li2016,Not available,,Nature,Not available,e-Leadership through strategic alignment: an empirical study of small- and medium-sized enterprises in the digital age,71f2081718dd7e05dcc921e86199f022,http://dx.doi.org/10.1057/jit.2016.10
17210,"Small- and medium-sized enterprises (SMEs) play an important role in the European economy. A critical challenge faced by SME leaders, as a consequence of the continuing digital technology revolution, is how to optimally align business strategy with digital technology to fully leverage the potential offered by these technologies in pursuit of longevity and growth. There is a paucity of empirical research examining how e-leadership in SMEs drives successful alignment between business strategy and digital technology fostering longevity and growth. To address this gap, in this paper we develop an empirically derived e-leadership model. Initially we develop a theoretical model of e-leadership drawing on strategic alignment theory. This provides a theoretical foundation on how SMEs can harness digital technology in support of their business strategy enabling sustainable growth. An in-depth empirical study was undertaken interviewing 42 successful European SME leaders to validate, advance and substantiate our theoretically driven model. The outcome of the two stage process – inductive development of a theoretically driven e-leadership model and deductive advancement to develop a complete model through in-depth interviews with successful European SME leaders – is an e-leadership model with specific constructs fostering effective strategic alignment. The resulting diagnostic model enables SME decision makers to exercise effective e-leadership by creating productive alignment between business strategy and digital technology improving longevity and growth prospects.
",kecheng liu,,2016.0,10.1057/jit.2016.10,Journal of Information Technology,Li2016,Not available,,Nature,Not available,e-Leadership through strategic alignment: an empirical study of small- and medium-sized enterprises in the digital age,71f2081718dd7e05dcc921e86199f022,http://dx.doi.org/10.1057/jit.2016.10
17211,"Small- and medium-sized enterprises (SMEs) play an important role in the European economy. A critical challenge faced by SME leaders, as a consequence of the continuing digital technology revolution, is how to optimally align business strategy with digital technology to fully leverage the potential offered by these technologies in pursuit of longevity and growth. There is a paucity of empirical research examining how e-leadership in SMEs drives successful alignment between business strategy and digital technology fostering longevity and growth. To address this gap, in this paper we develop an empirically derived e-leadership model. Initially we develop a theoretical model of e-leadership drawing on strategic alignment theory. This provides a theoretical foundation on how SMEs can harness digital technology in support of their business strategy enabling sustainable growth. An in-depth empirical study was undertaken interviewing 42 successful European SME leaders to validate, advance and substantiate our theoretically driven model. The outcome of the two stage process – inductive development of a theoretically driven e-leadership model and deductive advancement to develop a complete model through in-depth interviews with successful European SME leaders – is an e-leadership model with specific constructs fostering effective strategic alignment. The resulting diagnostic model enables SME decision makers to exercise effective e-leadership by creating productive alignment between business strategy and digital technology improving longevity and growth prospects.
",maksim belitski,,2016.0,10.1057/jit.2016.10,Journal of Information Technology,Li2016,Not available,,Nature,Not available,e-Leadership through strategic alignment: an empirical study of small- and medium-sized enterprises in the digital age,71f2081718dd7e05dcc921e86199f022,http://dx.doi.org/10.1057/jit.2016.10
17212,"Small- and medium-sized enterprises (SMEs) play an important role in the European economy. A critical challenge faced by SME leaders, as a consequence of the continuing digital technology revolution, is how to optimally align business strategy with digital technology to fully leverage the potential offered by these technologies in pursuit of longevity and growth. There is a paucity of empirical research examining how e-leadership in SMEs drives successful alignment between business strategy and digital technology fostering longevity and growth. To address this gap, in this paper we develop an empirically derived e-leadership model. Initially we develop a theoretical model of e-leadership drawing on strategic alignment theory. This provides a theoretical foundation on how SMEs can harness digital technology in support of their business strategy enabling sustainable growth. An in-depth empirical study was undertaken interviewing 42 successful European SME leaders to validate, advance and substantiate our theoretically driven model. The outcome of the two stage process – inductive development of a theoretically driven e-leadership model and deductive advancement to develop a complete model through in-depth interviews with successful European SME leaders – is an e-leadership model with specific constructs fostering effective strategic alignment. The resulting diagnostic model enables SME decision makers to exercise effective e-leadership by creating productive alignment between business strategy and digital technology improving longevity and growth prospects.
",abby ghobadian,,2016.0,10.1057/jit.2016.10,Journal of Information Technology,Li2016,Not available,,Nature,Not available,e-Leadership through strategic alignment: an empirical study of small- and medium-sized enterprises in the digital age,71f2081718dd7e05dcc921e86199f022,http://dx.doi.org/10.1057/jit.2016.10
17213,"Small- and medium-sized enterprises (SMEs) play an important role in the European economy. A critical challenge faced by SME leaders, as a consequence of the continuing digital technology revolution, is how to optimally align business strategy with digital technology to fully leverage the potential offered by these technologies in pursuit of longevity and growth. There is a paucity of empirical research examining how e-leadership in SMEs drives successful alignment between business strategy and digital technology fostering longevity and growth. To address this gap, in this paper we develop an empirically derived e-leadership model. Initially we develop a theoretical model of e-leadership drawing on strategic alignment theory. This provides a theoretical foundation on how SMEs can harness digital technology in support of their business strategy enabling sustainable growth. An in-depth empirical study was undertaken interviewing 42 successful European SME leaders to validate, advance and substantiate our theoretically driven model. The outcome of the two stage process – inductive development of a theoretically driven e-leadership model and deductive advancement to develop a complete model through in-depth interviews with successful European SME leaders – is an e-leadership model with specific constructs fostering effective strategic alignment. The resulting diagnostic model enables SME decision makers to exercise effective e-leadership by creating productive alignment between business strategy and digital technology improving longevity and growth prospects.
",nicholas o'regan,,2016.0,10.1057/jit.2016.10,Journal of Information Technology,Li2016,Not available,,Nature,Not available,e-Leadership through strategic alignment: an empirical study of small- and medium-sized enterprises in the digital age,71f2081718dd7e05dcc921e86199f022,http://dx.doi.org/10.1057/jit.2016.10
17214,"Long queues during holiday shopping events seem undesirable for both shoppers and retailers. However, the following article shows that, under some conditions, long queues benefit retailers for two reasons. First, long queues, by turning away high-time-cost shoppers, serve as a device of segmentation and targeting. Consequently, retailers deliver promotions only to low-time-cost shoppers. High-time-cost shoppers choose to purchase at a regular time (non-holiday shopping event) and pay the full price without having to make the wait. Second, longer queues prompt shoppers who stay in the line to buy more products. In addition, the article shows that shoppers tend to wait longer when price discounts are greater. Accounting for the above findings, this article provides a numerical solution to jointly optimizing retailers’ promotional and operational decisions on holiday promotional sales.
",chun qiu,,2015.0,10.1057/rpm.2015.46,Journal of Revenue and Pricing Management,Qiu2015,Not available,,Nature,Not available,Managing long queues for holiday sales shopping,512236111a4b383483898cc21fad0218,http://dx.doi.org/10.1057/rpm.2015.46
17215,"Long queues during holiday shopping events seem undesirable for both shoppers and retailers. However, the following article shows that, under some conditions, long queues benefit retailers for two reasons. First, long queues, by turning away high-time-cost shoppers, serve as a device of segmentation and targeting. Consequently, retailers deliver promotions only to low-time-cost shoppers. High-time-cost shoppers choose to purchase at a regular time (non-holiday shopping event) and pay the full price without having to make the wait. Second, longer queues prompt shoppers who stay in the line to buy more products. In addition, the article shows that shoppers tend to wait longer when price discounts are greater. Accounting for the above findings, this article provides a numerical solution to jointly optimizing retailers’ promotional and operational decisions on holiday promotional sales.
",wenqing zhang,,2015.0,10.1057/rpm.2015.46,Journal of Revenue and Pricing Management,Qiu2015,Not available,,Nature,Not available,Managing long queues for holiday sales shopping,512236111a4b383483898cc21fad0218,http://dx.doi.org/10.1057/rpm.2015.46
17216,"We analyse a decentralized supply chain consisting of a supplier and a retailer, each with a satisficing objective, that is, to maximize the probability of achieving a predetermined target profit. The supply chain is examined under two types of commonly used contracts: linear tariff contracts (including wholesale price contracts as special cases) and buy-back contracts. First, we identify the Pareto-optimal contract(s) for each contractual form. In particular, it is shown that there is a unique wholesale price that is Pareto optimal for both contractual types. Second, we evaluate the performance of the Pareto-optimal contracts. In contrast to the well-known results for a supply chain with the traditional expected profit objectives, we show that wholesale price contracts can coordinate the supply chain whereas buy-back contracts cannot. This provides an additional justification for the popularity of wholesale price contracts besides their simplicities and lower administration costs.
",b chen,,2006.0,10.1057/palgrave.jors.2602186,Journal of the Operational Research Society,Shi2006,Not available,,Nature,Not available,Pareto-optimal contracts for a supply chain with satisficing objectives,e6b97a065eee369e735e71757ffa0947,http://dx.doi.org/10.1057/palgrave.jors.2602186
17217,Limiting climate change without damaging the world economy depends on stronger and smarter market signals to regulate carbon dioxide,david victor,,2007.0,10.1038/scientificamerican1207-70,Scientific American,Victor2007,Not available,,Nature,Not available,Making Carbon Markets Work,e5839cc21c493a4b15ff50765c7b91b6,http://dx.doi.org/10.1038/scientificamerican1207-70
17218,Limiting climate change without damaging the world economy depends on stronger and smarter market signals to regulate carbon dioxide,danny cullenward,,2007.0,10.1038/scientificamerican1207-70,Scientific American,Victor2007,Not available,,Nature,Not available,Making Carbon Markets Work,e5839cc21c493a4b15ff50765c7b91b6,http://dx.doi.org/10.1038/scientificamerican1207-70
17219,"This paper studies under what circumstances creditworthy sovereign borrowers may be denied liquidity by rational creditors. It is shown that, when the creditor side of the market consists of many small investors, multiple rational expectations equilibria may exist. In one equilibrium, creditors' pessimistic expectations about the borrower's creditworthiness become self-fulfilling, and the borrower experiences a liquidity crisis. Multiple equilibria can be avoided by marketing the loan appropriately or by developing a reputation for following good policies. Liquidity problems can also arise because international bond markets are temporarily disrupted owing to events unrelated to the borrower's circumstances. Policy responses are discussed.
",enrica detragiache,,1996.0,10.2307/3867553,Staff Papers - International Monetary Fund,Detragiache1996,Not available,,Nature,Not available,Rational Liquidity Crises in the Sovereign Debt Market: In Search of a Theory,534c055ae2932e6a4996b0ed0b1cd9f4,http://dx.doi.org/10.2307/3867553
17220,"The speed at which the inflation rate adjusts to a reduction in nominal aggregate demand is the central issue in the ongoing debate over the cost of disinflation. Any reduction in the growth of nominal aggregate demand must be divided between a decline in the rate of inflation and a decline in the growth of output. When inflation adjusts sluggishly because of short-run inertia, a slowdown in the growth of nominal spending reduces output growth rather than inflation. In contrast, if inflation adjusts rapidly, the output cost of a disinflationary policy could be negligible. The paper addresses empirically the question of whether inflation adjusts rapidly or sluggishly to changes in nominal aggregate demand. Two alternative hypotheses are considered: the rational expectations market-clearing hypothesis (RE-MC) and the long-run natural rate hypothesis combined with short-run gradual adjustment of prices (NRH-GAP). One version of the RE-MC hypothesis states that the inflation rate responds instantaneously and equiproportionately to an anticipated change in nominal aggregate demand, implying that an expected disinflation could reduce inflation quickly with virtually no loss in output. According to the NRH-GAP hypothesis, however, inflation responds gradually in the short run and fully in the long run to nominal aggregate demand disturbances, whether these disturbances are anticipated or not. This hypothesis implies that the short-run output cost of disinflation could be substantial. A single reduced-form equation for the inflation rate is presented in the paper. The RE-MC and NRH-GAP hypotheses appear as special cases of this equation, which allows coefficient estimates to distinguish between the two. Tests of the two hypotheses, involving a general cross-country analysis of 13 developing countries, offer some support for the view that inflation may persist because of inertia, even if the anti-inflationary demand policy is anticipated. 
Thus, the control of inflation may be achieved only at the cost of a loss in output, since the adjustment of the inflation rate may be slow rather than instantaneous. This conclusion does not undermine the basic point that to cut inflation it is essential to reduce the growth rate of nominal aggregate demand or the money supply. Furthermore, the tests conducted are not capable of determining the response of the inflation rate to a credible change in the policy regime.
",ajai chopra,,1985.0,10.2307/3866744,Staff Papers - International Monetary Fund,Chopra1985,Not available,,Nature,Not available,The Speed of Adjustment of the Inflation Rate in Developing Countries: A Study of InertiaVitesse d'adjustement du taux d'inflation dans les pays en développement: étude de l'inertieVelocidad de ajuste de la tasa de inflación en los países en desarrollo: Examen de la inercia,ef2c2ae8d1630269acc6ca7e929ce8a6,http://dx.doi.org/10.2307/3866744
17221,"The paper discusses the cybernetic mechanisms whereby our institutions fail to translate the will of the people into effective policies, and those by which the will of the people is an attenuated version of human potentiality in the first place. A systemic model is developed to account for the observed phenomena in terms of a cybernetic theory of the management process, and this is then exemplified from current dilemmas facing humankind. The model is subsequently extended to encompass the theory of viable systems, the principle of self-reference, and a model of self-hood which promotes new concepts that close the model into its starting point of human potential. The total approach bears on the capability of individuals, groups, institutions, societies and nations to realize themselves, and to thwart the dangers in which our civilization is plunged.
",stafford beer,,1983.0,10.1057/jors.1983.173,Journal of the Operational Research Society,Beer1983,Not available,,Nature,Not available,The Will of the People,64ec263fbeb8c86177d4682a021f3daf,http://dx.doi.org/10.1057/jors.1983.173
17222,"Over the past 100 years, social science has generated a tremendous number of theories on the topics of individual and collective human behaviour. However, it has been much less successful at reconciling the innumerable inconsistencies and contradictions among these competing explanations, a situation that has not been resolved by recent advances in ‘computational social science’. In this Perspective, I argue that this ‘incoherency problem’ has been perpetuated by an historical emphasis in social science on the advancement of theories over the solution of practical problems. I argue that one way for social science to make progress is to adopt a more solution-oriented approach, starting first with a practical problem and then asking what theories (and methods) must be brought to bear to solve it. Finally, I conclude with a few suggestions regarding the sort of problems on which progress might be made and how we might organize ourselves to solve them.
",duncan watts,,2017.0,10.1038/s41562-016-0015,Nature Human Behaviour,Watts2017,Not available,,Nature,Not available,Should social science be more solution-oriented?,3d71041263608ea9a614d0fa55d2665d,http://dx.doi.org/10.1038/s41562-016-0015
17223,"In this paper, we extend the general definition of Customer Lifetime Value (CLV) to include the value of influence associated with a consumer's network or connections. We introduce Connected Customer Lifetime Value (CCLV) as the present value of the net contribution associated with purchases made by a customer, plus the present value of the net contribution made by other customers due to the influence of that customer. We highlight social media engagement as an important process associated with that influence, and propose Customer Social Media Value (CSMV) to represent the value derived through said media engagement. Examples to illustrate CSMV and CCLV are provided.
",bruce weinberg,,2011.0,10.1057/dddmp.2011.2,"Journal of Direct, Data and Digital Marketing Practice",Weinberg2011,Not available,,Nature,Not available,Connected customer lifetime value: The impact of social media,cf16d2132f301521c4a4ac0e7c8f9a7e,http://dx.doi.org/10.1057/dddmp.2011.2
17224,"Natural selection is conventionally assumed to favour the strong and selfish who maximize their own resources at the expense of others. But many biological systems, and especially human societies, are organized around altruistic, cooperative interactions. How can natural selection promote unselfish behaviour? Various mechanisms have been proposed, and a rich analysis of indirect reciprocity has recently emerged: I help you and somebody else helps me. The evolution of cooperation by indirect reciprocity leads to reputation building, morality judgement and complex social interactions with ever-increasing cognitive demands.
",martin nowak,,2005.0,10.1038/nature04131,Nature,Nowak2005,Not available,,Nature,Not available,Evolution of indirect reciprocity,d77c8fecce82a3d7c3eecf49e2d0e5aa,http://dx.doi.org/10.1038/nature04131
17225,"Natural selection is conventionally assumed to favour the strong and selfish who maximize their own resources at the expense of others. But many biological systems, and especially human societies, are organized around altruistic, cooperative interactions. How can natural selection promote unselfish behaviour? Various mechanisms have been proposed, and a rich analysis of indirect reciprocity has recently emerged: I help you and somebody else helps me. The evolution of cooperation by indirect reciprocity leads to reputation building, morality judgement and complex social interactions with ever-increasing cognitive demands.
",karl sigmund,,2005.0,10.1038/nature04131,Nature,Nowak2005,Not available,,Nature,Not available,Evolution of indirect reciprocity,d77c8fecce82a3d7c3eecf49e2d0e5aa,http://dx.doi.org/10.1038/nature04131
17226,"Although economics has long been considered as a non-experimental science, the development of experimental economics and behavioral economics is amazingly rapid and affects most fields of research. This paper first attempts at defining the main contributions of experiments to economics. It also identifies four main trends in the development of experimental research in economics. The third contribution of this paper is to identify the major theoretical and methodological challenges faced by behavioral and experimental economics.
",marie-claire villeval,,2007.0,10.1057/palgrave.fp.8200119,French Politics,Villeval2007,Not available,,Nature,Not available,"Experimental Economics: Contributions, Recent Developments, and New Challenges",6f9fdbd26a0bade3652c0115dcf404c0,http://dx.doi.org/10.1057/palgrave.fp.8200119
17227,"The primary goal of these introductory notes is to promote the clear presentation and rigorous analysis of dynamic economic models, whether expressed in equation or agent-based form. A secondary goal is to promote the use of initial-value state-space modeling with its regard for historical process, for cause leading to effect without the external imposition of global coordination constraints on agent actions. Economists who claim to respect individual rationality should not be doing for their modeled economic agents what in reality these agents must do for themselves.
Eastern Economic Journal advance online publication, 14 March 2016; doi:10.1057/eej.2016.2",leigh tesfatsion,,2016.0,10.1057/eej.2016.2,Eastern Economic Journal,Tesfatsion2016,Not available,,Nature,Not available,Elements of Dynamic Economic Modeling: Presentation and Analysis,559c0b79bb30b269db7117a2808f7b65,http://dx.doi.org/10.1057/eej.2016.2
17228,"A retailer places a certain product (eg compact rental cars) for sale on the internet. Customers are invited to ‘name-their-own price’ for the product. The retailer will accept a given bid x with probability equal to p(.). It is assumed that customers know the function p(.) and will place bids that maximise their individual expected profits. Knowing that customers will behave this way, the retailer wants to choose the function p(.) that maximises the retailer's expected profit. We demonstrate that there is an explicit ɛ-optimal solution to this problem.
",john wilson,,2008.0,10.1057/rpm.2008.13,Journal of Revenue and Pricing Management,Wilson2008,Not available,,Nature,Not available,Optimal design of a name-your-own price channel,86368f40b88727835a6a7f37f9a67b0e,http://dx.doi.org/10.1057/rpm.2008.13
17229,"A retailer places a certain product (eg compact rental cars) for sale on the internet. Customers are invited to ‘name-their-own price’ for the product. The retailer will accept a given bid x with probability equal to p(.). It is assumed that customers know the function p(.) and will place bids that maximise their individual expected profits. Knowing that customers will behave this way, the retailer wants to choose the function p(.) that maximises the retailer's expected profit. We demonstrate that there is an explicit ɛ-optimal solution to this problem.
",guoren zhang,,2008.0,10.1057/rpm.2008.13,Journal of Revenue and Pricing Management,Wilson2008,Not available,,Nature,Not available,Optimal design of a name-your-own price channel,86368f40b88727835a6a7f37f9a67b0e,http://dx.doi.org/10.1057/rpm.2008.13
17230,An 'essentially unbeatable' algorithm for the popular card game points to strategies for solving real-life problems without having complete information.,philip ball,,2015.0,10.1038/nature.2015.16683,Nature News,Ball2015,Not available,,Nature,Not available,Game theorists crack poker,052354fb6d4754883bb1a590826e86ac,http://dx.doi.org/10.1038/nature.2015.16683
17231,"This article deals with the problem of coordinating a vertically separated channel under consignment contracts with a price-dependent revenue-sharing (R-S) function. We consider the retailer being a channel leader who offers the vendor a leave-it-or-take-it contract, and the vendor being a price-setting firm who sells the one-of-a-kind goods through the exclusive channel. Under such a setting, the retailer decides on the term of R-S contract, and the vendor determines the retail price of the product. For each item sold, the retailer deducts an agreed-upon percentage from the price and remits the balance to the vendor. We model the decision-making of the two firms as a Stackelberg game, and carry out equilibrium analysis for both the centralized and decentralized regimes of the channel with consideration of three kinds of contracts: the fixed, the price-increasing, and the price-decreasing R-S percentage. Our analysis reveals that the contract with a price-decreasing R-S function, for example, the fee structure adopted by eBay.com, performs worse than the others. It persists in a consistent bias: the price-decreasing R-S induces the vendor to choose a higher price, and the retailer tends to receive a lower R-S percentage, which leads to less demand quantity, less profit, and channel inefficiency.
",j-m chen,,2010.0,10.1057/jors.2010.174,Journal of the Operational Research Society,Chen2010,Not available,,Nature,Not available,On channel coordination under price-dependent revenue-sharing: can eBay's fee structure coordinate the channel?,e806e372fa5fbd9918851bc733781684,http://dx.doi.org/10.1057/jors.2010.174
17232,"This article deals with the problem of coordinating a vertically separated channel under consignment contracts with a price-dependent revenue-sharing (R-S) function. We consider the retailer being a channel leader who offers the vendor a leave-it-or-take-it contract, and the vendor being a price-setting firm who sells the one-of-a-kind goods through the exclusive channel. Under such a setting, the retailer decides on the term of R-S contract, and the vendor determines the retail price of the product. For each item sold, the retailer deducts an agreed-upon percentage from the price and remits the balance to the vendor. We model the decision-making of the two firms as a Stackelberg game, and carry out equilibrium analysis for both the centralized and decentralized regimes of the channel with consideration of three kinds of contracts: the fixed, the price-increasing, and the price-decreasing R-S percentage. Our analysis reveals that the contract with a price-decreasing R-S function, for example, the fee structure adopted by eBay.com, performs worse than the others. It persists in a consistent bias: the price-decreasing R-S induces the vendor to choose a higher price, and the retailer tends to receive a lower R-S percentage, which leads to less demand quantity, less profit, and channel inefficiency.
",h-l cheng,,2010.0,10.1057/jors.2010.174,Journal of the Operational Research Society,Chen2010,Not available,,Nature,Not available,On channel coordination under price-dependent revenue-sharing: can eBay's fee structure coordinate the channel?,e806e372fa5fbd9918851bc733781684,http://dx.doi.org/10.1057/jors.2010.174
17233,"This article deals with the problem of coordinating a vertically separated channel under consignment contracts with a price-dependent revenue-sharing (R-S) function. We consider the retailer being a channel leader who offers the vendor a leave-it-or-take-it contract, and the vendor being a price-setting firm who sells the one-of-a-kind goods through the exclusive channel. Under such a setting, the retailer decides on the term of R-S contract, and the vendor determines the retail price of the product. For each item sold, the retailer deducts an agreed-upon percentage from the price and remits the balance to the vendor. We model the decision-making of the two firms as a Stackelberg game, and carry out equilibrium analysis for both the centralized and decentralized regimes of the channel with consideration of three kinds of contracts: the fixed, the price-increasing, and the price-decreasing R-S percentage. Our analysis reveals that the contract with a price-decreasing R-S function, for example, the fee structure adopted by eBay.com, performs worse than the others. It persists in a consistent bias: the price-decreasing R-S induces the vendor to choose a higher price, and the retailer tends to receive a lower R-S percentage, which leads to less demand quantity, less profit, and channel inefficiency.
",i-c lin,,2010.0,10.1057/jors.2010.174,Journal of the Operational Research Society,Chen2010,Not available,,Nature,Not available,On channel coordination under price-dependent revenue-sharing: can eBay's fee structure coordinate the channel?,e806e372fa5fbd9918851bc733781684,http://dx.doi.org/10.1057/jors.2010.174
17234,"In this paper, we extend the general definition of Customer Lifetime Value (CLV) to include the value of influence associated with a consumer's network or connections. We introduce Connected Customer Lifetime Value (CCLV) as the present value of the net contribution associated with purchases made by a customer, plus the present value of the net contribution made by other customers due to the influence of that customer. We highlight social media engagement as an important process associated with that influence, and propose Customer Social Media Value (CSMV) to represent the value derived through said media engagement. Examples to illustrate CSMV and CCLV are provided.
",paul berger,,2011.0,10.1057/dddmp.2011.2,"Journal of Direct, Data and Digital Marketing Practice",Weinberg2011,Not available,,Nature,Not available,Connected customer lifetime value: The impact of social media,cf16d2132f301521c4a4ac0e7c8f9a7e,http://dx.doi.org/10.1057/dddmp.2011.2
17235,"Information Systems enjoyment has been identified as a desirable phenomenon, because it can drive various aspects of system use. In this study, we argue that it can also be a key ingredient in the formation of adverse outcomes, such as technology-related addictions, through the positive reinforcement it generates. We rely on several theoretical mechanisms and, consistent with previous studies, suggest that enjoyment can lead to presumably positive outcomes, such as high engagement. Nevertheless, it can also facilitate the development of a strong habit and reinforce it until it becomes a ‘bad habit’, which can help to form a strong pathological and maladaptive psychological dependency on the use of the IT artifact (i.e., technology addiction). We test and validate this dual effect of enjoyment, with a data set of 194 social networking website users analyzed with SEM techniques. The potential duality of MIS constructs and other implications for research and practice are discussed.
",ofir turel,,2012.0,10.1057/ejis.2012.1,European Journal of Information Systems,Turel2012,Not available,,Nature,Not available,The benefits and dangers of enjoyment with social networking websites,054ff6afc8452cb7a333749e5216db97,http://dx.doi.org/10.1057/ejis.2012.1
17236,"Information Systems enjoyment has been identified as a desirable phenomenon, because it can drive various aspects of system use. In this study, we argue that it can also be a key ingredient in the formation of adverse outcomes, such as technology-related addictions, through the positive reinforcement it generates. We rely on several theoretical mechanisms and, consistent with previous studies, suggest that enjoyment can lead to presumably positive outcomes, such as high engagement. Nevertheless, it can also facilitate the development of a strong habit and reinforce it until it becomes a ‘bad habit’, which can help to form a strong pathological and maladaptive psychological dependency on the use of the IT artifact (i.e., technology addiction). We test and validate this dual effect of enjoyment, with a data set of 194 social networking website users analyzed with SEM techniques. The potential duality of MIS constructs and other implications for research and practice are discussed.
",alexander serenko,,2012.0,10.1057/ejis.2012.1,European Journal of Information Systems,Turel2012,Not available,,Nature,Not available,The benefits and dangers of enjoyment with social networking websites,054ff6afc8452cb7a333749e5216db97,http://dx.doi.org/10.1057/ejis.2012.1
17237,"Social dilemmas are central to human society. Depletion of natural resources, climate protection, security of energy supply, and workplace collaborations are all examples of social dilemmas. Since cooperative behaviour in a social dilemma is individually costly, Nash equilibrium predicts that humans should not cooperate. Yet experimental studies show that people do cooperate even in anonymous one-shot interactions. In spite of the large number of participants in many modern social dilemmas, little is known about the effect of group size on cooperation. Does larger group size favour or prevent cooperation? We address this problem both experimentally and theoretically. Experimentally, we find that there is no general answer: it depends on the strategic situation. Specifically, we find that larger groups are more cooperative in the Public Goods game, but less cooperative in the N-person Prisoner's dilemma. Theoretically, we show that this behaviour is not consistent with either the Fehr & Schmidt model or (a one-parameter version of) the Charness & Rabin model, but it is consistent with the cooperative equilibrium model introduced by the second author.
",helene barcelo,Human behaviour,2015.0,10.1038/srep07937,Scientific Reports,Barcelo2015,Not available,,Nature,Not available,Group size effect on cooperation in one-shot social dilemmas,2eeb239be2db76bd8e7531754b422bce,http://dx.doi.org/10.1038/srep07937
17238,"Social dilemmas are central to human society. Depletion of natural resources, climate protection, security of energy supply, and workplace collaborations are all examples of social dilemmas. Since cooperative behaviour in a social dilemma is individually costly, Nash equilibrium predicts that humans should not cooperate. Yet experimental studies show that people do cooperate even in anonymous one-shot interactions. In spite of the large number of participants in many modern social dilemmas, little is known about the effect of group size on cooperation. Does larger group size favour or prevent cooperation? We address this problem both experimentally and theoretically. Experimentally, we find that there is no general answer: it depends on the strategic situation. Specifically, we find that larger groups are more cooperative in the Public Goods game, but less cooperative in the N-person Prisoner's dilemma. Theoretically, we show that this behaviour is not consistent with either the Fehr & Schmidt model or (a one-parameter version of) the Charness & Rabin model, but it is consistent with the cooperative equilibrium model introduced by the second author.
",helene barcelo,Social evolution,2015.0,10.1038/srep07937,Scientific Reports,Barcelo2015,Not available,,Nature,Not available,Group size effect on cooperation in one-shot social dilemmas,2eeb239be2db76bd8e7531754b422bce,http://dx.doi.org/10.1038/srep07937
17239,"Social dilemmas are central to human society. Depletion of natural resources, climate protection, security of energy supply, and workplace collaborations are all examples of social dilemmas. Since cooperative behaviour in a social dilemma is individually costly, Nash equilibrium predicts that humans should not cooperate. Yet experimental studies show that people do cooperate even in anonymous one-shot interactions. In spite of the large number of participants in many modern social dilemmas, little is known about the effect of group size on cooperation. Does larger group size favour or prevent cooperation? We address this problem both experimentally and theoretically. Experimentally, we find that there is no general answer: it depends on the strategic situation. Specifically, we find that larger groups are more cooperative in the Public Goods game, but less cooperative in the N-person Prisoner's dilemma. Theoretically, we show that this behaviour is not consistent with either the Fehr & Schmidt model or (a one-parameter version of) the Charness & Rabin model, but it is consistent with the cooperative equilibrium model introduced by the second author.
",valerio capraro,Human behaviour,2015.0,10.1038/srep07937,Scientific Reports,Barcelo2015,Not available,,Nature,Not available,Group size effect on cooperation in one-shot social dilemmas,2eeb239be2db76bd8e7531754b422bce,http://dx.doi.org/10.1038/srep07937
17240,"Social dilemmas are central to human society. Depletion of natural resources, climate protection, security of energy supply, and workplace collaborations are all examples of social dilemmas. Since cooperative behaviour in a social dilemma is individually costly, Nash equilibrium predicts that humans should not cooperate. Yet experimental studies show that people do cooperate even in anonymous one-shot interactions. In spite of the large number of participants in many modern social dilemmas, little is known about the effect of group size on cooperation. Does larger group size favour or prevent cooperation? We address this problem both experimentally and theoretically. Experimentally, we find that there is no general answer: it depends on the strategic situation. Specifically, we find that larger groups are more cooperative in the Public Goods game, but less cooperative in the N-person Prisoner's dilemma. Theoretically, we show that this behaviour is not consistent with either the Fehr & Schmidt model or (a one-parameter version of) the Charness & Rabin model, but it is consistent with the cooperative equilibrium model introduced by the second author.
",valerio capraro,Social evolution,2015.0,10.1038/srep07937,Scientific Reports,Barcelo2015,Not available,,Nature,Not available,Group size effect on cooperation in one-shot social dilemmas,2eeb239be2db76bd8e7531754b422bce,http://dx.doi.org/10.1038/srep07937
17241,We provide an overview of the paths taken to understand existence and efficiency of equilibrium in competitive insurance markets with adverse selection since the seminal work by Rothschild and Stiglitz (1976). A stream of recent work reconsiders the strategic foundations of competitive equilibrium by carefully modelling the market game.
,wanda mimra,,2014.0,10.1057/grir.2014.11,The Geneva Risk and Insurance Review,Mimra2014,Not available,,Nature,Not available,New Developments in the Theory of Adverse Selection in Competitive Insurance,b7314c0d99638c57821f946581ff0e27,http://dx.doi.org/10.1057/grir.2014.11
17242,We provide an overview of the paths taken to understand existence and efficiency of equilibrium in competitive insurance markets with adverse selection since the seminal work by Rothschild and Stiglitz (1976). A stream of recent work reconsiders the strategic foundations of competitive equilibrium by carefully modelling the market game.
,achim wambach,,2014.0,10.1057/grir.2014.11,The Geneva Risk and Insurance Review,Mimra2014,Not available,,Nature,Not available,New Developments in the Theory of Adverse Selection in Competitive Insurance,b7314c0d99638c57821f946581ff0e27,http://dx.doi.org/10.1057/grir.2014.11
17243,"An increasingly popular venue for advertisers is the keyword advertising on the web pages of search engines. Advertisers bid for keywords, where bid price determines ad placement, which in turn affects the response function, defined as the click-through rate. Advertisers typically have a fixed daily budget that should not be exceeded, so an advertiser must allocate the budget as productively as possible by selecting which keywords to use and then deciding how much to allocate for each keyword. We construct and examine a model for this selection and allocation process.
",ozgur ozluk,,2007.0,10.1057/palgrave.rpm.5160110,Journal of Revenue and Pricing Management,Özlük2007,Not available,,Nature,Not available,Allocating expenditures across keywords in search advertising,0f132c04bbf12b59733229d12ff02531,http://dx.doi.org/10.1057/palgrave.rpm.5160110
17244,"Bank regulation is supposed to reduce the probability of bank failure and, if a failure occurs, to contain the damage so that system-wide problems are unlikely. The current regulatory framework, known as Basel II, is based, among other things, on risk-adjusted capital requirements. This framework has failed in the recent global financial crisis. Some believe that one of the culprits is the exclusion of so-called shadow banks (for example, hedge funds and investment banks) from regulation. We find little support for this assumption in our simulations. To the contrary, our simulations reveal that extending the same regulation to more entities is likely to produce very synchronous behaviour and thus exacerbate contagion and market crashes. On the other hand, the new size-adjusted regulation (the so-called leverage ratio) that has been proposed in Basel III appears to be more robust.
",yvan lengwiler,,2013.0,10.1057/jbr.2013.20,Journal of Banking Regulation,Lengwiler2013,Not available,,Nature,Not available,Regulation and contagion of banks,5667f86a716bd2edbf4e3b26bc28e80f,http://dx.doi.org/10.1057/jbr.2013.20
17245,"An increasingly popular venue for advertisers is the keyword advertising on the web pages of search engines. Advertisers bid for keywords, where bid price determines ad placement, which in turn affects the response function, defined as the click-through rate. Advertisers typically have a fixed daily budget that should not be exceeded, so an advertiser must allocate the budget as productively as possible by selecting which keywords to use and then deciding how much to allocate for each keyword. We construct and examine a model for this selection and allocation process.
",susan cholette,,2007.0,10.1057/palgrave.rpm.5160110,Journal of Revenue and Pricing Management,Özlük2007,Not available,,Nature,Not available,Allocating expenditures across keywords in search advertising,0f132c04bbf12b59733229d12ff02531,http://dx.doi.org/10.1057/palgrave.rpm.5160110
17246,"We are now suffering through economic problems that are worse than those that buffeted us 35 years ago, when the Eastern Economic Association was born. Since then, we have not made a great deal of progress toward methods of observation and analysis that would make economics a truly empirical science and would provide a means to better policy. Much if not most of the profession is still mired in the traditional ways of doing micro (sitting in a chair and making it up) and macro (pretending the economy is a single person, writ large). Factionalism in the profession, based on political leanings, is still rife. Behavioral and experimental economists assume we can learn what we need to know about businesses from watching students playing games made up by their professors. Neuroeconomics is arguably nothing but a diversion from what we should be doing. Very few economists are engaging in direct observation of businesses, as they actually operate. More of such work is needed.
",barbara bergmann,,2009.0,10.1057/eej.2008.49,Eastern Economic Journal,Bergmann2009,Not available,,Nature,Not available,The Economy and the Economics Profession: Both Need Work,d008b6b36acd1ce55b34182ceedbdb1e,http://dx.doi.org/10.1057/eej.2008.49
17247,"This paper investigates the price formation of an artificial futures market with zero-intelligence traders. It extends the zero-intelligence model to speculative agents trading for immediacy on a futures exchange with open outcry, margin constraints, and real-time settlement. Like prior studies it finds that the imposition of scarcity, not intelligent optimization, is surprisingly good at producing allocative efficiency. The double auction trading mechanism even with open outcry and real-time settlement anchors prices to a dynamic Walrasian equilibrium, even when it is not unique. This study supports zero-intelligence agent-based methodology as a tool to isolate the impact of market microstructure, as opposed to information, on price formation.
",leanne ussher,,2008.0,10.1057/eej.2008.34,Eastern Economic Journal,Ussher2008,Not available,,Nature,Not available,A Speculative Futures Market with Zero-Intelligence,6d4cd1a71ab2274a0ee67de7cd612191,http://dx.doi.org/10.1057/eej.2008.34
17248,This paper examines the role and impact of taxation on sustainable forest management. It is shown that fiscal instruments neither reinforce nor substitute for traditional regulatory approaches and can actually undermine sustainability. The paper uses the reasoning at the root of the Faustmann solution to draw conclusions on the incentives for sustainable tropical forest exploitation. It proposes a bond mechanism as an alternative market-based instrument to encourage sustainable forest logging while reducing monitoring costs.
,luc leruth,,2001.0,10.2307/4621675,IMF Staff Papers,Leruth2001,Not available,,Nature,Not available,The Complier Pays Principle: The Limits of Fiscal Approaches toward Sustainable Forest Management,d028b95b0dabbfc1bd1a176ecd8e5c4c,http://dx.doi.org/10.2307/4621675
17249,This paper examines the role and impact of taxation on sustainable forest management. It is shown that fiscal instruments neither reinforce nor substitute for traditional regulatory approaches and can actually undermine sustainability. The paper uses the reasoning at the root of the Faustmann solution to draw conclusions on the incentives for sustainable tropical forest exploitation. It proposes a bond mechanism as an alternative market-based instrument to encourage sustainable forest logging while reducing monitoring costs.
,remi paris,,2001.0,10.2307/4621675,IMF Staff Papers,Leruth2001,Not available,,Nature,Not available,The Complier Pays Principle: The Limits of Fiscal Approaches toward Sustainable Forest Management,d028b95b0dabbfc1bd1a176ecd8e5c4c,http://dx.doi.org/10.2307/4621675
17250,This paper examines the role and impact of taxation on sustainable forest management. It is shown that fiscal instruments neither reinforce nor substitute for traditional regulatory approaches and can actually undermine sustainability. The paper uses the reasoning at the root of the Faustmann solution to draw conclusions on the incentives for sustainable tropical forest exploitation. It proposes a bond mechanism as an alternative market-based instrument to encourage sustainable forest logging while reducing monitoring costs.
,ivan ruzicka,,2001.0,10.2307/4621675,IMF Staff Papers,Leruth2001,Not available,,Nature,Not available,The Complier Pays Principle: The Limits of Fiscal Approaches toward Sustainable Forest Management,d028b95b0dabbfc1bd1a176ecd8e5c4c,http://dx.doi.org/10.2307/4621675
17251,"We consider an environment where players are involved in a public goods game and must decide repeatedly whether to make an individual contribution or not. However, players lack strategically relevant information about the game and about the other players in the population. The resulting behavior of players is completely uncoupled from such information, and the individual strategy adjustment dynamics are driven only by reinforcement feedbacks from each player's own past. We show that the resulting “directional learning” is sufficient to explain cooperative deviations away from the Nash equilibrium. We introduce the concept of k–strong equilibria, which nest both the Nash equilibrium and the Aumann-strong equilibrium as two special cases, and we show that, together with the parameters of the learning model, the maximal k–strength of equilibrium determines the stationary distribution. The provisioning of public goods can be secured even under adverse conditions, as long as players are sufficiently responsive to the changes in their own payoffs and adjust their actions accordingly. Substantial levels of public cooperation can thus be explained without arguments involving selflessness or social preferences, solely on the basis of uncoordinated directional (mis)learning.
",heinrich nax,Statistical physics,2015.0,10.1038/srep08010,Scientific Reports,Nax2015,Not available,,Nature,Not available,Directional learning and the provisioning of public goods,b0c9a480142a8a2c5482c1e31d4f5ffe,http://dx.doi.org/10.1038/srep08010
17252,"We consider an environment where players are involved in a public goods game and must decide repeatedly whether to make an individual contribution or not. However, players lack strategically relevant information about the game and about the other players in the population. The resulting behavior of players is completely uncoupled from such information, and the individual strategy adjustment dynamics are driven only by reinforcement feedbacks from each player's own past. We show that the resulting “directional learning” is sufficient to explain cooperative deviations away from the Nash equilibrium. We introduce the concept of k–strong equilibria, which nest both the Nash equilibrium and the Aumann-strong equilibrium as two special cases, and we show that, together with the parameters of the learning model, the maximal k–strength of equilibrium determines the stationary distribution. The provisioning of public goods can be secured even under adverse conditions, as long as players are sufficiently responsive to the changes in their own payoffs and adjust their actions accordingly. Substantial levels of public cooperation can thus be explained without arguments involving selflessness or social preferences, solely on the basis of uncoordinated directional (mis)learning.
",heinrich nax,Climate-change mitigation,2015.0,10.1038/srep08010,Scientific Reports,Nax2015,Not available,,Nature,Not available,Directional learning and the provisioning of public goods,b0c9a480142a8a2c5482c1e31d4f5ffe,http://dx.doi.org/10.1038/srep08010
17253,"We consider an environment where players are involved in a public goods game and must decide repeatedly whether to make an individual contribution or not. However, players lack strategically relevant information about the game and about the other players in the population. The resulting behavior of players is completely uncoupled from such information, and the individual strategy adjustment dynamics are driven only by reinforcement feedbacks from each player's own past. We show that the resulting “directional learning” is sufficient to explain cooperative deviations away from the Nash equilibrium. We introduce the concept of k–strong equilibria, which nest both the Nash equilibrium and the Aumann-strong equilibrium as two special cases, and we show that, together with the parameters of the learning model, the maximal k–strength of equilibrium determines the stationary distribution. The provisioning of public goods can be secured even under adverse conditions, as long as players are sufficiently responsive to the changes in their own payoffs and adjust their actions accordingly. Substantial levels of public cooperation can thus be explained without arguments involving selflessness or social preferences, solely on the basis of uncoordinated directional (mis)learning.
",heinrich nax,Phase transitions and critical phenomena,2015.0,10.1038/srep08010,Scientific Reports,Nax2015,Not available,,Nature,Not available,Directional learning and the provisioning of public goods,b0c9a480142a8a2c5482c1e31d4f5ffe,http://dx.doi.org/10.1038/srep08010
17254,"We consider an environment where players are involved in a public goods game and must decide repeatedly whether to make an individual contribution or not. However, players lack strategically relevant information about the game and about the other players in the population. The resulting behavior of players is completely uncoupled from such information, and the individual strategy adjustment dynamics are driven only by reinforcement feedbacks from each player's own past. We show that the resulting “directional learning” is sufficient to explain cooperative deviations away from the Nash equilibrium. We introduce the concept of k–strong equilibria, which nest both the Nash equilibrium and the Aumann-strong equilibrium as two special cases, and we show that, together with the parameters of the learning model, the maximal k–strength of equilibrium determines the stationary distribution. The provisioning of public goods can be secured even under adverse conditions, as long as players are sufficiently responsive to the changes in their own payoffs and adjust their actions accordingly. Substantial levels of public cooperation can thus be explained without arguments involving selflessness or social preferences, solely on the basis of uncoordinated directional (mis)learning.
",heinrich nax,Sustainability,2015.0,10.1038/srep08010,Scientific Reports,Nax2015,Not available,,Nature,Not available,Directional learning and the provisioning of public goods,b0c9a480142a8a2c5482c1e31d4f5ffe,http://dx.doi.org/10.1038/srep08010
17255,"Bank regulation is supposed to reduce the probability of bank failure and, if a failure occurs, to contain the damage so that system-wide problems are unlikely. The current regulatory framework, known as Basel II, is based, among other things, on risk-adjusted capital requirements. This framework has failed in the recent global financial crisis. Some believe that one of the culprits is the exclusion of so-called shadow banks (for example, hedge funds and investment banks) from regulation. We find little support for this assumption in our simulations. To the contrary, our simulations reveal that extending the same regulation to more entities is likely to produce very synchronous behaviour and thus exacerbate contagion and market crashes. On the other hand, the new size-adjusted regulation (the so-called leverage ratio) that has been proposed in Basel III appears to be more robust.
",dietmar maringer,,2013.0,10.1057/jbr.2013.20,Journal of Banking Regulation,Lengwiler2013,Not available,,Nature,Not available,Regulation and contagion of banks,5667f86a716bd2edbf4e3b26bc28e80f,http://dx.doi.org/10.1057/jbr.2013.20
17256,"We consider an environment where players are involved in a public goods game and must decide repeatedly whether to make an individual contribution or not. However, players lack strategically relevant information about the game and about the other players in the population. The resulting behavior of players is completely uncoupled from such information, and the individual strategy adjustment dynamics are driven only by reinforcement feedbacks from each player's own past. We show that the resulting “directional learning” is sufficient to explain cooperative deviations away from the Nash equilibrium. We introduce the concept of k–strong equilibria, which nest both the Nash equilibrium and the Aumann-strong equilibrium as two special cases, and we show that, together with the parameters of the learning model, the maximal k–strength of equilibrium determines the stationary distribution. The provisioning of public goods can be secured even under adverse conditions, as long as players are sufficiently responsive to the changes in their own payoffs and adjust their actions accordingly. Substantial levels of public cooperation can thus be explained without arguments involving selflessness or social preferences, solely on the basis of uncoordinated directional (mis)learning.
",matjaz perc,Statistical physics,2015.0,10.1038/srep08010,Scientific Reports,Nax2015,Not available,,Nature,Not available,Directional learning and the provisioning of public goods,b0c9a480142a8a2c5482c1e31d4f5ffe,http://dx.doi.org/10.1038/srep08010
17257,"We consider an environment where players are involved in a public goods game and must decide repeatedly whether to make an individual contribution or not. However, players lack strategically relevant information about the game and about the other players in the population. The resulting behavior of players is completely uncoupled from such information, and the individual strategy adjustment dynamics are driven only by reinforcement feedbacks from each player's own past. We show that the resulting “directional learning” is sufficient to explain cooperative deviations away from the Nash equilibrium. We introduce the concept of k–strong equilibria, which nest both the Nash equilibrium and the Aumann-strong equilibrium as two special cases, and we show that, together with the parameters of the learning model, the maximal k–strength of equilibrium determines the stationary distribution. The provisioning of public goods can be secured even under adverse conditions, as long as players are sufficiently responsive to the changes in their own payoffs and adjust their actions accordingly. Substantial levels of public cooperation can thus be explained without arguments involving selflessness or social preferences, solely on the basis of uncoordinated directional (mis)learning.
",matjaz perc,Climate-change mitigation,2015.0,10.1038/srep08010,Scientific Reports,Nax2015,Not available,,Nature,Not available,Directional learning and the provisioning of public goods,b0c9a480142a8a2c5482c1e31d4f5ffe,http://dx.doi.org/10.1038/srep08010
17258,"We consider an environment where players are involved in a public goods game and must decide repeatedly whether to make an individual contribution or not. However, players lack strategically relevant information about the game and about the other players in the population. The resulting behavior of players is completely uncoupled from such information, and the individual strategy adjustment dynamics are driven only by reinforcement feedbacks from each player's own past. We show that the resulting “directional learning” is sufficient to explain cooperative deviations away from the Nash equilibrium. We introduce the concept of k–strong equilibria, which nest both the Nash equilibrium and the Aumann-strong equilibrium as two special cases, and we show that, together with the parameters of the learning model, the maximal k–strength of equilibrium determines the stationary distribution. The provisioning of public goods can be secured even under adverse conditions, as long as players are sufficiently responsive to the changes in their own payoffs and adjust their actions accordingly. Substantial levels of public cooperation can thus be explained without arguments involving selflessness or social preferences, solely on the basis of uncoordinated directional (mis)learning.
",matjaz perc,Phase transitions and critical phenomena,2015.0,10.1038/srep08010,Scientific Reports,Nax2015,Not available,,Nature,Not available,Directional learning and the provisioning of public goods,b0c9a480142a8a2c5482c1e31d4f5ffe,http://dx.doi.org/10.1038/srep08010
17259,"We consider an environment where players are involved in a public goods game and must decide repeatedly whether to make an individual contribution or not. However, players lack strategically relevant information about the game and about the other players in the population. The resulting behavior of players is completely uncoupled from such information, and the individual strategy adjustment dynamics are driven only by reinforcement feedbacks from each player's own past. We show that the resulting “directional learning” is sufficient to explain cooperative deviations away from the Nash equilibrium. We introduce the concept of k–strong equilibria, which nest both the Nash equilibrium and the Aumann-strong equilibrium as two special cases, and we show that, together with the parameters of the learning model, the maximal k–strength of equilibrium determines the stationary distribution. The provisioning of public goods can be secured even under adverse conditions, as long as players are sufficiently responsive to the changes in their own payoffs and adjust their actions accordingly. Substantial levels of public cooperation can thus be explained without arguments involving selflessness or social preferences, solely on the basis of uncoordinated directional (mis)learning.
",matjaz perc,Sustainability,2015.0,10.1038/srep08010,Scientific Reports,Nax2015,Not available,,Nature,Not available,Directional learning and the provisioning of public goods,b0c9a480142a8a2c5482c1e31d4f5ffe,http://dx.doi.org/10.1038/srep08010
17260,"As the recipients of the 2012 science Nobel prizes gather in Stockholm to celebrate and be celebrated, News & Views shares some expert opinions on the achievements honoured.",yan chen,Economics,2012.0,10.1038/492054a,Nature,Chen2012,Not available,,Nature,Not available,NOBEL 2012 Economics: Stable allocations and market design,59c20c284c8e66f42398c69455fb5d49,http://dx.doi.org/10.1038/492054a
17261,"As the recipients of the 2012 science Nobel prizes gather in Stockholm to celebrate and be celebrated, News & Views shares some expert opinions on the achievements honoured.",jacob goeree,Economics,2012.0,10.1038/492054a,Nature,Chen2012,Not available,,Nature,Not available,NOBEL 2012 Economics: Stable allocations and market design,59c20c284c8e66f42398c69455fb5d49,http://dx.doi.org/10.1038/492054a
17262,"The paper provides a state-of-the-art review of several innovative advances in culture and international business (IB) to stimulate new avenues for future research. We first review the issues surrounding cultural convergence and divergence, and the processes underlying cultural changes. We then examine novel constructs for characterizing cultures, and how to enhance the precision of cultural models by pinpointing when cultural effects are important. Finally, we examine the usefulness of experimental methods, which are rarely used by IB researchers. Implications of these path-breaking approaches for future research on culture and IB are discussed.
",kwok leung,,2005.0,10.1057/palgrave.jibs.8400150,Journal of International Business Studies,Leung2005,Not available,,Nature,Not available,Culture and international business: recent advances and their implications for future research,3f6893df61dc1eed599c85e84bb30853,http://dx.doi.org/10.1057/palgrave.jibs.8400150
17263,"The paper provides a state-of-the-art review of several innovative advances in culture and international business (IB) to stimulate new avenues for future research. We first review the issues surrounding cultural convergence and divergence, and the processes underlying cultural changes. We then examine novel constructs for characterizing cultures, and how to enhance the precision of cultural models by pinpointing when cultural effects are important. Finally, we examine the usefulness of experimental methods, which are rarely used by IB researchers. Implications of these path-breaking approaches for future research on culture and IB are discussed.
",rabi bhagat,,2005.0,10.1057/palgrave.jibs.8400150,Journal of International Business Studies,Leung2005,Not available,,Nature,Not available,Culture and international business: recent advances and their implications for future research,3f6893df61dc1eed599c85e84bb30853,http://dx.doi.org/10.1057/palgrave.jibs.8400150
17264,"The paper provides a state-of-the-art review of several innovative advances in culture and international business (IB) to stimulate new avenues for future research. We first review the issues surrounding cultural convergence and divergence, and the processes underlying cultural changes. We then examine novel constructs for characterizing cultures, and how to enhance the precision of cultural models by pinpointing when cultural effects are important. Finally, we examine the usefulness of experimental methods, which are rarely used by IB researchers. Implications of these path-breaking approaches for future research on culture and IB are discussed.
",nancy buchan,,2005.0,10.1057/palgrave.jibs.8400150,Journal of International Business Studies,Leung2005,Not available,,Nature,Not available,Culture and international business: recent advances and their implications for future research,3f6893df61dc1eed599c85e84bb30853,http://dx.doi.org/10.1057/palgrave.jibs.8400150
17265,"The paper provides a state-of-the-art review of several innovative advances in culture and international business (IB) to stimulate new avenues for future research. We first review the issues surrounding cultural convergence and divergence, and the processes underlying cultural changes. We then examine novel constructs for characterizing cultures, and how to enhance the precision of cultural models by pinpointing when cultural effects are important. Finally, we examine the usefulness of experimental methods, which are rarely used by IB researchers. Implications of these path-breaking approaches for future research on culture and IB are discussed.
",miriam erez,,2005.0,10.1057/palgrave.jibs.8400150,Journal of International Business Studies,Leung2005,Not available,,Nature,Not available,Culture and international business: recent advances and their implications for future research,3f6893df61dc1eed599c85e84bb30853,http://dx.doi.org/10.1057/palgrave.jibs.8400150
17266,"While organisational size is a popular construct in information systems (IS) research, findings from its use have been inconsistent. Few studies have explored this inconsistency or attempted to address this problem. This paper uses Churchill's measure development paradigm to conduct three separate but related investigations into the size construct. Study 1 explored the domain and dimensions of size. Some 2000 research papers published in six leading IS journals over an 11-year period were read in order to determine what researchers thought size meant and how they measured it. The study found 21 constructs underpinning the size construct and 25 ways of measuring size, but no clear relationship between size meaning and measurement. Study 2 assessed the construct's content validity using a concept map exercise involving 41 participants. Multidimensional scaling clustered the constructs into three conceptual groups. Study 3 administered the size construct in a survey with a sample of 163 Australian firms. The study found that the data supported the constructs observed in Study 2 and that a group of eight constructs could be used to differentiate between smaller and larger firms in the sample. Analysis revealed that organisational levels, risk aversion, geographic distribution and employment reflected respondents’ self-nominated size.
",sigi goode,,2009.0,10.1057/ejis.2009.2,European Journal of Information Systems,Goode2009,Not available,,Nature,Not available,"Rethinking organisational size in IS research: meaning, measurement and redevelopment",8ff957f6af480044dbe7a57e0fc82cdc,http://dx.doi.org/10.1057/ejis.2009.2
17267,"The paper provides a state-of-the-art review of several innovative advances in culture and international business (IB) to stimulate new avenues for future research. We first review the issues surrounding cultural convergence and divergence, and the processes underlying cultural changes. We then examine novel constructs for characterizing cultures, and how to enhance the precision of cultural models by pinpointing when cultural effects are important. Finally, we examine the usefulness of experimental methods, which are rarely used by IB researchers. Implications of these path-breaking approaches for future research on culture and IB are discussed.
",cristina gibson,,2005.0,10.1057/palgrave.jibs.8400150,Journal of International Business Studies,Leung2005,Not available,,Nature,Not available,Culture and international business: recent advances and their implications for future research,3f6893df61dc1eed599c85e84bb30853,http://dx.doi.org/10.1057/palgrave.jibs.8400150
17268,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method; the fact that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction to minimize the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework to deal with the broader problem of controlling collective dynamics in complex systems with potential applications in social, economic and political systems.
",ji-qiang zhang,Nonlinear phenomena,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17269,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method; the fact that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction to minimize the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework to deal with the broader problem of controlling collective dynamics in complex systems with potential applications in social, economic and political systems.
",ji-qiang zhang,Statistical physics,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17270,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method: the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for dealing with the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic and political systems.
",zi-gang huang,Nonlinear phenomena,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17271,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method: the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for dealing with the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic and political systems.
",zi-gang huang,Statistical physics,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17272,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method: the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for dealing with the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic and political systems.
",zhi-xi wu,Nonlinear phenomena,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17273,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method: the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for dealing with the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic and political systems.
",zhi-xi wu,Statistical physics,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17274,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method: the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for dealing with the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic and political systems.
",riqi su,Nonlinear phenomena,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17275,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method: the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for dealing with the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic and political systems.
",riqi su,Statistical physics,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17276,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method: the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for dealing with the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic and political systems.
",ying-cheng lai,Nonlinear phenomena,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17277,"While organisational size is a popular construct in information systems (IS) research, findings from its use have been inconsistent. Few studies have explored this inconsistency or attempted to address this problem. This paper uses Churchill's measure development paradigm to conduct three separate but related investigations into the size construct. Study 1 explored the domain and dimensions of size. Some 2000 research papers published in six leading IS journals over an 11-year period were read in order to determine what researchers thought size meant and how they measured it. The study found 21 constructs underpinning the size construct and 25 ways of measuring size, but no clear relationship between size meaning and measurement. Study 2 assessed the construct's content validity using a concept map exercise involving 41 participants. Multidimensional scaling clustered the constructs into three conceptual groups. Study 3 administered the size construct in a survey with a sample of 163 Australian firms. The study found that the data supported the constructs observed in Study 2 and that a group of eight constructs could be used to differentiate between smaller and larger firms in the sample. Analysis revealed that organisational levels, risk aversion, geographic distribution and employment reflected respondents’ self-nominated size.
",shirley gregor,,2009.0,10.1057/ejis.2009.2,European Journal of Information Systems,Goode2009,Not available,,Nature,Not available,"Rethinking organisational size in IS research: meaning, measurement and redevelopment",8ff957f6af480044dbe7a57e0fc82cdc,http://dx.doi.org/10.1057/ejis.2009.2
17278,"Resource allocation takes place in various types of real-world complex systems such as urban traffic, social services institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as minority games. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method: the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning patterns and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for dealing with the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic and political systems.
",ying-cheng lai,Statistical physics,2016.0,10.1038/srep20925,Scientific Reports,Zhang2016,Not available,,Nature,Not available,Controlling herding in minority game systems,bbbce6216ef61705721e52ad651b3fea,http://dx.doi.org/10.1038/srep20925
17279,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",giles story,Motivation,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17280,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",giles story,Social behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17281,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",giles story,Human behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17282,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",ivo vlaev,Motivation,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17283,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",ivo vlaev,Social behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17284,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",ivo vlaev,Human behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17285,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",robert metcalfe,Motivation,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17286,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",robert metcalfe,Social behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17287,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",robert metcalfe,Human behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17288,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",molly crockett,Motivation,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17289,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",molly crockett,Social behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17290,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",molly crockett,Human behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17291,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",zeb kurth-nelson,Motivation,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17292,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",zeb kurth-nelson,Social behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17293,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",zeb kurth-nelson,Human behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17294,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",ara darzi,Motivation,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17295,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",ara darzi,Social behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17296,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",ara darzi,Human behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17297,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",raymond dolan,Motivation,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17298,"This study relies on knowledge regarding the neuroplasticity of dual-system components that govern addiction and excessive behavior and suggests that alterations in the grey matter volumes, i.e., brain morphology, of specific regions of interest are associated with technology-related addictions. Using voxel based morphometry (VBM) applied to structural Magnetic Resonance Imaging (MRI) scans of twenty social network site (SNS) users with varying degrees of SNS addiction, we show that SNS addiction is associated with a presumably more efficient impulsive brain system, manifested through reduced grey matter volumes in the amygdala bilaterally (but not with structural differences in the Nucleus Accumbens). In this regard, SNS addiction is similar in terms of brain anatomy alterations to other (substance, gambling etc.) addictions. We also show that in contrast to other addictions in which the anterior-/ mid- cingulate cortex is impaired and fails to support the needed inhibition, which manifests through reduced grey matter volumes, this region is presumed to be healthy in our sample and its grey matter volume is positively correlated with one’s level of SNS addiction. These findings portray an anatomical morphology model of SNS addiction and point to brain morphology similarities and differences between technology addictions and substance and gambling addictions.
",qinghua he,Brain,2017.0,10.1038/srep45064,Scientific Reports,He2017,Not available,,Nature,Not available,Brain anatomy alterations associated with Social Networking Site (SNS) addiction,355447eac1ec4959117118d905cd56d1,http://dx.doi.org/10.1038/srep45064
17300,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",raymond dolan,Social behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17301,"People show empathic responses to others’ pain, yet how they choose to apportion pain between themselves and others is not well understood. To address this question, we observed choices to reapportion social allocations of painful stimuli and, for comparison, also elicited equivalent choices with money. On average people sought to equalize allocations of both pain and money, in a manner which indicated that inequality carried an increasing marginal cost. Preferences for pain were more altruistic than for money, with several participants assigning more than half the pain to themselves. Our data indicate that, given concern for others, the fundamental principle of diminishing marginal utility motivates spreading costs across individuals. A model incorporating this assumption outperformed existing models of social utility in explaining the data. By implementing selected allocations for real, we also found that while inequality per se did not influence pain perception, altruistic behavior had an intrinsic analgesic effect for the recipient.
",raymond dolan,Human behaviour,2015.0,10.1038/srep15389,Scientific Reports,Story2015,Not available,,Nature,Not available,Social redistribution of pain and money,b982504648557e2db54c60febb15f5cf,http://dx.doi.org/10.1038/srep15389
17302,One of the prime tenets of most economists is that markets can efficiently set prices that match the producer's costs with the benefits the consumers receive. But what is a market anyway?
,paul wallich,,1992.0,10.1038/scientificamerican0892-121,Scientific American,Wallich1992,Not available,,Nature,Not available,Experimenting with the Invisible Hand,1acdafa672b50b5afd44c2333b9fc9a1,http://dx.doi.org/10.1038/scientificamerican0892-121
17303,"We study dynamic games between two providers — an entrant and an incumbent — each with fixed capacity, who compete to sell in both a forward market and a spot market. We analyse two types of games between the providers: (a) a sequential game where the incumbent plays first followed by the entrant and (b) a repeated game where both providers make simultaneous decisions but do this repeatedly an infinite number of times. Demand is either from a single buyer or a population of independent consumers. We identify outcomes for the sequential game for varying levels of demand. For the repeated game, we identify the existence of subgame-perfect Nash equilibria and show how the two providers can obtain higher average revenues by implicit collusion. The study has implications for revenue management markets where providers have dynamic competitive interactions rather than a single static interaction.
",guillermo gallego,,2006.0,10.1057/palgrave.rpm.5160020,Journal of Revenue and Pricing Management,Gallego2006,Not available,,Nature,Not available,Dynamic revenue management games with forward and spot markets,469072effa62964ad7224bbf317f6ae2,http://dx.doi.org/10.1057/palgrave.rpm.5160020
17304,"We study dynamic games between two providers — an entrant and an incumbent — each with fixed capacity, who compete to sell in both a forward market and a spot market. We analyse two types of games between the providers: (a) a sequential game where the incumbent plays first followed by the entrant and (b) a repeated game where both providers make simultaneous decisions but do this repeatedly an infinite number of times. Demand is either from a single buyer or a population of independent consumers. We identify outcomes for the sequential game for varying levels of demand. For the repeated game, we identify the existence of subgame-perfect Nash equilibria and show how the two providers can obtain higher average revenues by implicit collusion. The study has implications for revenue management markets where providers have dynamic competitive interactions rather than a single static interaction.
",srinivas krishnamoorthy,,2006.0,10.1057/palgrave.rpm.5160020,Journal of Revenue and Pricing Management,Gallego2006,Not available,,Nature,Not available,Dynamic revenue management games with forward and spot markets,469072effa62964ad7224bbf317f6ae2,http://dx.doi.org/10.1057/palgrave.rpm.5160020
17305,"We study dynamic games between two providers — an entrant and an incumbent — each with fixed capacity, who compete to sell in both a forward market and a spot market. We analyse two types of games between the providers: (a) a sequential game where the incumbent plays first followed by the entrant and (b) a repeated game where both providers make simultaneous decisions but do this repeatedly an infinite number of times. Demand is either from a single buyer or a population of independent consumers. We identify outcomes for the sequential game for varying levels of demand. For the repeated game, we identify the existence of subgame-perfect Nash equilibria and show how the two providers can obtain higher average revenues by implicit collusion. The study has implications for revenue management markets where providers have dynamic competitive interactions rather than a single static interaction.
",robert phillips,,2006.0,10.1057/palgrave.rpm.5160020,Journal of Revenue and Pricing Management,Gallego2006,Not available,,Nature,Not available,Dynamic revenue management games with forward and spot markets,469072effa62964ad7224bbf317f6ae2,http://dx.doi.org/10.1057/palgrave.rpm.5160020
17306,"User loyalty or continued use is critical to the survival and development of any website. Focusing on the social network services (SNSs) context, this study proposes a research model for investigating individuals’ use motivations and the moderating role of habit with regard to gratification and continuance intention. This research integrates two influential media communication theories, media system dependency (MSD) and uses and gratifications, to examine SNSs-related behaviors. To comprehend online users’ motivations in depth, three motivations derived from MSD (understanding, orientation and play dependency relations) are operationalized as reflective, second-order constructs. The three motivations are theorized to affect parasocial interaction positively, and parasocial interaction is hypothesized to positively affect the gratification that individuals derive from SNSs usage. Furthermore, this study hypothesizes that gratification positively affects individuals’ continuance intention. Finally, we theorize that habit moderates the impact of gratification on continuance intention. Data collected from 657 Facebook users provide strong support for all six hypotheses. The results indicate that individuals’ motivations (i.e., the understanding, orientation and play dependency relations) positively affect parasocial interaction, which in turn has a positive effect on gratification, and subsequently continuance intention. In addition, the results show that habit has a small but negative moderating effect on the relationship between gratification and continuance intention. Implications for theory and practice are discussed, and suggestions are made for future research.
",chao-min chiu,,2014.0,10.1057/ejis.2014.9,European Journal of Information Systems,Chiu2014,Not available,,Nature,Not available,Examining the antecedents of user gratification and its effects on individuals’ social network services usage: the moderating role of habit,28a351c3d170304075b35628dc161356,http://dx.doi.org/10.1057/ejis.2014.9
17307,"User loyalty or continued use is critical to the survival and development of any website. Focusing on the social network services (SNSs) context, this study proposes a research model for investigating individuals’ use motivations and the moderating role of habit with regard to gratification and continuance intention. This research integrates two influential media communication theories, media system dependency (MSD) and uses and gratifications, to examine SNSs-related behaviors. To comprehend online users’ motivations in depth, three motivations derived from MSD (understanding, orientation and play dependency relations) are operationalized as reflective, second-order constructs. The three motivations are theorized to affect parasocial interaction positively, and parasocial interaction is hypothesized to positively affect the gratification that individuals derive from SNSs usage. Furthermore, this study hypothesizes that gratification positively affects individuals’ continuance intention. Finally, we theorize that habit moderates the impact of gratification on continuance intention. Data collected from 657 Facebook users provide strong support for all six hypotheses. The results indicate that individuals’ motivations (i.e., the understanding, orientation and play dependency relations) positively affect parasocial interaction, which in turn has a positive effect on gratification, and subsequently continuance intention. In addition, the results show that habit has a small but negative moderating effect on the relationship between gratification and continuance intention. Implications for theory and practice are discussed, and suggestions are made for future research.
",hsin-yi huang,,2014.0,10.1057/ejis.2014.9,European Journal of Information Systems,Chiu2014,Not available,,Nature,Not available,Examining the antecedents of user gratification and its effects on individuals’ social network services usage: the moderating role of habit,28a351c3d170304075b35628dc161356,http://dx.doi.org/10.1057/ejis.2014.9
17308,"Information and communication technologies have given rise to a new type of firm, the ibusiness firm. These firms offer a platform that allows users to interact with each other and generate value through user co-creation of content. Because of this, ibusiness firms face different challenges when they internationalize compared with traditional firms, even those online. In this article we extend existing internationalization theory to encompass this new type of organization. We theorize that because ibusiness firms produce value through the creation and coordination of a network of users, these firms tend to suffer greater liabilities of outsidership when expanding abroad and therefore concentrate on network and diffusion-based user adoption processes as they internationalize. Based on a multi-case investigation of a sample of ibusiness firms, we develop new theory and testable hypotheses. Thus, we make an important contribution by expanding internationalization theory to a new set of firms.
",keith brouthers,,2015.0,10.1057/jibs.2015.20,Journal of International Business Studies,Brouthers2015,Not available,,Nature,Not available,Explaining the internationalization of ibusiness firms,ee400dbb7277e9b69414992a610564ed,http://dx.doi.org/10.1057/jibs.2015.20
17309,"Information and communication technologies have given rise to a new type of firm, the ibusiness firm. These firms offer a platform that allows users to interact with each other and generate value through user co-creation of content. Because of this, ibusiness firms face different challenges when they internationalize compared with traditional firms, even those online. In this article we extend existing internationalization theory to encompass this new type of organization. We theorize that because ibusiness firms produce value through the creation and coordination of a network of users, these firms tend to suffer greater liabilities of outsidership when expanding abroad and therefore concentrate on network and diffusion-based user adoption processes as they internationalize. Based on a multi-case investigation of a sample of ibusiness firms, we develop new theory and testable hypotheses. Thus, we make an important contribution by expanding internationalization theory to a new set of firms.
",kim geisser,,2015.0,10.1057/jibs.2015.20,Journal of International Business Studies,Brouthers2015,Not available,,Nature,Not available,Explaining the internationalization of ibusiness firms,ee400dbb7277e9b69414992a610564ed,http://dx.doi.org/10.1057/jibs.2015.20
17311,"Information and communication technologies have given rise to a new type of firm, the ibusiness firm. These firms offer a platform that allows users to interact with each other and generate value through user co-creation of content. Because of this, ibusiness firms face different challenges when they internationalize compared with traditional firms, even those online. In this article we extend existing internationalization theory to encompass this new type of organization. We theorize that because ibusiness firms produce value through the creation and coordination of a network of users, these firms tend to suffer greater liabilities of outsidership when expanding abroad and therefore concentrate on network and diffusion-based user adoption processes as they internationalize. Based on a multi-case investigation of a sample of ibusiness firms, we develop new theory and testable hypotheses. Thus, we make an important contribution by expanding internationalization theory to a new set of firms.
",franz rothlauf,,2015.0,10.1057/jibs.2015.20,Journal of International Business Studies,Brouthers2015,Not available,,Nature,Not available,Explaining the internationalization of ibusiness firms,ee400dbb7277e9b69414992a610564ed,http://dx.doi.org/10.1057/jibs.2015.20
17312,"Online game addiction has become a common phenomenon that affects many individuals and societies. In this study we rely on the functionalist perspective of human behavior and propose and test a balanced model of the antecedents of online game addiction among adolescents, which simultaneously focuses on motivating, and prevention and harm reduction forces. First, a sample of 163 adolescents was used for validating and refining a survey instrument. Second, survey data collected from 623 adolescents were analyzed with Partial Least Squares techniques. The findings point to several functional needs (e.g., need for relationship and need for escapism) that drive online game playing and addiction, as well as to several prevention and harm reduction factors (e.g., education, attention switching activities) that reduce game playing time and alleviate online game addiction. The effects of motivation and prevention factors on online game addiction are often partially mediated by online game playing. Implications for research and practice are discussed.
",zhengchuan xu,,2011.0,10.1057/ejis.2011.56,European Journal of Information Systems,Xu2011,Not available,,Nature,Not available,Online game addiction among adolescents: motivation and prevention factors,4de1519eb73295b47e8397c0e15ba715,http://dx.doi.org/10.1057/ejis.2011.56
17313,"Online game addiction has become a common phenomenon that affects many individuals and societies. In this study we rely on the functionalist perspective of human behavior and propose and test a balanced model of the antecedents of online game addiction among adolescents, which simultaneously focuses on motivating, and prevention and harm reduction forces. First, a sample of 163 adolescents was used for validating and refining a survey instrument. Second, survey data collected from 623 adolescents were analyzed with Partial Least Squares techniques. The findings point to several functional needs (e.g., need for relationship and need for escapism) that drive online game playing and addiction, as well as to several prevention and harm reduction factors (e.g., education, attention switching activities) that reduce game playing time and alleviate online game addiction. The effects of motivation and prevention factors on online game addiction are often partially mediated by online game playing. Implications for research and practice are discussed.
",ofir turel,,2011.0,10.1057/ejis.2011.56,European Journal of Information Systems,Xu2011,Not available,,Nature,Not available,Online game addiction among adolescents: motivation and prevention factors,4de1519eb73295b47e8397c0e15ba715,http://dx.doi.org/10.1057/ejis.2011.56
17314,"Online game addiction has become a common phenomenon that affects many individuals and societies. In this study we rely on the functionalist perspective of human behavior and propose and test a balanced model of the antecedents of online game addiction among adolescents, which simultaneously focuses on motivating, and prevention and harm reduction forces. First, a sample of 163 adolescents was used for validating and refining a survey instrument. Second, survey data collected from 623 adolescents were analyzed with Partial Least Squares techniques. The findings point to several functional needs (e.g., need for relationship and need for escapism) that drive online game playing and addiction, as well as to several prevention and harm reduction factors (e.g., education, attention switching activities) that reduce game playing time and alleviate online game addiction. The effects of motivation and prevention factors on online game addiction are often partially mediated by online game playing. Implications for research and practice are discussed.
",yufei yuan,,2011.0,10.1057/ejis.2011.56,European Journal of Information Systems,Xu2011,Not available,,Nature,Not available,Online game addiction among adolescents: motivation and prevention factors,4de1519eb73295b47e8397c0e15ba715,http://dx.doi.org/10.1057/ejis.2011.56
17315,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",carlos cueva,Human behaviour,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17316,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",carlos cueva,Decision,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17317,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",r. roberts,Human behaviour,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17318,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",r. roberts,Decision,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17319,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",tom spencer,Human behaviour,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17320,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",tom spencer,Decision,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17322,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",nisha rani,Human behaviour,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17323,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",nisha rani,Decision,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17324,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",michelle tempest,Human behaviour,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17325,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",michelle tempest,Decision,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17326,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",philippe tobler,Human behaviour,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17327,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",philippe tobler,Decision,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17328,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",joe herbert,Human behaviour,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17329,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",joe herbert,Decision,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17330,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",aldo rustichini,Human behaviour,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17331,"It is widely known that financial markets can become dangerously unstable, yet it is unclear why. Recent research has highlighted the possibility that endogenous hormones, in particular testosterone and cortisol, may critically influence traders’ financial decision making. Here we show that cortisol, a hormone that modulates the response to physical or psychological stress, predicts instability in financial markets. Specifically, we recorded salivary levels of cortisol and testosterone in people participating in an experimental asset market (N = 142) and found that individual and aggregate levels of endogenous cortisol predict subsequent risk-taking and price instability. We then administered either cortisol (single oral dose of 100 mg hydrocortisone, N = 34) or testosterone (three doses of 10 g transdermal 1% testosterone gel over 48 hours, N = 41) to young males before they played an asset trading game. We found that both cortisol and testosterone shifted investment towards riskier assets. Cortisol appears to affect risk preferences directly, whereas testosterone operates by inducing increased optimism about future price changes. Our results suggest that changes in both cortisol and testosterone could play a destabilizing role in financial markets through increased risk taking behaviour, acting via different behavioural pathways.
",aldo rustichini,Decision,2015.0,10.1038/srep11206,Scientific Reports,Cueva2015,Not available,,Nature,Not available,Cortisol and testosterone increase financial risk taking and may destabilize markets,889445aa40e1f49144f296fe97e5c8a5,http://dx.doi.org/10.1038/srep11206
17332,"An understanding of Over-The-Counter (OTC) derivatives – particularly Credit Default Swaps (CDSs) – is essential, because these derivatives are often accused of being toxic and a significant contributor to the financial turmoil of 2008, and therefore also indirectly causing the sickly world economic growth experienced since. This is the position embraced by high-level politicians at G20, in the EU and in the United States; therefore, an overhaul of the OTC derivatives – including the CDS market – appears inevitable. Nonetheless, this article questions this perception of CDSs as an exclusive detrimental product and argues that, although problems of concentration and interconnectedness in the opaque CDS market arguably amplified the crisis, CDSs merely reflected the crash of the US mortgage market. If this is not recognised by politicians and regulators, the likelihood of incomplete reforms causing unintended consequences seems inescapable. This scenario already seems to be playing out in Europe and also potentially in the United States, although an analysis of the US reform is beyond the scope of this article. This conclusion is reached by evaluating three proposals by the European Commission: the European Market Infrastructure Regulation, the Market in Financial Instruments Directive and the Capital Requirement Directive. In short, they intend to mandate or heavily incentivise the trading of CDSs through Central Counterparties (CCPs) and exchanges; enhance OTC-traded CDSs’ capital requirements; and require trades to be reported to designated trade repositories. Although these proposals are well intentioned, the unintended consequences could be considerable, particularly with regard to the extent to which credit risk hedging is possible through CDSs owing to potential prohibitive OTC capital, CCP and exchange-trading requirements. 
Rather, it is suggested that the reform could be detrimental to its inherent purpose, financial stability, as a concentration of credit risk can accumulate in CDS CCPs. At the same time, market participants' ability to hedge credit risk in the regular OTC market is diminished, positioning CDSs in a shrinking vacuum between the OTC and exchange-traded markets. In a worst-case scenario, this would undermine the liquidity of the CDS market and make credit risk hedging significantly more difficult and expensive for end-users and ultimately society, while crucial information on entities' creditworthiness deteriorates.
",frederik domler,,2012.0,10.1057/jbr.2012.2,Journal of Banking Regulation,Dømler2012,Not available,,Nature,Not available,A critical evaluation of the European credit default swap reform: Its challenges and adverse effects as a result of insufficient assumptions,1246a8ae04e7dd506256c812e5cade56,http://dx.doi.org/10.1057/jbr.2012.2
17333,"
Background:
Employment disparities are known to exist between lean and corpulent people, for example, corpulent people are less likely to be hired and get lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by peoples’ blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined, who performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean than to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",b kubera,Cognitive neuroscience,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17334,"
Background:
Employment disparities are known to exist between lean and corpulent people, for example, corpulent people are less likely to be hired and get lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by peoples’ blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined, who performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean than to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",b kubera,Metabolism,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17335,"
Background:
Employment disparities are known to exist between lean and corpulent people, for example, corpulent people are less likely to be hired and get lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by peoples’ blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined, who performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean than to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",j klement,Cognitive neuroscience,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17336,"
Background:
Employment disparities are known to exist between lean and corpulent people, for example, corpulent people are less likely to be hired and get lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by peoples’ blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined, who performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean than to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",j klement,Metabolism,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17337,"
Background:
Employment disparities are known to exist between lean and corpulent people, for example, corpulent people are less likely to be hired and get lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by peoples’ blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined, who performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean than to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",c wagner,Cognitive neuroscience,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17338,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",c wagner,Metabolism,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17339,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",c radel,Cognitive neuroscience,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17340,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",c radel,Metabolism,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17341,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",j eggeling,Cognitive neuroscience,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17342,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",j eggeling,Metabolism,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17343,"Flexible pricing plans are commonly observed in service industries. In this article, we argue that the presence of flexible pricing plans can be attributed to consumers being boundedly rational – these consumers do not always select the best available option; rather, they select better options more often. In our model, the seller faces consumers who are heterogeneous in their degrees of intertemporal inconsistency – their ultimate actions can be different from their intended actions. We show that, in response to these boundedly rational consumers, the seller may be able to extract more profit by setting different prices in different periods and allowing the consumers to self-select the period in which to pay. Moreover, a single pricing plan may emerge as an optimal pricing scheme even when the consumers are heterogeneous in their degrees of rationality and the seller is not fully aware of the consumers’ types. We further show that the pricing patterns depend primarily on the relative discounting factor between the seller and the consumers.
",wenbo cai,,2011.0,10.1057/rpm.2011.14,Journal of Revenue and Pricing Management,Cai2011,Not available,,Nature,Not available,Intertemporal pricing with boundedly rational consumers,98019832fd84f26b4d6ab9963bc8dbe4,http://dx.doi.org/10.1057/rpm.2011.14
17344,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",s fullbrunn,Cognitive neuroscience,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17345,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",s fullbrunn,Metabolism,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17346,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",m kaczmarek,Cognitive neuroscience,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17347,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",m kaczmarek,Metabolism,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17348,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",r levinsky,Cognitive neuroscience,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17349,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",r levinsky,Metabolism,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17350,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",a peters,Cognitive neuroscience,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17351,"
Background:
Employment disparities are known to exist between lean and corpulent people; for example, corpulent people are less likely to be hired and receive lower wages. The reasons for these disparities between weight groups are not completely understood. We hypothesize (i) that economic decision making differs between lean and corpulent subjects, (ii) that these differences are influenced by people’s blood glucose concentrations and (iii) by the body weight of their opponents.
Methods:
A total of 20 lean and 20 corpulent men were examined; they performed a large set of economic games (ultimatum game, trust game and risk game) under euglycemic and hypoglycemic conditions induced by the glucose clamp technique.
Results:
In the ultimatum game, lean men made less fair decisions and offered 16% less money than corpulent men during euglycemia (P=0.042). During hypoglycemia, study participants of both weight groups accepted smaller amounts of money than during euglycemia (P=0.031), indicating that a lack of energy makes subjects behave more like a Homo Economicus. In the trust game, lean men allocated twice as much money to lean as to corpulent trustees during hypoglycemia (P<0.001). Risk-seeking behavior did not differ between lean and corpulent men.
Conclusion:
Our data show that economic decision making is affected by both the body weight of the participants and the body weight of their opponents, and that blood glucose concentrations should be taken into consideration when analyzing economic decision making. When relating these results to the working environment, the weight bias in economic decision making may also be relevant for employment disparities.
",a peters,Metabolism,2016.0,10.1038/ijo.2016.134,International Journal of Obesity,Kubera2016,Not available,,Nature,Not available,Differences in fairness and trust between lean and corpulent men,96787b8decb236e6425f67d1805845d9,http://dx.doi.org/10.1038/ijo.2016.134
17352,"In this paper, I analyze an inspection game between an insurer and an infinite sequence of policyholders, who can try to misrepresent relevant information in order to obtain coverage or a lower insurance premium. Because claim-auditing is costly for the insurer, an ex-post moral hazard problem arises. I find that the repeated game effect serves as a commitment device, allowing the insurer to deter fraud completely (for a sufficiently high discount rate) but only when the policyholders observe past auditing strategies. Under weaker observability conditions, only partial efficiency gains are generally possible. I conclude that the insurers should spend resources on signaling their anti-fraud attempts to the potential policyholders. Similar conclusions can be drawn with respect to conceptually similar problems, such as tax evasion.
",michal krawczyk,,2009.0,10.1057/grir.2009.1,The Geneva Risk and Insurance Review,Krawczyk2009,Not available,,Nature,Not available,The Role of Repetition and Observability in Deterring Insurance Fraud,b481193b6537528fc8a945055b84300d,http://dx.doi.org/10.1057/grir.2009.1
17353,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",christopher hill,Social behaviour,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17354,"Flexible pricing plans are commonly observed in service industries. In this article, we argue that the presence of flexible pricing plans can be attributed to consumers being boundedly rational – these consumers do not always select the best available option; rather, they select better options more often. In our model, the seller faces consumers who are heterogeneous in their degrees of intertemporal inconsistency – their ultimate actions can be different from their intended actions. We show that, in response to these boundedly rational consumers, the seller may be able to extract more profit by setting different prices in different periods and allowing the consumers to self-select the period in which to pay. Moreover, a single pricing plan may emerge as an optimal pricing scheme even when the consumers are heterogeneous in their degrees of rationality and the seller is not fully aware of the consumers’ types. We further show that the pricing patterns depend primarily on the relative discounting factor between the seller and the consumers.
",ying-ju chen,,2011.0,10.1057/rpm.2011.14,Journal of Revenue and Pricing Management,Cai2011,Not available,,Nature,Not available,Intertemporal pricing with boundedly rational consumers,98019832fd84f26b4d6ab9963bc8dbe4,http://dx.doi.org/10.1057/rpm.2011.14
17355,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",christopher hill,Cortex,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17356,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",christopher hill,Decision,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17357,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",shinsuke suzuki,Social behaviour,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17358,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",shinsuke suzuki,Cortex,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17359,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",shinsuke suzuki,Decision,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17360,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",rafael polania,Social behaviour,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17361,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",rafael polania,Cortex,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17362,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",rafael polania,Decision,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17363,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",marius moisa,Social behaviour,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17364,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",marius moisa,Cortex,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17364,"This paper examines the Sachs-Woo hypothesis that the gradual approach to reform, though successful in China, was not possible in the USSR because of structural differences between these two economies. To examine this hypothesis, this paper abstracts from the issue of structural differences by focusing on the industrial sector only and compares the state-owned enterprise (SOE) reforms in China under Deng and in the USSR under Gorbachev. Apart from concerning the same sector, these reforms were roughly contemporaneous, and hence their comparison provides a suitable test of the Sachs-Woo hypothesis. The test shows that the hypothesis does not hold up.
",nazrul islam,,2011.0,10.1057/ces.2010.30,Comparative Economic Studies,Islam2011,Not available,,Nature,Not available,Was the Gradual Approach Not Possible in the USSR? A Critique of the Sachs-Woo ‘Impossibility Hypothesis’,0d19334356c74faa862b79043e398baa,http://dx.doi.org/10.1057/ces.2010.30
17366,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",marius moisa,Decision,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17367,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",john o'doherty,Social behaviour,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17368,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",john o'doherty,Cortex,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17369,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",john o'doherty,Decision,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17370,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",christian ruff,Social behaviour,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17371,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",christian ruff,Cortex,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17372,"During competitive interactions, humans have to estimate the impact of their own actions on their opponent's strategy. Here we provide evidence that neural computations in the right temporoparietal junction (rTPJ) and interconnected structures are causally involved in this process. By combining inhibitory continuous theta-burst transcranial magnetic stimulation with model-based functional MRI, we show that disrupting neural excitability in the rTPJ reduces behavioral and neural indices of mentalizing-related computations, as well as functional connectivity of the rTPJ with ventral and dorsal parts of the medial prefrontal cortex. These results provide a causal demonstration that neural computations instantiated in the rTPJ are neurobiological prerequisites for the ability to integrate opponent beliefs into strategic choice, through system-level interaction within the valuation and mentalizing networks.
",christian ruff,Decision,2017.0,10.1038/nn.4602,Nature Neuroscience,Hill2017,Not available,,Nature,Not available,A causal account of the brain network computations underlying strategic social behavior,f7fc81fa04f472e8762496ac94b92757,http://dx.doi.org/10.1038/nn.4602
17373,"Recent developments in the theoretical and empirical analysis of balance of payments crises are reviewed. A simple analytical model highlighting the process leading to such crises is first developed. The basic framework is then extended to deal with a variety of issues, including alternative postcollapse regimes, uncertainty, real sector effects, external borrowing and capital controls, imperfect asset substitutability, sticky prices, and endogenous policy switches. Empirical evidence on the collapse of exchange rate regimes is also examined, and the major implications of the analysis for macroeconomic policy are discussed.
",pierre-richard agenor,,1992.0,10.2307/3867063,Staff Papers - International Monetary Fund,Agénor1992,Not available,,Nature,Not available,Speculative Attacks and Models of Balance of Payments Crises,6b4112b4b6c98258df21fbce514b1515,http://dx.doi.org/10.2307/3867063
17374,"Recent developments in the theoretical and empirical analysis of balance of payments crises are reviewed. A simple analytical model highlighting the process leading to such crises is first developed. The basic framework is then extended to deal with a variety of issues, including alternative postcollapse regimes, uncertainty, real sector effects, external borrowing and capital controls, imperfect asset substitutability, sticky prices, and endogenous policy switches. Empirical evidence on the collapse of exchange rate regimes is also examined, and the major implications of the analysis for macroeconomic policy are discussed.
",jagdeep bhandari,,1992.0,10.2307/3867063,Staff Papers - International Monetary Fund,Agénor1992,Not available,,Nature,Not available,Speculative Attacks and Models of Balance of Payments Crises,6b4112b4b6c98258df21fbce514b1515,http://dx.doi.org/10.2307/3867063
17375,"Recent developments in the theoretical and empirical analysis of balance of payments crises are reviewed. A simple analytical model highlighting the process leading to such crises is first developed. The basic framework is then extended to deal with a variety of issues, including alternative postcollapse regimes, uncertainty, real sector effects, external borrowing and capital controls, imperfect asset substitutability, sticky prices, and endogenous policy switches. Empirical evidence on the collapse of exchange rate regimes is also examined, and the major implications of the analysis for macroeconomic policy are discussed.
",robert flood,,1992.0,10.2307/3867063,Staff Papers - International Monetary Fund,Agénor1992,Not available,,Nature,Not available,Speculative Attacks and Models of Balance of Payments Crises,6b4112b4b6c98258df21fbce514b1515,http://dx.doi.org/10.2307/3867063
17376,"The relation between IMF conditionality and country ownership of assistance programs is considered from a political economy perspective, focusing on the question of why conditionality is needed if it is in a country's best interests to undertake the reform program. It is argued that heterogeneity of interests must form the basis of any discussion of conditionality and ownership. The paper stresses a conflict between a reformist government and domestic interest groups that oppose reform, leading to a distinction between government and country ownership of a program. After discussing conceptual issues, I present a model of lending and policy reform that illustrates the effects of unconditional and conditional assistance first without and then with political constraints. It is shown that conditionality can play a key role even when the IMF and authorities agree on the goals of an assistance program.
",allan drazen,,2002.0,10.2307/3872471,IMF Staff Papers,Drazen2002,Not available,,Nature,Not available,Conditionality and Ownership in IMF Lending: A Political Economy Approach,471ab58fd7cc63369a9238232af9e2fc,http://dx.doi.org/10.2307/3872471
17377,"A fundamental problem in many disciplines is the classification of objects in a domain of interest into a taxonomy. Developing a taxonomy, however, is a complex process that has not been adequately addressed in the information systems (IS) literature. The purpose of this paper is to present a method for taxonomy development that can be used in IS. First, this paper demonstrates through a comprehensive literature survey that taxonomy development in IS has largely been ad hoc. Then the paper defines the problem of taxonomy development. Next, the paper presents a method for taxonomy development that is based on taxonomy development literature in other disciplines and shows that the method has certain desirable qualities. Finally, the paper demonstrates the efficacy of the method by developing a taxonomy in a domain in IS.
",robert nickerson,,2012.0,10.1057/ejis.2012.26,European Journal of Information Systems,Nickerson2012,Not available,,Nature,Not available,A method for taxonomy development and its application in information systems,2a1ea7c354bff3cdb212aa1626babf3a,http://dx.doi.org/10.1057/ejis.2012.26
17378,"A fundamental problem in many disciplines is the classification of objects in a domain of interest into a taxonomy. Developing a taxonomy, however, is a complex process that has not been adequately addressed in the information systems (IS) literature. The purpose of this paper is to present a method for taxonomy development that can be used in IS. First, this paper demonstrates through a comprehensive literature survey that taxonomy development in IS has largely been ad hoc. Then the paper defines the problem of taxonomy development. Next, the paper presents a method for taxonomy development that is based on taxonomy development literature in other disciplines and shows that the method has certain desirable qualities. Finally, the paper demonstrates the efficacy of the method by developing a taxonomy in a domain in IS.
",upkar varshney,,2012.0,10.1057/ejis.2012.26,European Journal of Information Systems,Nickerson2012,Not available,,Nature,Not available,A method for taxonomy development and its application in information systems,2a1ea7c354bff3cdb212aa1626babf3a,http://dx.doi.org/10.1057/ejis.2012.26
17379,"A fundamental problem in many disciplines is the classification of objects in a domain of interest into a taxonomy. Developing a taxonomy, however, is a complex process that has not been adequately addressed in the information systems (IS) literature. The purpose of this paper is to present a method for taxonomy development that can be used in IS. First, this paper demonstrates through a comprehensive literature survey that taxonomy development in IS has largely been ad hoc. Then the paper defines the problem of taxonomy development. Next, the paper presents a method for taxonomy development that is based on taxonomy development literature in other disciplines and shows that the method has certain desirable qualities. Finally, the paper demonstrates the efficacy of the method by developing a taxonomy in a domain in IS.
",jan muntermann,,2012.0,10.1057/ejis.2012.26,European Journal of Information Systems,Nickerson2012,Not available,,Nature,Not available,A method for taxonomy development and its application in information systems,2a1ea7c354bff3cdb212aa1626babf3a,http://dx.doi.org/10.1057/ejis.2012.26
17380,"Both the use of Web sites and the empirical knowledge as to what constitutes effective Web site design has grown exponentially in recent years. The aim of the current article is to outline the history and key elements of Web site design in an e-commerce context – primarily in the period 2002–2012. It was in 2002 that a Special Issue of ISR was focused on ‘Measuring e-Commerce in Net-Enabled Organizations.’ Before this, work was conducted on Web site design, but much of it was anecdotal. Systematic, empirical research and modeling relating Web site design to dependent variables like trust, satisfaction, and loyalty until then had not received substantial focus – at least in the information systems domain. In addition to an overview of empirical findings, this article has a practical focus on what designers must know about Web site elements if they are to provide compelling user experiences, taking into account the site’s likely users. To this end, the article elaborates components of effective Web site design, user characteristics, and the online context that impact Web usage and acceptance, and design issues as they are relevant to diverse users including those in global markets. Web site elements that result in positive business impact are articulated. This retrospective on Web site design concludes with an overview of future research directions and current developments.
",dianne cyr,,2014.0,10.1057/jit.2013.25,Journal of Information Technology,Cyr2014,Not available,,Nature,Not available,Return visits: a review of how Web site design can engender visitor loyalty,8cca1b3c99d4b8c3239f6c8d9f39e269,http://dx.doi.org/10.1057/jit.2013.25
17381,"This paper explores the effects of capital controls and policies regulating interest rates and the exchange rate in a model of economic transition applied to China. It builds on Song, Storesletten, and Zilibotti (2011) who construct a growth model consistent with salient features of the recent Chinese growth experience: high output growth, sustained returns on capital investment, extensive reallocation within the manufacturing sector, sluggish wage growth, and accumulation of a large trade surplus. The salient features of the theory are asymmetric financial imperfections and heterogeneous productivity across private and state-owned firms. Capital controls and regulation of banks’ deposit rates stifle competition in the banking sector and hamper the lending to productive private firms. Removing such regulation would accelerate the growth in productivity and output. A temporarily undervalued exchange rate reduces real wages and consumption, stimulating investments in the high-productivity entrepreneurial sector. This fosters productivity growth and a trade surplus. A high interest rate mitigates the disadvantage of financially constrained firms, reduces wages, and increases the speed of transition from low- to high-productivity firms.
",zheng song,,2014.0,10.1057/imfer.2014.18,IMF Economic Review,Song2014,Not available,,Nature,Not available,Growing (with Capital Controls) like China,1742456570d016fdc22b987299d47d8f,http://dx.doi.org/10.1057/imfer.2014.18
17382,"This paper explores the effects of capital controls and policies regulating interest rates and the exchange rate in a model of economic transition applied to China. It builds on Song, Storesletten, and Zilibotti (2011) who construct a growth model consistent with salient features of the recent Chinese growth experience: high output growth, sustained returns on capital investment, extensive reallocation within the manufacturing sector, sluggish wage growth, and accumulation of a large trade surplus. The salient features of the theory are asymmetric financial imperfections and heterogeneous productivity across private and state-owned firms. Capital controls and regulation of banks’ deposit rates stifle competition in the banking sector and hamper the lending to productive private firms. Removing such regulation would accelerate the growth in productivity and output. A temporarily undervalued exchange rate reduces real wages and consumption, stimulating investments in the high-productivity entrepreneurial sector. This fosters productivity growth and a trade surplus. A high interest rate mitigates the disadvantage of financially constrained firms, reduces wages, and increases the speed of transition from low- to high-productivity firms.
",kjetil storesletten,,2014.0,10.1057/imfer.2014.18,IMF Economic Review,Song2014,Not available,,Nature,Not available,Growing (with Capital Controls) like China,1742456570d016fdc22b987299d47d8f,http://dx.doi.org/10.1057/imfer.2014.18
17383,"This paper explores the effects of capital controls and policies regulating interest rates and the exchange rate in a model of economic transition applied to China. It builds on Song, Storesletten, and Zilibotti (2011) who construct a growth model consistent with salient features of the recent Chinese growth experience: high output growth, sustained returns on capital investment, extensive reallocation within the manufacturing sector, sluggish wage growth, and accumulation of a large trade surplus. The salient features of the theory are asymmetric financial imperfections and heterogeneous productivity across private and state-owned firms. Capital controls and regulation of banks’ deposit rates stifle competition in the banking sector and hamper the lending to productive private firms. Removing such regulation would accelerate the growth in productivity and output. A temporarily undervalued exchange rate reduces real wages and consumption, stimulating investments in the high-productivity entrepreneurial sector. This fosters productivity growth and a trade surplus. A high interest rate mitigates the disadvantage of financially constrained firms, reduces wages, and increases the speed of transition from low- to high-productivity firms.
",fabrizio zilibotti,,2014.0,10.1057/imfer.2014.18,IMF Economic Review,Song2014,Not available,,Nature,Not available,Growing (with Capital Controls) like China,1742456570d016fdc22b987299d47d8f,http://dx.doi.org/10.1057/imfer.2014.18
17384,"Humans can reflect on decisions and report variable levels of confidence. But why maintain an explicit representation of confidence for choices that have already been made and therefore cannot be undone? Here we show that an explicit representation of confidence is harnessed for subsequent changes of mind. Specifically, when confidence is low, participants are more likely to change their minds when the same choice is presented again, an effect that is most pronounced in participants with greater fidelity in their confidence reports. Furthermore, we show that choices reported with high confidence follow a more consistent pattern (fewer transitivity violations). Finally, by tracking participants’ eye movements, we demonstrate that lower-level gaze dynamics can track uncertainty but do not directly impact changes of mind. These results suggest that an explicit and accurate representation of confidence has a positive impact on the quality of future value-based decisions.
",tomas folke,,2016.0,10.1038/s41562-016-0002,Nature Human Behaviour,Folke2016,Not available,,Nature,Not available,Explicit representation of confidence informs future value-based decisions,c422a74241a64da65c2198b33fdd17c0,http://dx.doi.org/10.1038/s41562-016-0002
17385,"We report feedback-assisted adaptive multicasting from a single Gaussian mode to multiple orbital angular momentum (OAM) modes using a single phase-only spatial light modulator loaded with a complex phase pattern. By designing and optimizing the complex phase pattern through the adaptive correction of feedback coefficients, the power of each multicast OAM channel can be arbitrarily controlled. We experimentally demonstrate power-controllable multicasting from a single Gaussian mode to two and six OAM modes with different target power distributions. Equalized power multicasting, “up-down” power multicasting and “ladder” power multicasting are realized in the experiment. The difference between measured power distributions and target power distributions is assessed to be less than 1 dB. Moreover, we demonstrate data-carrying OAM multicasting by employing orthogonal frequency-division multiplexing 64-ary quadrature amplitude modulation (OFDM 64-QAM) signal. The measured bit-error rate curves and observed optical signal-to-noise ratio penalties show favorable operation performance of the proposed adaptive power-controllable OAM multicasting.
",shuhui li,Optical techniques,2015.0,10.1038/srep09677,Scientific Reports,Li2015,Not available,,Nature,Not available,Adaptive power-controllable orbital angular momentum (OAM) multicasting,d58267deae8d8e56bb2cd1fed3aecfec,http://dx.doi.org/10.1038/srep09677
17386,"We report feedback-assisted adaptive multicasting from a single Gaussian mode to multiple orbital angular momentum (OAM) modes using a single phase-only spatial light modulator loaded with a complex phase pattern. By designing and optimizing the complex phase pattern through the adaptive correction of feedback coefficients, the power of each multicast OAM channel can be arbitrarily controlled. We experimentally demonstrate power-controllable multicasting from a single Gaussian mode to two and six OAM modes with different target power distributions. Equalized power multicasting, “up-down” power multicasting and “ladder” power multicasting are realized in the experiment. The difference between measured power distributions and target power distributions is assessed to be less than 1 dB. Moreover, we demonstrate data-carrying OAM multicasting by employing orthogonal frequency-division multiplexing 64-ary quadrature amplitude modulation (OFDM 64-QAM) signal. The measured bit-error rate curves and observed optical signal-to-noise ratio penalties show favorable operation performance of the proposed adaptive power-controllable OAM multicasting.
",shuhui li,Applied optics,2015.0,10.1038/srep09677,Scientific Reports,Li2015,Not available,,Nature,Not available,Adaptive power-controllable orbital angular momentum (OAM) multicasting,d58267deae8d8e56bb2cd1fed3aecfec,http://dx.doi.org/10.1038/srep09677
17387,"We report feedback-assisted adaptive multicasting from a single Gaussian mode to multiple orbital angular momentum (OAM) modes using a single phase-only spatial light modulator loaded with a complex phase pattern. By designing and optimizing the complex phase pattern through the adaptive correction of feedback coefficients, the power of each multicast OAM channel can be arbitrarily controlled. We experimentally demonstrate power-controllable multicasting from a single Gaussian mode to two and six OAM modes with different target power distributions. Equalized power multicasting, “up-down” power multicasting and “ladder” power multicasting are realized in the experiment. The difference between measured power distributions and target power distributions is assessed to be less than 1 dB. Moreover, we demonstrate data-carrying OAM multicasting by employing orthogonal frequency-division multiplexing 64-ary quadrature amplitude modulation (OFDM 64-QAM) signal. The measured bit-error rate curves and observed optical signal-to-noise ratio penalties show favorable operation performance of the proposed adaptive power-controllable OAM multicasting.
",jian wang,Optical techniques,2015.0,10.1038/srep09677,Scientific Reports,Li2015,Not available,,Nature,Not available,Adaptive power-controllable orbital angular momentum (OAM) multicasting,d58267deae8d8e56bb2cd1fed3aecfec,http://dx.doi.org/10.1038/srep09677
17388,"We report feedback-assisted adaptive multicasting from a single Gaussian mode to multiple orbital angular momentum (OAM) modes using a single phase-only spatial light modulator loaded with a complex phase pattern. By designing and optimizing the complex phase pattern through the adaptive correction of feedback coefficients, the power of each multicast OAM channel can be arbitrarily controlled. We experimentally demonstrate power-controllable multicasting from a single Gaussian mode to two and six OAM modes with different target power distributions. Equalized power multicasting, “up-down” power multicasting and “ladder” power multicasting are realized in the experiment. The difference between measured power distributions and target power distributions is assessed to be less than 1 dB. Moreover, we demonstrate data-carrying OAM multicasting by employing orthogonal frequency-division multiplexing 64-ary quadrature amplitude modulation (OFDM 64-QAM) signal. The measured bit-error rate curves and observed optical signal-to-noise ratio penalties show favorable operation performance of the proposed adaptive power-controllable OAM multicasting.
",jian wang,Applied optics,2015.0,10.1038/srep09677,Scientific Reports,Li2015,Not available,,Nature,Not available,Adaptive power-controllable orbital angular momentum (OAM) multicasting,d58267deae8d8e56bb2cd1fed3aecfec,http://dx.doi.org/10.1038/srep09677
17389,"This paper puts forward a case for using hermeneutics in information systems (IS) research. Unlike case study and action research, which could now be described as ‘mainstream’ interpretive research in IS, hermeneutics is neither well accepted nor much practiced in IS research. A suitable hermeneutic approach is described in detail. A brief account of hermeneutics in action is provided through a description of research investigating notions of convenience in home Internet shopping. The hermeneutic circle enabled the researcher to reveal unexpectedly the practice of using surrogates in Internet shopping and this example illustrates some of the potential of the approach in IS research.
",melissa cole,,2007.0,10.1057/palgrave.ejis.3000725,European Journal of Information Systems,Cole2007,Not available,,Nature,Not available,The potential of hermeneutics in information systems research,30472e7164e3136b31f9db7343e690d9,http://dx.doi.org/10.1057/palgrave.ejis.3000725
17390,"This paper puts forward a case for using hermeneutics in information systems (IS) research. Unlike case study and action research, which could now be described as ‘mainstream’ interpretive research in IS, hermeneutics is neither well accepted nor much practiced in IS research. A suitable hermeneutic approach is described in detail. A brief account of hermeneutics in action is provided through a description of research investigating notions of convenience in home Internet shopping. The hermeneutic circle enabled the researcher to reveal unexpectedly the practice of using surrogates in Internet shopping and this example illustrates some of the potential of the approach in IS research.
",david avison,,2007.0,10.1057/palgrave.ejis.3000725,European Journal of Information Systems,Cole2007,Not available,,Nature,Not available,The potential of hermeneutics in information systems research,30472e7164e3136b31f9db7343e690d9,http://dx.doi.org/10.1057/palgrave.ejis.3000725
17391,"This paper traces the journey of the marketing mix paradigm from its inception through continuous debate and discussion over the years. It traces the evolution of marketing mix components and the transformation of the marketing paradigm as society, technology, media, information and money have changed. A significant evolution of technology has changed the face of marketing. The paper uses inputs from marketing experts to trace the all-encompassing and unstoppable expansion of cyberspace that is changing every single dimension of consumers’ lifestyles. The paper outlines the acceleration of the information revolution with the advent of the ‘Read-Write Web’ or ‘Web 2.0’. Within this emergent virtual domain, corporate blogs, online communities, social networks and wikis have redefined the routine lives of individuals and changed the way people relate to information, brands, other people and even themselves. The discussion further proceeds to address three important issues facing the world of marketing today: the implications of today’s technologically inspired environment for marketing in the twenty-first century, the conceptualization of a customer mix as a pre-requisite for the marketing mix and, in conclusion, finally proposes an update to the marketing mix itself. In addition to this, the paper also traces the incorporation of the concepts of relationship marketing, customer relationship management, co-creation, salesforce automation and digital marketing in current-day marketing environments.
",graham jackson,,2016.0,10.1057/dddmp.2016.3,"Journal of Direct, Data and Digital Marketing Practice",Jackson2016,Not available,,Nature,Not available,Dawn of the digital age and the evolution of the marketing mix,6b48f5eb3a7ca84e673de2ef4b66c701,http://dx.doi.org/10.1057/dddmp.2016.3
17392,"This paper traces the journey of the marketing mix paradigm from its inception through continuous debate and discussion over the years. It traces the evolution of marketing mix components and the transformation of the marketing paradigm as society, technology, media, information and money have changed. A significant evolution of technology has changed the face of marketing. The paper uses inputs from marketing experts to trace the all-encompassing and unstoppable expansion of cyberspace that is changing every single dimension of consumers’ lifestyles. The paper outlines the acceleration of the information revolution with the advent of the ‘Read-Write Web’ or ‘Web 2.0’. Within this emergent virtual domain, corporate blogs, online communities, social networks and wikis have redefined the routine lives of individuals and changed the way people relate to information, brands, other people and even themselves. The discussion further proceeds to address three important issues facing the world of marketing today: the implications of today’s technologically inspired environment for marketing in the twenty-first century, the conceptualization of a customer mix as a pre-requisite for the marketing mix and, in conclusion, finally proposes an update to the marketing mix itself. In addition to this, the paper also traces the incorporation of the concepts of relationship marketing, customer relationship management, co-creation, salesforce automation and digital marketing in current-day marketing environments.
",vandana ahuja,,2016.0,10.1057/dddmp.2016.3,"Journal of Direct, Data and Digital Marketing Practice",Jackson2016,Not available,,Nature,Not available,Dawn of the digital age and the evolution of the marketing mix,6b48f5eb3a7ca84e673de2ef4b66c701,http://dx.doi.org/10.1057/dddmp.2016.3
17393,"In this paper we review the recent IS literature on knowledge and consider different assumptions that underpin different approaches to this broad research area. In doing this we contrast those who focus on knowledge management with those who focus on knowing as practice and examine how contexts, processes and purposes need to be considered whichever approach to knowledge one is adopting. We also identify how recent IT developments, especially in relation to social software and the digitization of everything, are presenting new opportunities (and challenges) for how organizations can manage both knowledge and knowledge work. This presents IS scholars with new research agendas for examining and understanding the relationships between technology, organization and society.
",sue newell,,2014.0,10.1057/jit.2014.12,Journal of Information Technology,Newell2014,Not available,,Nature,Not available,Managing knowledge and managing knowledge work: what we know and what the future holds,0bddaf028c780bbde4956eb67506b137,http://dx.doi.org/10.1057/jit.2014.12
17394,"Increasingly, brands are able to embed themselves in consumer brand-centric communities. Subsequently, through networks and socialisation processes, convergences around a brand serve as conduits for defining meaning and identity. Using a methodological approach drawing from inductive reasoning and syllogisms, as a basis for conceptual metaphor theory and critical discourse analysis, evidence is gathered from literature reviews – supported by anecdotal evidence, personal observations and experiences. From this, the authors examine the positioning of brands within brand communities, exploring how they can add meaning and authenticity to their consumer-centric friendships. A proposed test of this friendship lies in how brands respond to consumers, when relationships fall short of expectations, and subsequently how these are addressed. The position of the authors is that the more those brands push themselves towards friendships, consumers will respond by expecting some form of loss mitigation, associated more with the intangible aspects of that brand – led by experiential elements and reputation. In the light of this, brands should consider their role in maintaining friendships, beyond consumption-based loyalty and CSR – towards developing a demonstrable Brand Conscience, supported by stakeholders. Furthermore, new thinking suggests that brands should attempt to mitigate the non-functional and emotional losses that consumers may feel, supported by a Brand Conscience.
",jonathan wilson,,2011.0,10.1057/bm.2011.4,Journal of Brand Management,Wilson2011,Not available,,Nature,Not available,Friends or Freeloaders? Encouraging brand conscience and introducing the concept of emotion-based consumer loss mitigation,91fa39e48ef9852841cc8ef62fb7f161,http://dx.doi.org/10.1057/bm.2011.4
17395,"Humans can reflect on decisions and report variable levels of confidence. But why maintain an explicit representation of confidence for choices that have already been made and therefore cannot be undone? Here we show that an explicit representation of confidence is harnessed for subsequent changes of mind. Specifically, when confidence is low, participants are more likely to change their minds when the same choice is presented again, an effect that is most pronounced in participants with greater fidelity in their confidence reports. Furthermore, we show that choices reported with high confidence follow a more consistent pattern (fewer transitivity violations). Finally, by tracking participants’ eye movements, we demonstrate that lower-level gaze dynamics can track uncertainty but do not directly impact changes of mind. These results suggest that an explicit and accurate representation of confidence has a positive impact on the quality of future value-based decisions.
",catrine jacobsen,,2016.0,10.1038/s41562-016-0002,Nature Human Behaviour,Folke2016,Not available,,Nature,Not available,Explicit representation of confidence informs future value-based decisions,c422a74241a64da65c2198b33fdd17c0,http://dx.doi.org/10.1038/s41562-016-0002
17396,"Increasingly, brands are able to embed themselves in consumer brand-centric communities. Subsequently, through networks and socialisation processes, convergences around a brand serve as conduits for defining meaning and identity. Using a methodological approach drawing from inductive reasoning and syllogisms, as a basis for conceptual metaphor theory and critical discourse analysis, evidence is gathered from literature reviews – supported by anecdotal evidence, personal observations and experiences. From this, the authors examine the positioning of brands within brand communities, exploring how they can add meaning and authenticity to their consumer-centric friendships. A proposed test of this friendship lies in how brands respond to consumers, when relationships fall short of expectations, and subsequently how these are addressed. The position of the authors is that the more those brands push themselves towards friendships, consumers will respond by expecting some form of loss mitigation, associated more with the intangible aspects of that brand – led by experiential elements and reputation. In the light of this, brands should consider their role in maintaining friendships, beyond consumption-based loyalty and CSR – towards developing a demonstrable Brand Conscience, supported by stakeholders. Furthermore, new thinking suggests that brands should attempt to mitigate the non-functional and emotional losses that consumers may feel, supported by a Brand Conscience.
",joseph morgan,,2011.0,10.1057/bm.2011.4,Journal of Brand Management,Wilson2011,Not available,,Nature,Not available,Friends or Freeloaders? Encouraging brand conscience and introducing the concept of emotion-based consumer loss mitigation,91fa39e48ef9852841cc8ef62fb7f161,http://dx.doi.org/10.1057/bm.2011.4
17397,"While the primary effort of all retailers is to generate that initial sales, return management is generally identified as a secondary issue that does not necessarily need the same level of planning and proactive strategies. In this article, we position return management as a process that is at the interface of both inventory and revenue management by explicitly incorporating the return policy of the retailer in consumer's valuation. We consider a retailer that sells a fixed amount of inventory over a finite horizon. We assume that return policy is a decision variable that can be changed dynamically at every period. While flexible return policies generate more demand, it also induces more returns. We characterize the optimal dynamic return policies based on two costs of return scenarios. We show a conditional monotonicity result and discuss how these return policies change with respect to inventory and time. We then propose a heuristic and prove that it is asymptotically optimal. We also study the joint dynamic pricing and dynamic return management problem in the same setting and propose two more heuristics whose performance is tested numerically and found to be close to optimal for higher inventory levels. We extend our model to multiple competing retailers and characterize the resulting equilibrium return policy and prices.
",mehmet altug,,2012.0,10.1057/rpm.2012.37,Journal of Revenue and Pricing Management,Altug2012,Not available,,Nature,Not available,Optimal dynamic return management of fixed inventories,ae83f1cd3345e6bac1da2498027c766e,http://dx.doi.org/10.1057/rpm.2012.37
17398,"Reputation is an intangible asset that directly affects the market value of the firm. Although reputation evidences belief that the firm is on a sustainable course, it is built on the trust established with all stakeholders through past proper behaviour. It proves more resilient than one might think at first but even menial misconducts, if repeated, can lead to a downfall. The demise of a few can engulf a whole industry when the transactions are based on trust in the fulfilment of future promises. This is why reputation has become an object of research for several branches of management sciences. The key drivers are still in need of refinement. Nevertheless, they can be summarized in one word: authenticity.
",sophie gaultier-gaillard,,2006.0,10.1057/palgrave.gpp.2510090,The Geneva Papers on Risk and Insurance Issues and Practice,Gaultier-Gaillard2006,Not available,,Nature,Not available,Risks to Reputation: A Global Approach,d79c935799c00bf034e01dad4b7de4c2,http://dx.doi.org/10.1057/palgrave.gpp.2510090
17399,"Reputation is an intangible asset that directly affects the market value of the firm. Although reputation evidences belief that the firm is on a sustainable course, it is built on the trust established with all stakeholders through past proper behaviour. It proves more resilient than one might think at first but even menial misconducts, if repeated, can lead to a downfall. The demise of a few can engulf a whole industry when the transactions are based on trust in the fulfilment of future promises. This is why reputation has become an object of research for several branches of management sciences. The key drivers are still in need of refinement. Nevertheless, they can be summarized in one word: authenticity.
",jean-paul louisot,,2006.0,10.1057/palgrave.gpp.2510090,The Geneva Papers on Risk and Insurance Issues and Practice,Gaultier-Gaillard2006,Not available,,Nature,Not available,Risks to Reputation: A Global Approach,d79c935799c00bf034e01dad4b7de4c2,http://dx.doi.org/10.1057/palgrave.gpp.2510090
17400,"Since the introduction of the Motivational Technology Acceptance Model in 1992, many researchers have considered both extrinsic and intrinsic motivation as antecedents of intent to use and actual use of a system. However, it has been a long-standing and largely unchallenged assumption that intrinsic motivation (i.e., fun or enjoyment) is a more dominant predictor of hedonic (fun) application use and that extrinsic motivation (i.e., usefulness) is a more dominant predictor of utilitarian (practical) application use. In this article, we probe whether system type serves as a boundary condition (i.e., moderator) for understanding an individual’s interaction with information technology. Specifically, we examine whether perceived enjoyment’s influence on perceived ease of use, perceived usefulness, intention, and use varies with system type. On the basis of a meta-analytic structural equation modeling analysis of 185 studies between 1992 and February 2011, our findings suggest intrinsic motivation is equally relevant for predicting intentions toward using and actual use of both hedonic and utilitarian systems. Therefore, our meta-analytic results call into question the rigidity of the assumption that system type is a ‘boundary condition’ for understanding individuals’ interaction with information technology. The implications of these results for research and practice are discussed.
",jennifer gerow,,2012.0,10.1057/ejis.2012.25,European Journal of Information Systems,Gerow2012,Not available,,Nature,Not available,Can we have fun @ work? The role of intrinsic motivation for utilitarian systems,297beaf728219088d9b587c262ffbf11,http://dx.doi.org/10.1057/ejis.2012.25
17401,"Since the introduction of the Motivational Technology Acceptance Model in 1992, many researchers have considered both extrinsic and intrinsic motivation as antecedents of intent to use and actual use of a system. However, it has been a long-standing and largely unchallenged assumption that intrinsic motivation (i.e., fun or enjoyment) is a more dominant predictor of hedonic (fun) application use and that extrinsic motivation (i.e., usefulness) is a more dominant predictor of utilitarian (practical) application use. In this article, we probe whether system type serves as a boundary condition (i.e., moderator) for understanding an individual’s interaction with information technology. Specifically, we examine whether perceived enjoyment’s influence on perceived ease of use, perceived usefulness, intention, and use varies with system type. On the basis of a meta-analytic structural equation modeling analysis of 185 studies between 1992 and February 2011, our findings suggest intrinsic motivation is equally relevant for predicting intentions toward using and actual use of both hedonic and utilitarian systems. Therefore, our meta-analytic results call into question the rigidity of the assumption that system type is a ‘boundary condition’ for understanding individuals’ interaction with information technology. The implications of these results for research and practice are discussed.
",ramakrishna ayyagari,,2012.0,10.1057/ejis.2012.25,European Journal of Information Systems,Gerow2012,Not available,,Nature,Not available,Can we have fun @ work? The role of intrinsic motivation for utilitarian systems,297beaf728219088d9b587c262ffbf11,http://dx.doi.org/10.1057/ejis.2012.25
17402,"Since the introduction of the Motivational Technology Acceptance Model in 1992, many researchers have considered both extrinsic and intrinsic motivation as antecedents of intent to use and actual use of a system. However, it has been a long-standing and largely unchallenged assumption that intrinsic motivation (i.e., fun or enjoyment) is a more dominant predictor of hedonic (fun) application use and that extrinsic motivation (i.e., usefulness) is a more dominant predictor of utilitarian (practical) application use. In this article, we probe whether system type serves as a boundary condition (i.e., moderator) for understanding an individual’s interaction with information technology. Specifically, we examine whether perceived enjoyment’s influence on perceived ease of use, perceived usefulness, intention, and use varies with system type. On the basis of a meta-analytic structural equation modeling analysis of 185 studies between 1992 and February 2011, our findings suggest intrinsic motivation is equally relevant for predicting intentions toward using and actual use of both hedonic and utilitarian systems. Therefore, our meta-analytic results call into question the rigidity of the assumption that system type is a ‘boundary condition’ for understanding individuals’ interaction with information technology. The implications of these results for research and practice are discussed.
",jason thatcher,,2012.0,10.1057/ejis.2012.25,European Journal of Information Systems,Gerow2012,Not available,,Nature,Not available,Can we have fun @ work? The role of intrinsic motivation for utilitarian systems,297beaf728219088d9b587c262ffbf11,http://dx.doi.org/10.1057/ejis.2012.25
17403,"Since the introduction of the Motivational Technology Acceptance Model in 1992, many researchers have considered both extrinsic and intrinsic motivation as antecedents of intent to use and actual use of a system. However, it has been a long-standing and largely unchallenged assumption that intrinsic motivation (i.e., fun or enjoyment) is a more dominant predictor of hedonic (fun) application use and that extrinsic motivation (i.e., usefulness) is a more dominant predictor of utilitarian (practical) application use. In this article, we probe whether system type serves as a boundary condition (i.e., moderator) for understanding an individual’s interaction with information technology. Specifically, we examine whether perceived enjoyment’s influence on perceived ease of use, perceived usefulness, intention, and use varies with system type. On the basis of a meta-analytic structural equation modeling analysis of 185 studies between 1992 and February 2011, our findings suggest intrinsic motivation is equally relevant for predicting intentions toward using and actual use of both hedonic and utilitarian systems. Therefore, our meta-analytic results call into question the rigidity of the assumption that system type is a ‘boundary condition’ for understanding individuals’ interaction with information technology. The implications of these results for research and practice are discussed.
",philip roth,,2012.0,10.1057/ejis.2012.25,European Journal of Information Systems,Gerow2012,Not available,,Nature,Not available,Can we have fun @ work? The role of intrinsic motivation for utilitarian systems,297beaf728219088d9b587c262ffbf11,http://dx.doi.org/10.1057/ejis.2012.25
17404,"Revenue management systems rely on customer data, and are thus affected by the absence of registered demand that arises when a product is no longer available. In the present work, we review the uncensoring (or unconstraining) techniques that have been proposed to deal with this issue, and develop a taxonomy based on their respective features. This study will be helpful in identifying the relative merits of these techniques, as well as avenues for future research.
",shadi azadeh,,2014.0,10.1057/rpm.2014.8,Journal of Revenue and Pricing Management,Azadeh2014,Not available,,Nature,Not available,A taxonomy of demand uncensoring methods in revenue management,b5f5a6ba431936a945b4de113bc94123,http://dx.doi.org/10.1057/rpm.2014.8
17405,"Revenue management systems rely on customer data, and are thus affected by the absence of registered demand that arises when a product is no longer available. In the present work, we review the uncensoring (or unconstraining) techniques that have been proposed to deal with this issue, and develop a taxonomy based on their respective features. This study will be helpful in identifying the relative merits of these techniques, as well as avenues for future research.
",patrice marcotte,,2014.0,10.1057/rpm.2014.8,Journal of Revenue and Pricing Management,Azadeh2014,Not available,,Nature,Not available,A taxonomy of demand uncensoring methods in revenue management,b5f5a6ba431936a945b4de113bc94123,http://dx.doi.org/10.1057/rpm.2014.8
17406,"This study relies on knowledge regarding the neuroplasticity of dual-system components that govern addiction and excessive behavior and suggests that alterations in the grey matter volumes, i.e., brain morphology, of specific regions of interest are associated with technology-related addictions. Using voxel based morphometry (VBM) applied to structural Magnetic Resonance Imaging (MRI) scans of twenty social network site (SNS) users with varying degrees of SNS addiction, we show that SNS addiction is associated with a presumably more efficient impulsive brain system, manifested through reduced grey matter volumes in the amygdala bilaterally (but not with structural differences in the Nucleus Accumbens). In this regard, SNS addiction is similar in terms of brain anatomy alterations to other (substance, gambling etc.) addictions. We also show that in contrast to other addictions in which the anterior-/ mid- cingulate cortex is impaired and fails to support the needed inhibition, which manifests through reduced grey matter volumes, this region is presumed to be healthy in our sample and its grey matter volume is positively correlated with one’s level of SNS addiction. These findings portray an anatomical morphology model of SNS addiction and point to brain morphology similarities and differences between technology addictions and substance and gambling addictions.
",qinghua he,Brain imaging,2017.0,10.1038/srep45064,Scientific Reports,He2017,Not available,,Nature,Not available,Brain anatomy alterations associated with Social Networking Site (SNS) addiction,355447eac1ec4959117118d905cd56d1,http://dx.doi.org/10.1038/srep45064
17407,"Humans can reflect on decisions and report variable levels of confidence. But why maintain an explicit representation of confidence for choices that have already been made and therefore cannot be undone? Here we show that an explicit representation of confidence is harnessed for subsequent changes of mind. Specifically, when confidence is low, participants are more likely to change their minds when the same choice is presented again, an effect that is most pronounced in participants with greater fidelity in their confidence reports. Furthermore, we show that choices reported with high confidence follow a more consistent pattern (fewer transitivity violations). Finally, by tracking participants’ eye movements, we demonstrate that lower-level gaze dynamics can track uncertainty but do not directly impact changes of mind. These results suggest that an explicit and accurate representation of confidence has a positive impact on the quality of future value-based decisions.
",stephen fleming,,2016.0,10.1038/s41562-016-0002,Nature Human Behaviour,Folke2016,Not available,,Nature,Not available,Explicit representation of confidence informs future value-based decisions,c422a74241a64da65c2198b33fdd17c0,http://dx.doi.org/10.1038/s41562-016-0002
17408,"Revenue management systems rely on customer data, and are thus affected by the absence of registered demand that arises when a product is no longer available. In the present work, we review the uncensoring (or unconstraining) techniques that have been proposed to deal with this issue, and develop a taxonomy based on their respective features. This study will be helpful in identifying the relative merits of these techniques, as well as avenues for future research.
",gilles savard,,2014.0,10.1057/rpm.2014.8,Journal of Revenue and Pricing Management,Azadeh2014,Not available,,Nature,Not available,A taxonomy of demand uncensoring methods in revenue management,b5f5a6ba431936a945b4de113bc94123,http://dx.doi.org/10.1057/rpm.2014.8
17410,"By analyzing previously overlooked fossils and by taking a second look at some old finds, paleontologists are providing the first glimpses of the actual behavior of the tyrannosaurs",gregory erickson,,2014.0,10.1038/scientificamericandinosaurs0514-38,Scientific American,Erickson2014,Not available,,Nature,Not available,Breathing Life into T. rex,7b6edd9d8b3a8ecd180e57d91d0364e7,http://dx.doi.org/10.1038/scientificamericandinosaurs0514-38
17411,"This paper looks at the short history of the Eurozone through the lens of an evolutionary approach to forming new institutions. The euro has operated as a currency without a state under the dominance of Germany. This by itself may be good news, as long as Germany does not shirk its growing responsibility for the euro’s future. This would require Germany to invest more in upgrading Eurozone institutions and balancing its dominance gains with the economic and political responsibilities that come with it. Germany’s resilience and dominant size within the EU may explain its ‘muddling-through’ approach toward the Eurozone crisis: doing enough to prevent the unraveling of the Eurozone while resisting policies that may mitigate the depth of the crisis if they involve short-run costs to Germany. We review several manifestations of this muddling-through process. Germany’s attitude toward the Eurozone resembles the attitude of the United States toward the Bretton Woods system in the 1960s – benign neglect of the growing tensions, which led to the ultimate demise of the Bretton Woods system. Chances are that unraveling the Eurozone would be much more costly than the end of the Bretton Woods regime. One hopes that the muddling-through process would work as stepping-stones toward a more perfect euro union, yet hope may not be enough to deliver it.
",joshua aizenman,,2015.0,10.1057/ces.2014.37,Comparative Economic Studies,Aizenman2015,Not available,,Nature,Not available,"The Eurocrisis: Muddling through, or on the Way to a More Perfect Euro Union?",c26897829ba3c0cfc9f1bc7c517d2179,http://dx.doi.org/10.1057/ces.2014.37
17413,"Financial crises have occurred periodically for hundreds of years, and Adam Smith had important insights into their causes. Although by no means all that we know about such crises has been derived from Smith, it is interesting and important to reflect on what he did know and how ignoring his warnings about the creation of excess liquidity has contributed to the current crisis. In addition to the complexity of contemporary finance and the role of central banks and other regulatory institutions, a major difference between Smith's day and ours is the emergence of “moral hazard” as an important policy issue and its corollary, “immoral results.” It is important to realize that the risks of financial crisis, moral hazard, and immoral results cannot be avoided by financial and accounting gimmicks, and that there is no substitute for adequate capital in the creation of liquidity.
",michael mussa,,2009.0,10.1057/be.2008.9,Business Economics,Mussa2009,Not available,,Nature,Not available,Adam Smith and the Political Economy of a Modern Financial Crisis,3114b535e9d3a838bab1d601b87e5f3b,http://dx.doi.org/10.1057/be.2008.9
17414,"INTRODUCTION
In recent years, the importance of income from broadcasting rights for professional sports clubs' revenues has increased significantly both in the U.S. and in Europe [see Cave and Crandall, 2001]. While up to the 1980s gate receipts have constituted the major pillar of revenues, this role has since been taken over by income out of broadcasting rights sales [Andreff and Staudohar, 2000].
",helmut dietl,,2007.0,10.1057/eej.2007.33,Eastern Economic Journal,Dietl2007,Not available,,Nature,Not available,Pay-TV Versus Free-TV: A Model of Sports Broadcasting Rights Sales,d9b8a36bb5699308f38010fb64230510,http://dx.doi.org/10.1057/eej.2007.33
17415,"INTRODUCTION
In recent years, the importance of income from broadcasting rights for professional sports clubs' revenues has increased significantly both in the U.S. and in Europe [see Cave and Crandall, 2001]. While up to the 1980s gate receipts have constituted the major pillar of revenues, this role has since been taken over by income out of broadcasting rights sales [Andreff and Staudohar, 2000].
",tariq hasan,,2007.0,10.1057/eej.2007.33,Eastern Economic Journal,Dietl2007,Not available,,Nature,Not available,Pay-TV Versus Free-TV: A Model of Sports Broadcasting Rights Sales,d9b8a36bb5699308f38010fb64230510,http://dx.doi.org/10.1057/eej.2007.33
17416,How Obama and his team can pass climate legislation and reach an international accord by December 2009,chris mooney,,2009.0,10.1038/scientificamericanearth0309-24,Scientific American,Mooney2009,Not available,,Nature,Not available,Winning the Carbon Game,42bea32177778e8422255211cd197b3d,http://dx.doi.org/10.1038/scientificamericanearth0309-24
17417,How Obama and his team can pass climate legislation and reach an international accord by December 2009,chris mooney,,2009.0,10.1038/scientificamericanearth0309-24,Scientific American,Mooney2009,Not available,,Nature,Not available,Winning the Carbon Game,42bea32177778e8422255211cd197b3d,http://dx.doi.org/10.1038/scientificamericanearth0309-24
17418,"Humans can reflect on decisions and report variable levels of confidence. But why maintain an explicit representation of confidence for choices that have already been made and therefore cannot be undone? Here we show that an explicit representation of confidence is harnessed for subsequent changes of mind. Specifically, when confidence is low, participants are more likely to change their minds when the same choice is presented again, an effect that is most pronounced in participants with greater fidelity in their confidence reports. Furthermore, we show that choices reported with high confidence follow a more consistent pattern (fewer transitivity violations). Finally, by tracking participants’ eye movements, we demonstrate that lower-level gaze dynamics can track uncertainty but do not directly impact changes of mind. These results suggest that an explicit and accurate representation of confidence has a positive impact on the quality of future value-based decisions.
",benedetto martino,,2016.0,10.1038/s41562-016-0002,Nature Human Behaviour,Folke2016,Not available,,Nature,Not available,Explicit representation of confidence informs future value-based decisions,c422a74241a64da65c2198b33fdd17c0,http://dx.doi.org/10.1038/s41562-016-0002
17419,"Nash equilibrium is widely present in various social disputes. As of now, in structured static populations, such as social networks, regular, and random graphs, the discussions on Nash equilibrium are quite limited. In a relatively stable static gaming network, a rational individual has to comprehensively consider all his/her opponents' strategies before they adopt a unified strategy. In this scenario, a new strategy equilibrium emerges in the system. We define this equilibrium as a local Nash equilibrium. In this paper, we present an explicit definition of the local Nash equilibrium for the two-strategy games in structured populations. Based on the definition, we investigate the condition that a system reaches the evolutionary stable state when the individuals play the Prisoner's dilemma and snow-drift game. The local Nash equilibrium provides a way to judge whether a gaming structured population reaches the evolutionary stable state on one hand. On the other hand, it can be used to predict whether cooperators can survive in a system long before the system reaches its evolutionary stable state for the Prisoner's dilemma game. Our work therefore provides a theoretical framework for understanding the evolutionary stable state in the gaming populations with static structures.
",yichao zhang,Evolutionary theory,2014.0,10.1038/srep06224,Scientific Reports,Zhang2014,Not available,,Nature,Not available,Local Nash Equilibrium in Social Networks,de74cee6ac1b6582533f7944250f8ba1,http://dx.doi.org/10.1038/srep06224
17420,"Nash equilibrium is widely present in various social disputes. As of now, in structured static populations, such as social networks, regular, and random graphs, the discussions on Nash equilibrium are quite limited. In a relatively stable static gaming network, a rational individual has to comprehensively consider all his/her opponents' strategies before they adopt a unified strategy. In this scenario, a new strategy equilibrium emerges in the system. We define this equilibrium as a local Nash equilibrium. In this paper, we present an explicit definition of the local Nash equilibrium for the two-strategy games in structured populations. Based on the definition, we investigate the condition that a system reaches the evolutionary stable state when the individuals play the Prisoner's dilemma and snow-drift game. The local Nash equilibrium provides a way to judge whether a gaming structured population reaches the evolutionary stable state on one hand. On the other hand, it can be used to predict whether cooperators can survive in a system long before the system reaches its evolutionary stable state for the Prisoner's dilemma game. Our work therefore provides a theoretical framework for understanding the evolutionary stable state in the gaming populations with static structures.
",yichao zhang,Statistics,2014.0,10.1038/srep06224,Scientific Reports,Zhang2014,Not available,,Nature,Not available,Local Nash Equilibrium in Social Networks,de74cee6ac1b6582533f7944250f8ba1,http://dx.doi.org/10.1038/srep06224
17421,"Nash equilibrium is widely present in various social disputes. As of now, in structured static populations, such as social networks, regular, and random graphs, the discussions on Nash equilibrium are quite limited. In a relatively stable static gaming network, a rational individual has to comprehensively consider all his/her opponents' strategies before they adopt a unified strategy. In this scenario, a new strategy equilibrium emerges in the system. We define this equilibrium as a local Nash equilibrium. In this paper, we present an explicit definition of the local Nash equilibrium for the two-strategy games in structured populations. Based on the definition, we investigate the condition that a system reaches the evolutionary stable state when the individuals play the Prisoner's dilemma and snow-drift game. The local Nash equilibrium provides a way to judge whether a gaming structured population reaches the evolutionary stable state on one hand. On the other hand, it can be used to predict whether cooperators can survive in a system long before the system reaches its evolutionary stable state for the Prisoner's dilemma game. Our work therefore provides a theoretical framework for understanding the evolutionary stable state in the gaming populations with static structures.
",m. aziz-alaoui,Evolutionary theory,2014.0,10.1038/srep06224,Scientific Reports,Zhang2014,Not available,,Nature,Not available,Local Nash Equilibrium in Social Networks,de74cee6ac1b6582533f7944250f8ba1,http://dx.doi.org/10.1038/srep06224
17422,"Nash equilibrium is widely present in various social disputes. As of now, in structured static populations, such as social networks, regular, and random graphs, the discussions on Nash equilibrium are quite limited. In a relatively stable static gaming network, a rational individual has to comprehensively consider all his/her opponents' strategies before they adopt a unified strategy. In this scenario, a new strategy equilibrium emerges in the system. We define this equilibrium as a local Nash equilibrium. In this paper, we present an explicit definition of the local Nash equilibrium for the two-strategy games in structured populations. Based on the definition, we investigate the condition that a system reaches the evolutionary stable state when the individuals play the Prisoner's dilemma and snow-drift game. The local Nash equilibrium provides a way to judge whether a gaming structured population reaches the evolutionary stable state on one hand. On the other hand, it can be used to predict whether cooperators can survive in a system long before the system reaches its evolutionary stable state for the Prisoner's dilemma game. Our work therefore provides a theoretical framework for understanding the evolutionary stable state in the gaming populations with static structures.
",m. aziz-alaoui,Statistics,2014.0,10.1038/srep06224,Scientific Reports,Zhang2014,Not available,,Nature,Not available,Local Nash Equilibrium in Social Networks,de74cee6ac1b6582533f7944250f8ba1,http://dx.doi.org/10.1038/srep06224
17423,"Nash equilibrium is widely present in various social disputes. As of now, in structured static populations, such as social networks, regular, and random graphs, the discussions on Nash equilibrium are quite limited. In a relatively stable static gaming network, a rational individual has to comprehensively consider all his/her opponents' strategies before they adopt a unified strategy. In this scenario, a new strategy equilibrium emerges in the system. We define this equilibrium as a local Nash equilibrium. In this paper, we present an explicit definition of the local Nash equilibrium for the two-strategy games in structured populations. Based on the definition, we investigate the condition that a system reaches the evolutionary stable state when the individuals play the Prisoner's dilemma and snow-drift game. The local Nash equilibrium provides a way to judge whether a gaming structured population reaches the evolutionary stable state on one hand. On the other hand, it can be used to predict whether cooperators can survive in a system long before the system reaches its evolutionary stable state for the Prisoner's dilemma game. Our work therefore provides a theoretical framework for understanding the evolutionary stable state in the gaming populations with static structures.
",cyrille bertelle,Evolutionary theory,2014.0,10.1038/srep06224,Scientific Reports,Zhang2014,Not available,,Nature,Not available,Local Nash Equilibrium in Social Networks,de74cee6ac1b6582533f7944250f8ba1,http://dx.doi.org/10.1038/srep06224
17424,"Nash equilibrium is widely present in various social disputes. As of now, in structured static populations, such as social networks, regular, and random graphs, the discussions on Nash equilibrium are quite limited. In a relatively stable static gaming network, a rational individual has to comprehensively consider all his/her opponents' strategies before they adopt a unified strategy. In this scenario, a new strategy equilibrium emerges in the system. We define this equilibrium as a local Nash equilibrium. In this paper, we present an explicit definition of the local Nash equilibrium for the two-strategy games in structured populations. Based on the definition, we investigate the condition that a system reaches the evolutionary stable state when the individuals play the Prisoner's dilemma and snow-drift game. The local Nash equilibrium provides a way to judge whether a gaming structured population reaches the evolutionary stable state on one hand. On the other hand, it can be used to predict whether cooperators can survive in a system long before the system reaches its evolutionary stable state for the Prisoner's dilemma game. Our work therefore provides a theoretical framework for understanding the evolutionary stable state in the gaming populations with static structures.
",cyrille bertelle,Statistics,2014.0,10.1038/srep06224,Scientific Reports,Zhang2014,Not available,,Nature,Not available,Local Nash Equilibrium in Social Networks,de74cee6ac1b6582533f7944250f8ba1,http://dx.doi.org/10.1038/srep06224
17425,"Nash equilibrium is widely present in various social disputes. As of now, in structured static populations, such as social networks, regular, and random graphs, the discussions on Nash equilibrium are quite limited. In a relatively stable static gaming network, a rational individual has to comprehensively consider all his/her opponents' strategies before they adopt a unified strategy. In this scenario, a new strategy equilibrium emerges in the system. We define this equilibrium as a local Nash equilibrium. In this paper, we present an explicit definition of the local Nash equilibrium for the two-strategy games in structured populations. Based on the definition, we investigate the condition that a system reaches the evolutionary stable state when the individuals play the Prisoner's dilemma and snow-drift game. The local Nash equilibrium provides a way to judge whether a gaming structured population reaches the evolutionary stable state on one hand. On the other hand, it can be used to predict whether cooperators can survive in a system long before the system reaches its evolutionary stable state for the Prisoner's dilemma game. Our work therefore provides a theoretical framework for understanding the evolutionary stable state in the gaming populations with static structures.
",jihong guan,Evolutionary theory,2014.0,10.1038/srep06224,Scientific Reports,Zhang2014,Not available,,Nature,Not available,Local Nash Equilibrium in Social Networks,de74cee6ac1b6582533f7944250f8ba1,http://dx.doi.org/10.1038/srep06224
17426,"Nash equilibrium is widely present in various social disputes. As of now, in structured static populations, such as social networks, regular, and random graphs, the discussions on Nash equilibrium are quite limited. In a relatively stable static gaming network, a rational individual has to comprehensively consider all his/her opponents' strategies before they adopt a unified strategy. In this scenario, a new strategy equilibrium emerges in the system. We define this equilibrium as a local Nash equilibrium. In this paper, we present an explicit definition of the local Nash equilibrium for the two-strategy games in structured populations. Based on the definition, we investigate the condition that a system reaches the evolutionary stable state when the individuals play the Prisoner's dilemma and snow-drift game. The local Nash equilibrium provides a way to judge whether a gaming structured population reaches the evolutionary stable state on one hand. On the other hand, it can be used to predict whether cooperators can survive in a system long before the system reaches its evolutionary stable state for the Prisoner's dilemma game. Our work therefore provides a theoretical framework for understanding the evolutionary stable state in the gaming populations with static structures.
",jihong guan,Statistics,2014.0,10.1038/srep06224,Scientific Reports,Zhang2014,Not available,,Nature,Not available,Local Nash Equilibrium in Social Networks,de74cee6ac1b6582533f7944250f8ba1,http://dx.doi.org/10.1038/srep06224
17427,"A popular method for selling excess inventory over the Internet is via a Name-Your-Own Price auction, where the bidder bids on an item and the seller immediately decides on whether or not to accept the bid. The analytical modeling of such auctions is still in its infancy. A number of papers have appeared over the last few years making various assumptions about buyers and sellers. The intent of this article is to carefully delineate the various assumptions and modeling approaches and, consequently, suggest avenues for further research.
",chris anderson,,2010.0,10.1057/rpm.2010.46,Journal of Revenue and Pricing Management,Anderson2010,Not available,,Nature,Not available,Name-your-own price auction mechanisms – Modeling and future implications,121c7a1e1de9dc73f1570768516fcb21,http://dx.doi.org/10.1057/rpm.2010.46
17428,"A popular method for selling excess inventory over the Internet is via a Name-Your-Own Price auction, where the bidder bids on an item and the seller immediately decides on whether or not to accept the bid. The analytical modeling of such auctions is still in its infancy. A number of papers have appeared over the last few years making various assumptions about buyers and sellers. The intent of this article is to carefully delineate the various assumptions and modeling approaches and, consequently, suggest avenues for further research.
",john wilson,,2010.0,10.1057/rpm.2010.46,Journal of Revenue and Pricing Management,Anderson2010,Not available,,Nature,Not available,Name-your-own price auction mechanisms – Modeling and future implications,121c7a1e1de9dc73f1570768516fcb21,http://dx.doi.org/10.1057/rpm.2010.46
17429,"Socio–ecological systems are increasingly modelled by games played on complex networks. While the concept of Nash equilibrium assumes perfect rationality, in reality players display heterogeneous bounded rationality. Here we present a topological model of bounded rationality in socio-ecological systems, using the rationality parameter of the Quantal Response Equilibrium. We argue that system rationality could be measured by the average Kullback–Leibler divergence between Nash and Quantal Response Equilibria, and that the convergence towards Nash equilibria on average corresponds to increased system rationality. Using this model, we show that when a randomly connected socio-ecological system is topologically optimised to converge towards Nash equilibria, scale-free and small world features emerge. Therefore, optimising system rationality is an evolutionary reason for the emergence of scale-free and small-world features in socio-ecological systems. Further, we show that in games where multiple equilibria are possible, the correlation between the scale-freeness of the system and the fraction of links with multiple equilibria goes through a rapid transition when the average system rationality increases. Our results explain the influence of the topological structure of socio–ecological systems in shaping their collective cognitive behaviour, and provide an explanation for the prevalence of scale-free and small-world characteristics in such systems.
",dharshana kasthurirathna,Computational science,2015.0,10.1038/srep10448,Scientific Reports,Kasthurirathna2015,Not available,,Nature,Not available,Emergence of scale-free characteristics in socio-ecological systems with bounded rationality,c0b2114fe9fe59412084edd809ff7de5,http://dx.doi.org/10.1038/srep10448
17430,"Socio–ecological systems are increasingly modelled by games played on complex networks. While the concept of Nash equilibrium assumes perfect rationality, in reality players display heterogeneous bounded rationality. Here we present a topological model of bounded rationality in socio-ecological systems, using the rationality parameter of the Quantal Response Equilibrium. We argue that system rationality could be measured by the average Kullback–Leibler divergence between Nash and Quantal Response Equilibria, and that the convergence towards Nash equilibria on average corresponds to increased system rationality. Using this model, we show that when a randomly connected socio-ecological system is topologically optimised to converge towards Nash equilibria, scale-free and small world features emerge. Therefore, optimising system rationality is an evolutionary reason for the emergence of scale-free and small-world features in socio-ecological systems. Further, we show that in games where multiple equilibria are possible, the correlation between the scale-freeness of the system and the fraction of links with multiple equilibria goes through a rapid transition when the average system rationality increases. Our results explain the influence of the topological structure of socio–ecological systems in shaping their collective cognitive behaviour, and provide an explanation for the prevalence of scale-free and small-world characteristics in such systems.
",dharshana kasthurirathna,Computer science,2015.0,10.1038/srep10448,Scientific Reports,Kasthurirathna2015,Not available,,Nature,Not available,Emergence of scale-free characteristics in socio-ecological systems with bounded rationality,c0b2114fe9fe59412084edd809ff7de5,http://dx.doi.org/10.1038/srep10448
17431,"Socio–ecological systems are increasingly modelled by games played on complex networks. While the concept of Nash equilibrium assumes perfect rationality, in reality players display heterogeneous bounded rationality. Here we present a topological model of bounded rationality in socio-ecological systems, using the rationality parameter of the Quantal Response Equilibrium. We argue that system rationality could be measured by the average Kullback–Leibler divergence between Nash and Quantal Response Equilibria, and that the convergence towards Nash equilibria on average corresponds to increased system rationality. Using this model, we show that when a randomly connected socio-ecological system is topologically optimised to converge towards Nash equilibria, scale-free and small world features emerge. Therefore, optimising system rationality is an evolutionary reason for the emergence of scale-free and small-world features in socio-ecological systems. Further, we show that in games where multiple equilibria are possible, the correlation between the scale-freeness of the system and the fraction of links with multiple equilibria goes through a rapid transition when the average system rationality increases. Our results explain the influence of the topological structure of socio–ecological systems in shaping their collective cognitive behaviour, and provide an explanation for the prevalence of scale-free and small-world characteristics in such systems.
",mahendra piraveenan,Computational science,2015.0,10.1038/srep10448,Scientific Reports,Kasthurirathna2015,Not available,,Nature,Not available,Emergence of scale-free characteristics in socio-ecological systems with bounded rationality,c0b2114fe9fe59412084edd809ff7de5,http://dx.doi.org/10.1038/srep10448
17432,"Socio–ecological systems are increasingly modelled by games played on complex networks. While the concept of Nash equilibrium assumes perfect rationality, in reality players display heterogeneous bounded rationality. Here we present a topological model of bounded rationality in socio-ecological systems, using the rationality parameter of the Quantal Response Equilibrium. We argue that system rationality could be measured by the average Kullback–Leibler divergence between Nash and Quantal Response Equilibria, and that the convergence towards Nash equilibria on average corresponds to increased system rationality. Using this model, we show that when a randomly connected socio-ecological system is topologically optimised to converge towards Nash equilibria, scale-free and small world features emerge. Therefore, optimising system rationality is an evolutionary reason for the emergence of scale-free and small-world features in socio-ecological systems. Further, we show that in games where multiple equilibria are possible, the correlation between the scale-freeness of the system and the fraction of links with multiple equilibria goes through a rapid transition when the average system rationality increases. Our results explain the influence of the topological structure of socio–ecological systems in shaping their collective cognitive behaviour, and provide an explanation for the prevalence of scale-free and small-world characteristics in such systems.
",mahendra piraveenan,Computer science,2015.0,10.1038/srep10448,Scientific Reports,Kasthurirathna2015,Not available,,Nature,Not available,Emergence of scale-free characteristics in socio-ecological systems with bounded rationality,c0b2114fe9fe59412084edd809ff7de5,http://dx.doi.org/10.1038/srep10448
17433,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",tom schonberg,Attention,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17434,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",tom schonberg,Decision,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17435,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",akram bakkour,Attention,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17436,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",akram bakkour,Decision,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17437,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",ashleigh hover,Attention,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17438,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",ashleigh hover,Decision,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17439,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",jeanette mumford,Attention,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17440,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",jeanette mumford,Decision,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17441,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",lakshya nagar,Attention,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17442,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",lakshya nagar,Decision,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17443,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",jacob perez,Attention,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17444,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",jacob perez,Decision,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17445,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",russell poldrack,Attention,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17446,"It is believed that choice behavior reveals the underlying value of goods. The subjective values of stimuli can be changed through reward-based learning mechanisms as well as by modifying the description of the decision problem, but it has yet to be shown that preferences can be manipulated by perturbing intrinsic values of individual items. Here we show that the value of food items can be modulated by the concurrent presentation of an irrelevant auditory cue to which subjects must make a simple motor response (i.e., cue-approach training). Follow-up tests showed that the effects of this pairing on choice lasted at least 2 months after prolonged training. Eye-tracking during choice confirmed that cue-approach training increased attention to the cued items. Neuroimaging revealed the neural signature of a value change in the form of amplified preference-related activity in ventromedial prefrontal cortex.
",russell poldrack,Decision,2014.0,10.1038/nn.3673,Nature Neuroscience,Schonberg2014,Not available,,Nature,Not available,Changing value through cued approach: an automatic mechanism of behavior change,c60521d9824aeffd15ab9986e4ad4b56,http://dx.doi.org/10.1038/nn.3673
17447,"This paper proposes a model to compute nodal prices in oligopolistic markets. The model generalizes a previous model aimed at solving the single-bus problem by applying an optimization procedure. Both models can be classified as conjectured supply function models. The conjectured supply functions are assumed to be linear with constant slopes. The conjectured price responses (price sensitivity as seen for each generating unit), however, are assumed to be dependent on the system line's status (congested or not congested). The consideration of such a dependence is one of the main contributions of this paper. Market equilibrium is defined in this framework. A procedure based on solving an optimization problem is proposed. It only requires convexity of cost functions. Existence of equilibrium, however, is not guaranteed in this multi-nodal situation and an iterative search is required to find it if it exists. A two-area multi-period case study is analysed. The model reaches equilibrium for some cases, mainly depending on the number of periods considered and on the value of conjectured supply function slopes. Some oscillation patterns are observed that can be interpreted as quasi-equilibria. This methodology can be applied to the study of the future Iberian electricity market.
",j barquin,,2008.0,10.1057/jors.2008.118,Journal of the Operational Research Society,Barquín2008,Not available,,Nature,Not available,An optimization-based conjectured supply function equilibrium model for network constrained electricity markets,81599ebed3fed5bf40ac968c7d9a32df,http://dx.doi.org/10.1057/jors.2008.118
17448,"The verification of nuclear warheads for arms control involves a paradox: international inspectors will have to gain high confidence in the authenticity of submitted items while learning nothing about them. Proposed inspection systems featuring ‘information barriers’, designed to hide measurements stored in electronic systems, are at risk of tampering and snooping. Here we show the viability of a fundamentally new approach to nuclear warhead verification that incorporates a zero-knowledge protocol, which is designed in such a way that sensitive information is never measured and so does not need to be hidden. We interrogate submitted items with energetic neutrons, making, in effect, differential measurements of both neutron transmission and emission. Calculations for scenarios in which material is diverted from a test object show that a high degree of discrimination can be achieved while revealing zero information. Our ideas for a physical zero-knowledge system could have applications beyond the context of nuclear disarmament. The proposed technique suggests a way to perform comparisons or computations on personal or confidential data without measuring the data in the first place.
",alexander glaser,Imaging techniques,2014.0,10.1038/nature13457,Nature,Glaser2014,Not available,,Nature,Not available,A zero-knowledge protocol for nuclear warhead verification,19c04d3188dc45725c469607ba72c4aa,http://dx.doi.org/10.1038/nature13457
17449,"The verification of nuclear warheads for arms control involves a paradox: international inspectors will have to gain high confidence in the authenticity of submitted items while learning nothing about them. Proposed inspection systems featuring ‘information barriers’, designed to hide measurements stored in electronic systems, are at risk of tampering and snooping. Here we show the viability of a fundamentally new approach to nuclear warhead verification that incorporates a zero-knowledge protocol, which is designed in such a way that sensitive information is never measured and so does not need to be hidden. We interrogate submitted items with energetic neutrons, making, in effect, differential measurements of both neutron transmission and emission. Calculations for scenarios in which material is diverted from a test object show that a high degree of discrimination can be achieved while revealing zero information. Our ideas for a physical zero-knowledge system could have applications beyond the context of nuclear disarmament. The proposed technique suggests a way to perform comparisons or computations on personal or confidential data without measuring the data in the first place.
",alexander glaser,Experimental nuclear physics,2014.0,10.1038/nature13457,Nature,Glaser2014,Not available,,Nature,Not available,A zero-knowledge protocol for nuclear warhead verification,19c04d3188dc45725c469607ba72c4aa,http://dx.doi.org/10.1038/nature13457
17450,"The verification of nuclear warheads for arms control involves a paradox: international inspectors will have to gain high confidence in the authenticity of submitted items while learning nothing about them. Proposed inspection systems featuring ‘information barriers’, designed to hide measurements stored in electronic systems, are at risk of tampering and snooping. Here we show the viability of a fundamentally new approach to nuclear warhead verification that incorporates a zero-knowledge protocol, which is designed in such a way that sensitive information is never measured and so does not need to be hidden. We interrogate submitted items with energetic neutrons, making, in effect, differential measurements of both neutron transmission and emission. Calculations for scenarios in which material is diverted from a test object show that a high degree of discrimination can be achieved while revealing zero information. Our ideas for a physical zero-knowledge system could have applications beyond the context of nuclear disarmament. The proposed technique suggests a way to perform comparisons or computations on personal or confidential data without measuring the data in the first place.
",boaz barak,Imaging techniques,2014.0,10.1038/nature13457,Nature,Glaser2014,Not available,,Nature,Not available,A zero-knowledge protocol for nuclear warhead verification,19c04d3188dc45725c469607ba72c4aa,http://dx.doi.org/10.1038/nature13457
17451,"The verification of nuclear warheads for arms control involves a paradox: international inspectors will have to gain high confidence in the authenticity of submitted items while learning nothing about them. Proposed inspection systems featuring ‘information barriers’, designed to hide measurements stored in electronic systems, are at risk of tampering and snooping. Here we show the viability of a fundamentally new approach to nuclear warhead verification that incorporates a zero-knowledge protocol, which is designed in such a way that sensitive information is never measured and so does not need to be hidden. We interrogate submitted items with energetic neutrons, making, in effect, differential measurements of both neutron transmission and emission. Calculations for scenarios in which material is diverted from a test object show that a high degree of discrimination can be achieved while revealing zero information. Our ideas for a physical zero-knowledge system could have applications beyond the context of nuclear disarmament. The proposed technique suggests a way to perform comparisons or computations on personal or confidential data without measuring the data in the first place.
",boaz barak,Experimental nuclear physics,2014.0,10.1038/nature13457,Nature,Glaser2014,Not available,,Nature,Not available,A zero-knowledge protocol for nuclear warhead verification,19c04d3188dc45725c469607ba72c4aa,http://dx.doi.org/10.1038/nature13457
17452,"The verification of nuclear warheads for arms control involves a paradox: international inspectors will have to gain high confidence in the authenticity of submitted items while learning nothing about them. Proposed inspection systems featuring ‘information barriers’, designed to hide measurements stored in electronic systems, are at risk of tampering and snooping. Here we show the viability of a fundamentally new approach to nuclear warhead verification that incorporates a zero-knowledge protocol, which is designed in such a way that sensitive information is never measured and so does not need to be hidden. We interrogate submitted items with energetic neutrons, making, in effect, differential measurements of both neutron transmission and emission. Calculations for scenarios in which material is diverted from a test object show that a high degree of discrimination can be achieved while revealing zero information. Our ideas for a physical zero-knowledge system could have applications beyond the context of nuclear disarmament. The proposed technique suggests a way to perform comparisons or computations on personal or confidential data without measuring the data in the first place.
",robert goldston,Imaging techniques,2014.0,10.1038/nature13457,Nature,Glaser2014,Not available,,Nature,Not available,A zero-knowledge protocol for nuclear warhead verification,19c04d3188dc45725c469607ba72c4aa,http://dx.doi.org/10.1038/nature13457
17453,"The verification of nuclear warheads for arms control involves a paradox: international inspectors will have to gain high confidence in the authenticity of submitted items while learning nothing about them. Proposed inspection systems featuring ‘information barriers’, designed to hide measurements stored in electronic systems, are at risk of tampering and snooping. Here we show the viability of a fundamentally new approach to nuclear warhead verification that incorporates a zero-knowledge protocol, which is designed in such a way that sensitive information is never measured and so does not need to be hidden. We interrogate submitted items with energetic neutrons, making, in effect, differential measurements of both neutron transmission and emission. Calculations for scenarios in which material is diverted from a test object show that a high degree of discrimination can be achieved while revealing zero information. Our ideas for a physical zero-knowledge system could have applications beyond the context of nuclear disarmament. The proposed technique suggests a way to perform comparisons or computations on personal or confidential data without measuring the data in the first place.
",robert goldston,Experimental nuclear physics,2014.0,10.1038/nature13457,Nature,Glaser2014,Not available,,Nature,Not available,A zero-knowledge protocol for nuclear warhead verification,19c04d3188dc45725c469607ba72c4aa,http://dx.doi.org/10.1038/nature13457
17454,"Much research has been devoted to the early adoption and the continued and habituated use of information systems (IS). Nevertheless, less is known about quitting the use of IS by individuals, especially in habituated hedonic settings, that is, IS discontinuance. This study focuses on this phenomenon, and argues that in hedonic IS use contexts (1) IS continuance and discontinuance can be considered simultaneously yet independently by current users, and that (2) IS continuance and discontinuance drivers can have differential effects on the respective behavioral intentions. Specifically, social cognitive theory is used to point to key unique drivers of website discontinuance intentions: guilt feelings regarding the use of the website and website-specific discontinuance self-efficacy, which counterbalance the effects of continuance drivers: habit and satisfaction. The distinctiveness of continuance and discontinuance intentions and their respective nomological networks, as well as the proposed research model, were then empirically validated in a study of 510 Facebook users. The findings indicate that satisfaction reduces discontinuance intentions directly and indirectly through habit formation. However, habit can also facilitate the development of ‘addiction’ to the use of the website, which produces guilt feelings and reduces one’s self-efficacy to quit using the website. These factors, in turn, drive discontinuance intentions and possibly the quitting of the use of the website.
",ofir turel,,2014.0,10.1057/ejis.2014.19,European Journal of Information Systems,Turel2014,Not available,,Nature,Not available,Quitting the use of a habituated hedonic information system: a theoretical model and empirical examination of Facebook users,e70374729f408e797aa1b01fe89afdf7,http://dx.doi.org/10.1057/ejis.2014.19
17455,"This article discusses the changing landscape of US crime, and both describes and evaluates the growth of private security in total security provided. Since the mid-1970s violent and basic property crimes have constantly declined while the number of economic crimes like identity theft, counterfeit goods and cyber misdeeds increased substantially. Monopolistic police have not addressed the changing landscape of crime and continue to deliver their traditional services. As market forces have limited influence on government, private security that is highly competitive and client oriented has been quicker to adopt technology and management innovations and address the new types of crime. Private police are estimated to be three times larger than public law enforcement. The article concludes that the increased penetration of private security is socially beneficial by improving efficiency, delivering client-oriented services and forcing police to improve their performance.
",erwin blackstone,,2012.0,10.1057/sj.2012.5,Security Journal,Blackstone2012,Not available,,Nature,Not available,Competition versus monopoly in the provision of police,c76436f4364a6d43b5b39c32a7871f1e,http://dx.doi.org/10.1057/sj.2012.5
17456,"This article discusses the changing landscape of US crime, and both describes and evaluates the growth of private security in total security provided. Since the mid-1970s violent and basic property crimes have constantly declined while the number of economic crimes like identity theft, counterfeit goods and cyber misdeeds increased substantially. Monopolistic police have not addressed the changing landscape of crime and continue to deliver their traditional services. As market forces have limited influence on government, private security that is highly competitive and client oriented has been quicker to adopt technology and management innovations and address the new types of crime. Private police are estimated to be three times larger than public law enforcement. The article concludes that the increased penetration of private security is socially beneficial by improving efficiency, delivering client-oriented services and forcing police to improve their performance.
",simon hakim,,2012.0,10.1057/sj.2012.5,Security Journal,Blackstone2012,Not available,,Nature,Not available,Competition versus monopoly in the provision of police,c76436f4364a6d43b5b39c32a7871f1e,http://dx.doi.org/10.1057/sj.2012.5
17457,"This paper proposes a model to compute nodal prices in oligopolistic markets. The model generalizes a previous model aimed at solving the single-bus problem by applying an optimization procedure. Both models can be classified as conjectured supply function models. The conjectured supply functions are assumed to be linear with constant slopes. The conjectured price responses (price sensitivity as seen for each generating unit), however, are assumed to be dependent on the system line's status (congested or not congested). The consideration of such a dependence is one of the main contributions of this paper. Market equilibrium is defined in this framework. A procedure based on solving an optimization problem is proposed. It only requires convexity of cost functions. Existence of equilibrium, however, is not guaranteed in this multi-nodal situation and an iterative search is required to find it if it exists. A two-area multi-period case study is analysed. The model reaches equilibrium for some cases, mainly depending on the number of periods considered and on the value of conjectured supply function slopes. Some oscillation patterns are observed that can be interpreted as quasi-equilibria. This methodology can be applied to the study of the future Iberian electricity market.
",b vitoriano,,2008.0,10.1057/jors.2008.118,Journal of the Operational Research Society,Barquín2008,Not available,,Nature,Not available,An optimization-based conjectured supply function equilibrium model for network constrained electricity markets,81599ebed3fed5bf40ac968c7d9a32df,http://dx.doi.org/10.1057/jors.2008.118
17458,"We develop and test a theoretical model to investigate the adoption of government-to-government (G2G) information systems in public administration organizations. Specifically, this model explains how top management commitment (TMC) mediates the impact of external institutional pressures on internal organizational resource allocation, which finally leads to the adoption decision. The hypotheses were tested using survey data from public administration organizations in China. Results from partial least squares analyses suggest that coercive and normative pressures positively affect TMC, which then positively affects financial and information technology (IT) human resources in the G2G adoption process. In turn, financial and IT human resources are confirmed to positively affect the intention to adopt G2G. Surprisingly, we do not find support for our hypothesis that mimetic pressures directly influence TMC. Rather, a post hoc analysis implies that mimetic pressures indirectly influence TMC via the influence of coercive pressures. Our findings provide important managerial implications for public administration organizations.
",daqing zheng,,2012.0,10.1057/ejis.2012.28,European Journal of Information Systems,Zheng2012,Not available,,Nature,Not available,E-government adoption in public administration organizations: integrating institutional theory perspective and resource-based view,1f96d311b2e4791392b743aea12b581b,http://dx.doi.org/10.1057/ejis.2012.28
17459,"We develop and test a theoretical model to investigate the adoption of government-to-government (G2G) information systems in public administration organizations. Specifically, this model explains how top management commitment (TMC) mediates the impact of external institutional pressures on internal organizational resource allocation, which finally leads to the adoption decision. The hypotheses were tested using survey data from public administration organizations in China. Results from partial least squares analyses suggest that coercive and normative pressures positively affect TMC, which then positively affects financial and information technology (IT) human resources in the G2G adoption process. In turn, financial and IT human resources are confirmed to positively affect the intention to adopt G2G. Surprisingly, we do not find support for our hypothesis that mimetic pressures directly influence TMC. Rather, a post hoc analysis implies that mimetic pressures indirectly influence TMC via the influence of coercive pressures. Our findings provide important managerial implications for public administration organizations.
",jin chen,,2012.0,10.1057/ejis.2012.28,European Journal of Information Systems,Zheng2012,Not available,,Nature,Not available,E-government adoption in public administration organizations: integrating institutional theory perspective and resource-based view,1f96d311b2e4791392b743aea12b581b,http://dx.doi.org/10.1057/ejis.2012.28
17460,"We develop and test a theoretical model to investigate the adoption of government-to-government (G2G) information systems in public administration organizations. Specifically, this model explains how top management commitment (TMC) mediates the impact of external institutional pressures on internal organizational resource allocation, which finally leads to the adoption decision. The hypotheses were tested using survey data from public administration organizations in China. Results from partial least squares analyses suggest that coercive and normative pressures positively affect TMC, which then positively affects financial and information technology (IT) human resources in the G2G adoption process. In turn, financial and IT human resources are confirmed to positively affect the intention to adopt G2G. Surprisingly, we do not find support for our hypothesis that mimetic pressures directly influence TMC. Rather, a post hoc analysis implies that mimetic pressures indirectly influence TMC via the influence of coercive pressures. Our findings provide important managerial implications for public administration organizations.
",lihua huang,,2012.0,10.1057/ejis.2012.28,European Journal of Information Systems,Zheng2012,Not available,,Nature,Not available,E-government adoption in public administration organizations: integrating institutional theory perspective and resource-based view,1f96d311b2e4791392b743aea12b581b,http://dx.doi.org/10.1057/ejis.2012.28
17461,"We develop and test a theoretical model to investigate the adoption of government-to-government (G2G) information systems in public administration organizations. Specifically, this model explains how top management commitment (TMC) mediates the impact of external institutional pressures on internal organizational resource allocation, which finally leads to the adoption decision. The hypotheses were tested using survey data from public administration organizations in China. Results from partial least squares analyses suggest that coercive and normative pressures positively affect TMC, which then positively affects financial and information technology (IT) human resources in the G2G adoption process. In turn, financial and IT human resources are confirmed to positively affect the intention to adopt G2G. Surprisingly, we do not find support for our hypothesis that mimetic pressures directly influence TMC. Rather, a post hoc analysis implies that mimetic pressures indirectly influence TMC via the influence of coercive pressures. Our findings provide important managerial implications for public administration organizations.
",cheng zhang,,2012.0,10.1057/ejis.2012.28,European Journal of Information Systems,Zheng2012,Not available,,Nature,Not available,E-government adoption in public administration organizations: integrating institutional theory perspective and resource-based view,1f96d311b2e4791392b743aea12b581b,http://dx.doi.org/10.1057/ejis.2012.28
17462,"To keep a network of enterprises sustainable, inter-organizational control measures are needed to detect or prevent opportunistic behaviour of network participants. We present a requirements engineering method for understanding control problems and designing solutions, based on an economic value perspective. The methodology employs a library of so-called control patterns, inspired by design patterns in software engineering. A control pattern is a generic solution for a common control problem. The usefulness and adequacy of the control patterns is demonstrated by a case study of the governance and control mechanisms of the Dutch public health insurance network for exceptional medical expenses (AWBZ).
",vera kartseva,,2010.0,10.1057/ejis.2010.13,European Journal of Information Systems,Kartseva2010,Not available,,Nature,Not available,Control patterns in a health-care network,60ba2c97e964f6383971d0c7efecef84,http://dx.doi.org/10.1057/ejis.2010.13
17463,"To keep a network of enterprises sustainable, inter-organizational control measures are needed to detect or prevent opportunistic behaviour of network participants. We present a requirements engineering method for understanding control problems and designing solutions, based on an economic value perspective. The methodology employs a library of so-called control patterns, inspired by design patterns in software engineering. A control pattern is a generic solution for a common control problem. The usefulness and adequacy of the control patterns is demonstrated by a case study of the governance and control mechanisms of the Dutch public health insurance network for exceptional medical expenses (AWBZ).
",joris hulstijn,,2010.0,10.1057/ejis.2010.13,European Journal of Information Systems,Kartseva2010,Not available,,Nature,Not available,Control patterns in a health-care network,60ba2c97e964f6383971d0c7efecef84,http://dx.doi.org/10.1057/ejis.2010.13
17464,"To keep a network of enterprises sustainable, inter-organizational control measures are needed to detect or prevent opportunistic behaviour of network participants. We present a requirements engineering method for understanding control problems and designing solutions, based on an economic value perspective. The methodology employs a library of so-called control patterns, inspired by design patterns in software engineering. A control pattern is a generic solution for a common control problem. The usefulness and adequacy of the control patterns is demonstrated by a case study of the governance and control mechanisms of the Dutch public health insurance network for exceptional medical expenses (AWBZ).
",jaap gordijn,,2010.0,10.1057/ejis.2010.13,European Journal of Information Systems,Kartseva2010,Not available,,Nature,Not available,Control patterns in a health-care network,60ba2c97e964f6383971d0c7efecef84,http://dx.doi.org/10.1057/ejis.2010.13
17465,"To keep a network of enterprises sustainable, inter-organizational control measures are needed to detect or prevent opportunistic behaviour of network participants. We present a requirements engineering method for understanding control problems and designing solutions, based on an economic value perspective. The methodology employs a library of so-called control patterns, inspired by design patterns in software engineering. A control pattern is a generic solution for a common control problem. The usefulness and adequacy of the control patterns is demonstrated by a case study of the governance and control mechanisms of the Dutch public health insurance network for exceptional medical expenses (AWBZ).
",yao-hua tan,,2010.0,10.1057/ejis.2010.13,European Journal of Information Systems,Kartseva2010,Not available,,Nature,Not available,Control patterns in a health-care network,60ba2c97e964f6383971d0c7efecef84,http://dx.doi.org/10.1057/ejis.2010.13
17466,"This qualitative longitudinal study observed the strategy process of several Spanish banks at the turn of the century, where the industry was undergoing a structural transformation due to the threat of Internet banking. We develop a model of organizational learning informed by an integration of findings from a qualitative study with theoretical perspectives from the strategy, knowledge creation, and learning literatures. The model is then used to compare and contrast the different learning processes that led these banks to the development and implementation of diverse Internet banking strategies, and to draw preliminary conclusions regarding the potential relationships between the learning processes used, the strategies chosen, and their performance outcomes.
",maria salmador,,2012.0,10.1057/kmrp.2012.33,Knowledge Management Research & Practice,Salmador2012,Not available,,Nature,Not available,Knowledge creation and competitive advantage in turbulent environments: a process model of organizational learning,de35def34356ea0f41caea92b9997f1b,http://dx.doi.org/10.1057/kmrp.2012.33
17467,"This qualitative longitudinal study observed the strategy process of several Spanish banks at the turn of the century, where the industry was undergoing a structural transformation due to the threat of Internet banking. We develop a model of organizational learning informed by an integration of findings from a qualitative study with theoretical perspectives from the strategy, knowledge creation, and learning literatures. The model is then used to compare and contrast the different learning processes that led these banks to the development and implementation of diverse Internet banking strategies, and to draw preliminary conclusions regarding the potential relationships between the learning processes used, the strategies chosen, and their performance outcomes.
",juan florin,,2012.0,10.1057/kmrp.2012.33,Knowledge Management Research & Practice,Salmador2012,Not available,,Nature,Not available,Knowledge creation and competitive advantage in turbulent environments: a process model of organizational learning,de35def34356ea0f41caea92b9997f1b,http://dx.doi.org/10.1057/kmrp.2012.33
17468,"This paper proposes a model to compute nodal prices in oligopolistic markets. The model generalizes a previous model aimed at solving the single-bus problem by applying an optimization procedure. Both models can be classified as conjectured supply function models. The conjectured supply functions are assumed to be linear with constant slopes. The conjectured price responses (price sensitivity as seen for each generating unit), however, are assumed to be dependent on the system line's status (congested or not congested). The consideration of such a dependence is one of the main contributions of this paper. Market equilibrium is defined in this framework. A procedure based on solving an optimization problem is proposed. It only requires convexity of cost functions. Existence of equilibrium, however, is not guaranteed in this multi-nodal situation and an iterative search is required to find it if it exists. A two-area multi-period case study is analysed. The model reaches equilibrium for some cases, mainly depending on the number of periods considered and on the value of conjectured supply function slopes. Some oscillation patterns are observed that can be interpreted as quasi-equilibria. This methodology can be applied to the study of the future Iberian electricity market.
",e centeno,,2008.0,10.1057/jors.2008.118,Journal of the Operational Research Society,Barquín2008,Not available,,Nature,Not available,An optimization-based conjectured supply function equilibrium model for network constrained electricity markets,81599ebed3fed5bf40ac968c7d9a32df,http://dx.doi.org/10.1057/jors.2008.118
17471,"This paper analyzes Poland's economic performance following the introduction of shock therapy as compared to China's performance under its transformation policies. Both nation's economic policies are evaluated within the context of the goals, perceptions and performance criteria selected by Polish policymakers. Evidence indicates that shock therapy had mixed results and that ideology has influenced previous analyses of such policies. Since China's performance based upon the same criteria has been no less favorable, lessons from its transformation experience deserve attention from Central and East European policymakers comparable to that given Poland.
",james angresano,,1996.0,10.1057/ces.1996.14,Comparative Economic Studies,Angresano1996,Not available,,Nature,Not available,Poland After the Shock,144aaf478d040346044f727abf72d70f,http://dx.doi.org/10.1057/ces.1996.14
17472,A central bank must decide on the frequency with which it will conduct open market operations and the variability in short-term money market rates that it will allow. The paper shows how the optimal operating procedure balances the value of attaining an immediate target and broadcasting the central bank's intentions against the informational advantages to the central bank of allowing the free play of market forces to reveal more of the information available to market participants.
,daniel hardy,,1997.0,10.2307/3867464,Staff Papers - International Monetary Fund,Hardy1997,Not available,,Nature,Not available,"Market Information and Signaling in Central Bank Operations, or, How Often Should a Central Bank Intervene?",44164a4bfa5d15c62786bedc9dc8d396,http://dx.doi.org/10.2307/3867464
17473,"Some public health advocates in tobacco states, having reconsidered the impacts of the federal tobacco price-support program, have negotiated common tobacco regulatory policy stances with tobacco grower representatives. This paper describes the impact of this rapprochement on the state-level negotiations of Master Settlement Agreement funds. It argues that there are indeed two worthy public health goals: tobacco control and the economic viability of tobacco dependent communities (TDCs), but the immediacy of the threat to the latter, the political potency of tobacco growers, and growers' goal of maintaining tobacco as their farms' anchor bring severe risks to the tobacco control portion of Settlement funds. Among three competing philosophies of economic development for TDCs, none are well evaluated, and two potentially create endless demands on Settlement resources. Public health policy advocates are urged to participate in negotiations on TDC economic development and to forcefully advocate for adequate tobacco control resources.
",w austin,,2000.0,10.2307/3343341,Journal of Public Health Policy,Austin2000,Not available,,Nature,Not available,Rural Economic Development vs. Tobacco Control? Tensions Underlying the Use of Tobacco Settlement Funds,f1b2d9e6ae91679355572678b02a6f93,http://dx.doi.org/10.2307/3343341
17474,"Some public health advocates in tobacco states, having reconsidered the impacts of the federal tobacco price-support program, have negotiated common tobacco regulatory policy stances with tobacco grower representatives. This paper describes the impact of this rapprochement on the state-level negotiations of Master Settlement Agreement funds. It argues that there are indeed two worthy public health goals: tobacco control and the economic viability of tobacco dependent communities (TDCs), but the immediacy of the threat to the latter, the political potency of tobacco growers, and growers' goal of maintaining tobacco as their farms' anchor bring severe risks to the tobacco control portion of Settlement funds. Among three competing philosophies of economic development for TDCs, none are well evaluated, and two potentially create endless demands on Settlement resources. Public health policy advocates are urged to participate in negotiations on TDC economic development and to forcefully advocate for adequate tobacco control resources.
",david altman,,2000.0,10.2307/3343341,Journal of Public Health Policy,Austin2000,Not available,,Nature,Not available,Rural Economic Development vs. Tobacco Control? Tensions Underlying the Use of Tobacco Settlement Funds,f1b2d9e6ae91679355572678b02a6f93,http://dx.doi.org/10.2307/3343341
17475,"As fisheries generally take place in a common pool resource in which exclusion is by definition difficult, they are a unique entry point to investigating the inclusive development concept. This article discusses the debates and interactions between owners of mechanized fishing boats in Chennai, India, over entry into their ocean fisheries. For the time period under consideration (1995-2014), we demonstrate that the discussion over social boundaries to the profession continued unabated, with moderate and more extreme views alternating and poorer owners standing opposed to the boat-owning elite. Interactive governance theory provides the framework of analysis. We conclude that governors – whether of the state or of the fishing population – need to balance between different policy objectives and between the imperatives of inclusion and exclusion to improve governability.
",maarten bavinck,,2015.0,10.1057/ejdr.2015.46,European Journal of Development Research,Bavinck2015,Not available,,Nature,Not available,"Contesting Inclusiveness: The Anxieties of Mechanised Fishers Over Social Boundaries in Chennai, South India",0ff20de2efaff3202e2dc0c09f716601,http://dx.doi.org/10.1057/ejdr.2015.46
17476,"As fisheries generally take place in a common pool resource in which exclusion is by definition difficult, they are a unique entry point to investigating the inclusive development concept. This article discusses the debates and interactions between owners of mechanized fishing boats in Chennai, India, over entry into their ocean fisheries. For the time period under consideration (1995-2014), we demonstrate that the discussion over social boundaries to the profession continued unabated, with moderate and more extreme views alternating and poorer owners standing opposed to the boat-owning elite. Interactive governance theory provides the framework of analysis. We conclude that governors – whether of the state or of the fishing population – need to balance between different policy objectives and between the imperatives of inclusion and exclusion to improve governability.
",subramanian karuppiah,,2015.0,10.1057/ejdr.2015.46,European Journal of Development Research,Bavinck2015,Not available,,Nature,Not available,"Contesting Inclusiveness: The Anxieties of Mechanised Fishers Over Social Boundaries in Chennai, South India",0ff20de2efaff3202e2dc0c09f716601,http://dx.doi.org/10.1057/ejdr.2015.46
17477,"As fisheries generally take place in a common pool resource in which exclusion is by definition difficult, they are a unique entry point to investigating the inclusive development concept. This article discusses the debates and interactions between owners of mechanized fishing boats in Chennai, India, over entry into their ocean fisheries. For the time period under consideration (1995-2014), we demonstrate that the discussion over social boundaries to the profession continued unabated, with moderate and more extreme views alternating and poorer owners standing opposed to the boat-owning elite. Interactive governance theory provides the framework of analysis. We conclude that governors – whether of the state or of the fishing population – need to balance between different policy objectives and between the imperatives of inclusion and exclusion to improve governability.
",svein jentoft,,2015.0,10.1057/ejdr.2015.46,European Journal of Development Research,Bavinck2015,Not available,,Nature,Not available,"Contesting Inclusiveness: The Anxieties of Mechanised Fishers Over Social Boundaries in Chennai, South India",0ff20de2efaff3202e2dc0c09f716601,http://dx.doi.org/10.1057/ejdr.2015.46
17478,"This paper proposes a model to compute nodal prices in oligopolistic markets. The model generalizes a previous model aimed at solving the single-bus problem by applying an optimization procedure. Both models can be classified as conjectured supply function models. The conjectured supply functions are assumed to be linear with constant slopes. The conjectured price responses (price sensitivity as seen for each generating unit), however, are assumed to be dependent on the system line's status (congested or not congested). The consideration of such a dependence is one of the main contributions of this paper. Market equilibrium is defined in this framework. A procedure based on solving an optimization problem is proposed. It only requires convexity of cost functions. Existence of equilibrium, however, is not guaranteed in this multi-nodal situation and an iterative search is required to find it if it exists. A two-area multi-period case study is analysed. The model reaches equilibrium for some cases, mainly depending on the number of periods considered and on the value of conjectured supply function slopes. Some oscillation patterns are observed that can be interpreted as quasi-equilibria. This methodology can be applied to the study of the future Iberian electricity market.
",f fernandez-menendez,,2008.0,10.1057/jors.2008.118,Journal of the Operational Research Society,Barquín2008,Not available,,Nature,Not available,An optimization-based conjectured supply function equilibrium model for network constrained electricity markets,81599ebed3fed5bf40ac968c7d9a32df,http://dx.doi.org/10.1057/jors.2008.118
17479,"Technology, standardization, and global integration have created a world of ever-increasing financial and economic complexity. However, measurement and modeling have not kept pace with these developments: new approaches to recognize and embrace the complexity of an open social-economic system are necessary. In particular, it is necessary to address five fundamental challenges to system modeling and forward-looking examinations of human behavior: fallibility, reflexivity, time inconsistency, domain inconsistency, and the “Lucas critique.” It is of particular importance to recognize that human life operates in an integrated domain of economic, political, spiritual, family, social, and cultural aspects. To support the needs of analysis, new types of data are necessary. The article presents several specific areas in which modeling and measurement must be improved to meet the demands for economic analysis in the 21st century.
",gene huang,,2013.0,10.1057/be.2012.37,Business Economics,Huang2013,Not available,,Nature,Not available,In Search of System Understanding and Control,782de9e11c94e2117e7c64d3f20768a4,http://dx.doi.org/10.1057/be.2012.37
17480,"In many species, individuals express phenotypic characteristics that enhance their competitiveness, that is, the ability to acquire resources in competition with others. Moreover, the degree of competitiveness varies considerably across individuals and in time. By means of an evolutionary model, we provide an explanation for this finding. We make the assumption that investment into competitiveness enhances the probability to acquire a high-quality resource, but at the same time reduces the ability of exploiting acquired resources with maximal efficiency. The model reveals that under a broad range of conditions competitiveness either converges to a polymorphic state, where individuals differing in competitive ability stably coexist, or is subject to perpetual transitions between periods of high and low competitiveness. The dynamics becomes even more complex if females can evolve preferences for (or against) competitive males. In extreme cases, such preferences can even drive the population to extinction.
",sebastian baldauf,Evolution,2014.0,10.1038/ncomms6233,Nature Communications,Baldauf2014,Not available,,Nature,Not available,Diversifying evolution of competitiveness,02b7426a0cfd9b65d9d66d2970745a7b,http://dx.doi.org/10.1038/ncomms6233
17481,"In many species, individuals express phenotypic characteristics that enhance their competitiveness, that is, the ability to acquire resources in competition with others. Moreover, the degree of competitiveness varies considerably across individuals and in time. By means of an evolutionary model, we provide an explanation for this finding. We make the assumption that investment into competitiveness enhances the probability to acquire a high-quality resource, but at the same time reduces the ability of exploiting acquired resources with maximal efficiency. The model reveals that under a broad range of conditions competitiveness either converges to a polymorphic state, where individuals differing in competitive ability stably coexist, or is subject to perpetual transitions between periods of high and low competitiveness. The dynamics becomes even more complex if females can evolve preferences for (or against) competitive males. In extreme cases, such preferences can even drive the population to extinction.
",leif engqvist,Evolution,2014.0,10.1038/ncomms6233,Nature Communications,Baldauf2014,Not available,,Nature,Not available,Diversifying evolution of competitiveness,02b7426a0cfd9b65d9d66d2970745a7b,http://dx.doi.org/10.1038/ncomms6233
17482,"In many species, individuals express phenotypic characteristics that enhance their competitiveness, that is, the ability to acquire resources in competition with others. Moreover, the degree of competitiveness varies considerably across individuals and in time. By means of an evolutionary model, we provide an explanation for this finding. We make the assumption that investment into competitiveness enhances the probability to acquire a high-quality resource, but at the same time reduces the ability of exploiting acquired resources with maximal efficiency. The model reveals that under a broad range of conditions competitiveness either converges to a polymorphic state, where individuals differing in competitive ability stably coexist, or is subject to perpetual transitions between periods of high and low competitiveness. The dynamics becomes even more complex if females can evolve preferences for (or against) competitive males. In extreme cases, such preferences can even drive the population to extinction.
",franz weissing,Evolution,2014.0,10.1038/ncomms6233,Nature Communications,Baldauf2014,Not available,,Nature,Not available,Diversifying evolution of competitiveness,02b7426a0cfd9b65d9d66d2970745a7b,http://dx.doi.org/10.1038/ncomms6233
17483,"Because stock markets in emerging economies are relatively new, under-regulated, and often segmented, investors' responses to public announcements by firms in these economies may differ from responses in developed economies' stock markets. We draw on the institutional and corporate governance literatures to explain investor reactions to announcements of international strategic alliances (ISAs) between foreign and emerging-market firms. We argue that emerging economies' stock markets positively value ISAs; however, information leakages due to weak regulatory environments siphon off the “good news” before the ISA announcement date. The level of state ownership of publicly traded firms and the nationality of foreign partners both affect the size and timing of market reactions.
",stewart miller,,2007.0,10.1057/palgrave.jibs.8400322,Journal of International Business Studies,Miller2007,Not available,,Nature,Not available,Insider trading and the valuation of international strategic alliances in emerging stock markets,f700b76feb97f47661124fbf18348253,http://dx.doi.org/10.1057/palgrave.jibs.8400322
17484,"Because stock markets in emerging economies are relatively new, under-regulated, and often segmented, investors' responses to public announcements by firms in these economies may differ from responses in developed economies' stock markets. We draw on the institutional and corporate governance literatures to explain investor reactions to announcements of international strategic alliances (ISAs) between foreign and emerging-market firms. We argue that emerging economies' stock markets positively value ISAs; however, information leakages due to weak regulatory environments siphon off the “good news” before the ISA announcement date. The level of state ownership of publicly traded firms and the nationality of foreign partners both affect the size and timing of market reactions.
",dan li,,2007.0,10.1057/palgrave.jibs.8400322,Journal of International Business Studies,Miller2007,Not available,,Nature,Not available,Insider trading and the valuation of international strategic alliances in emerging stock markets,f700b76feb97f47661124fbf18348253,http://dx.doi.org/10.1057/palgrave.jibs.8400322
17485,"Because stock markets in emerging economies are relatively new, under-regulated, and often segmented, investors' responses to public announcements by firms in these economies may differ from responses in developed economies' stock markets. We draw on the institutional and corporate governance literatures to explain investor reactions to announcements of international strategic alliances (ISAs) between foreign and emerging-market firms. We argue that emerging economies' stock markets positively value ISAs; however, information leakages due to weak regulatory environments siphon off the “good news” before the ISA announcement date. The level of state ownership of publicly traded firms and the nationality of foreign partners both affect the size and timing of market reactions.
",lorraine eden,,2007.0,10.1057/palgrave.jibs.8400322,Journal of International Business Studies,Miller2007,Not available,,Nature,Not available,Insider trading and the valuation of international strategic alliances in emerging stock markets,f700b76feb97f47661124fbf18348253,http://dx.doi.org/10.1057/palgrave.jibs.8400322
17486,"Because stock markets in emerging economies are relatively new, under-regulated, and often segmented, investors' responses to public announcements by firms in these economies may differ from responses in developed economies' stock markets. We draw on the institutional and corporate governance literatures to explain investor reactions to announcements of international strategic alliances (ISAs) between foreign and emerging-market firms. We argue that emerging economies' stock markets positively value ISAs; however, information leakages due to weak regulatory environments siphon off the “good news” before the ISA announcement date. The level of state ownership of publicly traded firms and the nationality of foreign partners both affect the size and timing of market reactions.
",michael hitt,,2007.0,10.1057/palgrave.jibs.8400322,Journal of International Business Studies,Miller2007,Not available,,Nature,Not available,Insider trading and the valuation of international strategic alliances in emerging stock markets,f700b76feb97f47661124fbf18348253,http://dx.doi.org/10.1057/palgrave.jibs.8400322
17487,"Available empirical evidence suggests that skewness preference plays an important role in understanding asset pricing and gambling. This paper establishes a skewness-comparability condition on probability distributions that is necessary and sufficient for any decision-maker's preferences over the distributions to depend on their means, variances, and third moments only. Under the condition, an Expected Utility maximizer's preferences for a larger mean, a smaller variance, and a larger third moment are shown to parallel, respectively, his preferences for a first-degree stochastic dominant improvement, a mean-preserving contraction, and a downside risk decrease and are characterized in terms of the von Neumann-Morgenstern utility function in exactly the same way. By showing that all Bernoulli distributions are mutually skewness comparable, we further show that in the wide range of economic models where these distributions are used individuals’ decisions under risk can be understood as trade-offs between mean, variance, and skewness. Our results on skewness-inducing transformations of random variables can also be applied to analyze the effects of progressive tax reforms on the incentive to make risky investments.
",w chiu,,2010.0,10.1057/grir.2009.9,The Geneva Risk and Insurance Review,Chiu2010,Not available,,Nature,Not available,"Skewness Preference, Risk Taking and Expected Utility Maximisation",f7b47d1a98baacee2c6d5396adaff1c5,http://dx.doi.org/10.1057/grir.2009.9
17488,"Risk attitudes other than risk aversion (e.g. prudence and temperance) are becoming important both in theoretical and empirical work. While the literature has mainly focused its attention on the intensity of such risk attitudes (e.g. the concepts of absolute prudence and absolute temperance), I consider here an alternative approach related to the direction of these attitudes (i.e. the sign of the successive derivatives of the utility function).
",louis eeckhoudt,,2012.0,10.1057/grir.2012.1,The Geneva Risk and Insurance Review,Eeckhoudt2012,Not available,,Nature,Not available,"Beyond Risk Aversion: Why, How and What's Next?*",bcffee0d883abd9dc7a95740cecafe63,http://dx.doi.org/10.1057/grir.2012.1
17489,Translating university research into a business opportunity almost always involves a technology transfer office (TTO). But there are misconceptions about the role of TTOs as well as opportunities for them to be more effective.
,stephen caddick,,2017.0,10.1038/s41570-017-0103,Nature Reviews Chemistry,Caddick2017,Not available,,Nature,Not available,Don't get lost in translation,20924cd5cc0192bc86f1344c40a6c6fc,http://dx.doi.org/10.1038/s41570-017-0103
17490,"Brokers' reliance on ethical conduct as a critical element of their service package is not new. What has changed for a handful of brokerage firms is the extent to which, in response to client demand, they now operate on a “global basis”, a phrase that has taken on multiple levels of meaning. An unintended consequence of this evolution has been the emergence of a series of challenges, some apparent and some not, to be managed in assuring “utmost good faith” as a consistent deliverable for all of a global broker's numerous constituents.
This paper will first analyze and then suggest some possible solutions for management of the very real risk issues arising out of the following factors for global brokers:
the increasing complexities of an already highly fragmented industry “gone global,”
a scale of operations with which there is little experience,
“utmost good faith” as not just a goal of company culture, but also the objective of process management, and
management of sometimes seemingly confused alignments, or as some would argue, conflicts of interest.
",james hutchin,,2005.0,10.1057/palgrave.gpp.2510036,The Geneva Papers on Risk and Insurance Issues and Practice,Hutchin2005,Not available,,Nature,Not available,"Global Brokers, Global Clients: A New Operational and Ethical Context",216f04570490339716da7a98142e8657,http://dx.doi.org/10.1057/palgrave.gpp.2510036
17491,"Foreign affiliates of Chilean companies operating in Latin America were more profitable than similar local firms, but this difference in profitability has been decreasing over time. Two case studies of Chilean multinationals illustrate our hypothesis that one source of competitive advantage for Chilean firms was their know-how of business strategy during economic liberalization. Using empirical and theoretical considerations, we analyze whether this hypothesis is valid for other firms and industries.
",patricio sol,,2007.0,10.1057/palgrave.jibs.8400299,Journal of International Business Studies,Sol2007,Not available,,Nature,Not available,Regional competitive advantage based on pioneering economic reforms: the case of Chilean FDI,7b718081861ca3c58085706e7986c155,http://dx.doi.org/10.1057/palgrave.jibs.8400299
17492,"Foreign affiliates of Chilean companies operating in Latin America were more profitable than similar local firms, but this difference in profitability has been decreasing over time. Two case studies of Chilean multinationals illustrate our hypothesis that one source of competitive advantage for Chilean firms was their know-how of business strategy during economic liberalization. Using empirical and theoretical considerations, we analyze whether this hypothesis is valid for other firms and industries.
",joseph kogan,,2007.0,10.1057/palgrave.jibs.8400299,Journal of International Business Studies,Sol2007,Not available,,Nature,Not available,Regional competitive advantage based on pioneering economic reforms: the case of Chilean FDI,7b718081861ca3c58085706e7986c155,http://dx.doi.org/10.1057/palgrave.jibs.8400299
17493,"We review the fiscal evolution of China and Russia and how the process of creating a separate tax-financed public sector in the two countries differed. China's fiscal budget was consistently smaller than in Russia, and their fiscal decentralisation was consistently greater. In China, local governments that were allowed to keep marginal increases in local tax revenue had incentives to pursue growth-supporting policies, but the absence of financial markets and barriers to investment resulted in protectionism and inefficient use of capital. Interregional fiscal transfers from the centre provided modest fiscal equalisation in China, but not in Russia. Russia's status as a petro-state makes efficient management of the public sector particularly difficult. Rising world energy prices and resource rents have generated growing federal budget surpluses, and fiscal recentralisation has been associated with expanding state control in other areas.
",elliott parker,,2007.0,10.1057/palgrave.ces.8100225,Comparative Economic Studies,Parker2007,Not available,,Nature,Not available,Fiscal Centralisation and Decentralisation in Russia and China,ef328e390dd9511ad1758f156d478e96,http://dx.doi.org/10.1057/palgrave.ces.8100225
17494,"We review the fiscal evolution of China and Russia and how the process of creating a separate tax-financed public sector in the two countries differed. China's fiscal budget was consistently smaller than in Russia, and their fiscal decentralisation was consistently greater. In China, local governments that were allowed to keep marginal increases in local tax revenue had incentives to pursue growth-supporting policies, but the absence of financial markets and barriers to investment resulted in protectionism and inefficient use of capital. Interregional fiscal transfers from the centre provided modest fiscal equalisation in China, but not in Russia. Russia's status as a petro-state makes efficient management of the public sector particularly difficult. Rising world energy prices and resource rents have generated growing federal budget surpluses, and fiscal recentralisation has been associated with expanding state control in other areas.
",judith thornton,,2007.0,10.1057/palgrave.ces.8100225,Comparative Economic Studies,Parker2007,Not available,,Nature,Not available,Fiscal Centralisation and Decentralisation in Russia and China,ef328e390dd9511ad1758f156d478e96,http://dx.doi.org/10.1057/palgrave.ces.8100225
17495,"Executive Summary
All too often, research on the influence of interest organizations in democratic politics produces null findings. What are we to make of these results? In part, the answer may lie in our conception of influence – what it is and what might constitute evidence for it. But even when more complete conceptions of influence are considered in better research designs, null results will still occur. They merit explanation. To address these issues, I will first try to provide a broader conception of influence and its many possible meanings by exploring the older theoretical literature on urban power from the 1950s and 1960s, considering along the way what the different interpretations might tell us about lobbying. And second, I will develop a catalog of null hypotheses and discuss how these bear on interpreting the many null findings in influence research. Finally, I discuss the implications of this analysis for the future of influence research.
",david lowery,,2013.0,10.1057/iga.2012.20,Interest Groups & Advocacy,Lowery2013,Not available,,Nature,Not available,"Lobbying influence: Meaning, measurement and missing",5106b1ead957ee0bde56795d39430df5,http://dx.doi.org/10.1057/iga.2012.20
17496,"During the 20 year lifetime of the Journal of Financial Services Marketing the study of online banking adoption has emerged and matured as a field. Now 20 years on, we reflect on the accumulated online banking adoption knowledge and consider what this tells us. On the basis of an audit of published research over a 10-year period, 1998–2008, we identify the core theories and approaches utilised to study online banking. The findings reveal the widespread application of the Technology Adoption Model (TAM). Drawing on the current debate regarding TAM within the Information Systems domain, we critically evaluate the ongoing appropriateness of TAM for online banking adoption research, and call for a refreshed approach to the study of bank technology adoption. The paper concludes by highlighting other theories that offer potential to extend knowledge in this area.
",kathryn waite,,2015.0,10.1057/fsm.2015.19,Journal of Financial Services Marketing,Waite2015,Not available,,Nature,Not available,Online banking adoption: We should know better 20 years on,9398587bb4ede6a964aed02cf339145c,http://dx.doi.org/10.1057/fsm.2015.19
17497,"During the 20 year lifetime of the Journal of Financial Services Marketing the study of online banking adoption has emerged and matured as a field. Now 20 years on, we reflect on the accumulated online banking adoption knowledge and consider what this tells us. On the basis of an audit of published research over a 10-year period, 1998–2008, we identify the core theories and approaches utilised to study online banking. The findings reveal the widespread application of the Technology Adoption Model (TAM). Drawing on the current debate regarding TAM within the Information Systems domain, we critically evaluate the ongoing appropriateness of TAM for online banking adoption research, and call for a refreshed approach to the study of bank technology adoption. The paper concludes by highlighting other theories that offer potential to extend knowledge in this area.
",tina harrison,,2015.0,10.1057/fsm.2015.19,Journal of Financial Services Marketing,Waite2015,Not available,,Nature,Not available,Online banking adoption: We should know better 20 years on,9398587bb4ede6a964aed02cf339145c,http://dx.doi.org/10.1057/fsm.2015.19
17498,"Regional governance systems may resolve the dilemmas of global financial integration, and the Eurozone is the most advanced attempt to do so. The Euroland sovereign debt crisis is a test of this proposition but the outcome finds the EU wanting. The first section places EMU in the broader context of financial liberalisation. The next section shows that we have long known that financial liberalisation is associated with financial instability, demanding robust governance. The subsequent section examines the reaction to the Eurozone crisis, and argues that the lessons available were poorly learned. Although the EU and ECB revealed leadership and crisis management capacity in the financial market phase, the sovereign debt phase of the crisis was less successfully handled, producing conflict among Eurozone members. As a result the Eurozone hangs in the balance.
",geoffrey underhill,,2011.0,10.1057/eps.2011.22,European Political Science,Underhill2011,Not available,,Nature,Not available,Paved with Good Intentions: Global Financial Integration and the Eurozone's Response,5d4c7d6142f5b2104bbd3b1dd8d7ce18,http://dx.doi.org/10.1057/eps.2011.22
17499,"In opaque selling certain characteristics of the product or service are hidden from the consumer until after purchase, transforming a differentiated good into somewhat of a commodity. Opaque selling has become popular in travel service pricing as it allows firms to sell their differentiated products at higher prices to regular brand loyal customers while simultaneously selling to non-loyal customers at discounted prices. At its simplest level, the process can be regarded as a Newsvendor problem where a supplier has to make both pricing and quantity allocation decisions for a perishable good or service. As the originator of opaque selling, Priceline.com provides unique data to sellers that allows them to better utilize their opaque selling mechanism. Recently Priceline has made some changes to their mechanism that have potential impacts on how firms set prices and control inventory within the channel. In this framework, the problem has the characteristics of Newsvendor problems with multiple price points. In this article, we develop optimal pricing and inventory policies for a seller releasing inventory to an opaque sales channel. Furthermore, we investigate the impacts of Priceline’s changes upon optimal prices and inventory allocation policies. The model is empirically illustrated using Priceline data for a 3.5 star hotel.
",chris anderson,,2014.0,10.1057/rpm.2014.32,Journal of Revenue and Pricing Management,Anderson2014,Not available,,Nature,Not available,A newsvendor approach to inventory and pricing decisions in NYOP channels,cf2470b7872d31714e06c3f1cc27562d,http://dx.doi.org/10.1057/rpm.2014.32
17500,"In opaque selling certain characteristics of the product or service are hidden from the consumer until after purchase, transforming a differentiated good into somewhat of a commodity. Opaque selling has become popular in travel service pricing as it allows firms to sell their differentiated products at higher prices to regular brand loyal customers while simultaneously selling to non-loyal customers at discounted prices. At its simplest level, the process can be regarded as a Newsvendor problem where a supplier has to make both pricing and quantity allocation decisions for a perishable good or service. As the originator of opaque selling, Priceline.com provides unique data to sellers that allows them to better utilize their opaque selling mechanism. Recently Priceline has made some changes to their mechanism that have potential impacts on how firms set prices and control inventory within the channel. In this framework, the problem has the characteristics of Newsvendor problems with multiple price points. In this article, we develop optimal pricing and inventory policies for a seller releasing inventory to an opaque sales channel. Furthermore, we investigate the impacts of Priceline’s changes upon optimal prices and inventory allocation policies. The model is empirically illustrated using Priceline data for a 3.5 star hotel.
",fredrik odegaard,,2014.0,10.1057/rpm.2014.32,Journal of Revenue and Pricing Management,Anderson2014,Not available,,Nature,Not available,A newsvendor approach to inventory and pricing decisions in NYOP channels,cf2470b7872d31714e06c3f1cc27562d,http://dx.doi.org/10.1057/rpm.2014.32
17501,"In opaque selling certain characteristics of the product or service are hidden from the consumer until after purchase, transforming a differentiated good into somewhat of a commodity. Opaque selling has become popular in travel service pricing as it allows firms to sell their differentiated products at higher prices to regular brand loyal customers while simultaneously selling to non-loyal customers at discounted prices. At its simplest level, the process can be regarded as a Newsvendor problem where a supplier has to make both pricing and quantity allocation decisions for a perishable good or service. As the originator of opaque selling, Priceline.com provides unique data to sellers that allows them to better utilize their opaque selling mechanism. Recently Priceline has made some changes to their mechanism that have potential impacts on how firms set prices and control inventory within the channel. In this framework, the problem has the characteristics of Newsvendor problems with multiple price points. In this article, we develop optimal pricing and inventory policies for a seller releasing inventory to an opaque sales channel. Furthermore, we investigate the impacts of Priceline’s changes upon optimal prices and inventory allocation policies. The model is empirically illustrated using Priceline data for a 3.5 star hotel.
",john wilson,,2014.0,10.1057/rpm.2014.32,Journal of Revenue and Pricing Management,Anderson2014,Not available,,Nature,Not available,A newsvendor approach to inventory and pricing decisions in NYOP channels,cf2470b7872d31714e06c3f1cc27562d,http://dx.doi.org/10.1057/rpm.2014.32
17502,"This paper examines the enforcement of environmental protection laws under communism and democracy, while exploring the possibilities for cost-shifting between Czech enterprises and their employees as offered by labor contracts. Theory establishes the connection between cost-shifting possibilities and the efficient rule for enforcing water protection laws when the actions of enterprises and their employees combine to cause water-damaging accidents (for example, oil spills). For the years 1988 to 1991, analysis of labor arrangements in the Czech Republic discerns the apparently efficient rule for the communist and democratic periods, while statistical analysis discerns the operative enforcement rule in each period.
",dietrich earnhart,,1996.0,10.1057/ces.1996.37,Comparative Economic Studies,Earnhart1996,Not available,,Nature,Not available,Environmental Penalties Against Enterprises and Employees: Labor Contracts and Cost-Shifting in the Czech Republic*,988702beb343e7537fefbd20a41aa951,http://dx.doi.org/10.1057/ces.1996.37
17503,"In this study, a model representing military requirements as scenarios and capabilities is offered. Pair-wise comparisons of scenarios are made according to occurrence probabilities by using the Analytical Hierarchy Process (AHP). The weights calculated from AHP are used as the starting weights in a Quality Function Deployment (QFD) matrix. QFD is used to transfer war fighter requirements into the benefit values of projects. Two levels of QFD matrices are used to evaluate new capability areas versus capabilities and capabilities versus projects. The benefit values of the projects are used in a multi-objective problem (multi-objective multiple knapsack problem) that considers the project benefit, implementation risks and environmental impact as multiple objectives. Implementation risk and environmental impact values are also calculated using the same combined AHP and QFD methodology. Finally, the results of the fuzzy multi-objective goal programming suggest a list of projects that offers optimal benefit when carried out within multiple budgets.
",b bakirli,,2013.0,10.1057/jors.2013.36,Journal of the Operational Research Society,Bakirli2013,Not available,,Nature,Not available,A combined approach for fuzzy multi-objective multiple knapsack problems for defence project selection,c47a0a2fd5bd6b3435428cd6038a14fc,http://dx.doi.org/10.1057/jors.2013.36
17504,"In this study, a model representing military requirements as scenarios and capabilities is offered. Pair-wise comparisons of scenarios are made according to occurrence probabilities by using the Analytical Hierarchy Process (AHP). The weights calculated from AHP are used as the starting weights in a Quality Function Deployment (QFD) matrix. QFD is used to transfer war fighter requirements into the benefit values of projects. Two levels of QFD matrices are used to evaluate new capability areas versus capabilities and capabilities versus projects. The benefit values of the projects are used in a multi-objective problem (multi-objective multiple knapsack problem) that considers the project benefit, implementation risks and environmental impact as multiple objectives. Implementation risk and environmental impact values are also calculated using the same combined AHP and QFD methodology. Finally, the results of the fuzzy multi-objective goal programming suggest a list of projects that offers optimal benefit when carried out within multiple budgets.
",c gencer,,2013.0,10.1057/jors.2013.36,Journal of the Operational Research Society,Bakirli2013,Not available,,Nature,Not available,A combined approach for fuzzy multi-objective multiple knapsack problems for defence project selection,c47a0a2fd5bd6b3435428cd6038a14fc,http://dx.doi.org/10.1057/jors.2013.36
17505,"In this study, a model representing military requirements as scenarios and capabilities is offered. Pair-wise comparisons of scenarios are made according to occurrence probabilities by using the Analytical Hierarchy Process (AHP). The weights calculated from AHP are used as the starting weights in a Quality Function Deployment (QFD) matrix. QFD is used to transfer war fighter requirements into the benefit values of projects. Two levels of QFD matrices are used to evaluate new capability areas versus capabilities and capabilities versus projects. The benefit values of the projects are used in a multi-objective problem (multi-objective multiple knapsack problem) that considers the project benefit, implementation risks and environmental impact as multiple objectives. Implementation risk and environmental impact values are also calculated using the same combined AHP and QFD methodology. Finally, the results of the fuzzy multi-objective goal programming suggest a list of projects that offers optimal benefit when carried out within multiple budgets.
",e aydogan,,2013.0,10.1057/jors.2013.36,Journal of the Operational Research Society,Bakirli2013,Not available,,Nature,Not available,A combined approach for fuzzy multi-objective multiple knapsack problems for defence project selection,c47a0a2fd5bd6b3435428cd6038a14fc,http://dx.doi.org/10.1057/jors.2013.36
17506,"In this paper, we study pricing situations where a firm provides a price quote in the presence of uncertainty in the preferences of the buyer and the competitive landscape. We introduce two customised-pricing bid-response models (CPBRMs) used in practice, which can be developed from the historical information available to the firm based on previous bidding opportunities. We show how these models may be used to exploit the differences in the market segments to generate optimal price quotes given the characteristics of the current bid opportunity. We also describe the process of evaluating competing models using an industry data set as a test bed to measure the model fit. Finally, we test the models on the industry data set to compare their performance and estimate the per cent improvement in expected profits that may be possible from their use.
",vishal agrawal,,2007.0,10.1057/palgrave.rpm.5160085,Journal of Revenue and Pricing Management,Agrawal2007,Not available,,Nature,Not available,Bid-response models for customised pricing,23351be11276bca5e71e39d826078d00,http://dx.doi.org/10.1057/palgrave.rpm.5160085
17507,"This study relies on knowledge regarding the neuroplasticity of dual-system components that govern addiction and excessive behavior and suggests that alterations in the grey matter volumes, i.e., brain morphology, of specific regions of interest are associated with technology-related addictions. Using voxel based morphometry (VBM) applied to structural Magnetic Resonance Imaging (MRI) scans of twenty social network site (SNS) users with varying degrees of SNS addiction, we show that SNS addiction is associated with a presumably more efficient impulsive brain system, manifested through reduced grey matter volumes in the amygdala bilaterally (but not with structural differences in the Nucleus Accumbens). In this regard, SNS addiction is similar in terms of brain anatomy alterations to other (substance, gambling etc.) addictions. We also show that in contrast to other addictions in which the anterior/mid-cingulate cortex is impaired and fails to support the needed inhibition, which manifests through reduced grey matter volumes, this region is presumed to be healthy in our sample and its grey matter volume is positively correlated with one’s level of SNS addiction. These findings portray an anatomical morphology model of SNS addiction and point to brain morphology similarities and differences between technology addictions and substance and gambling addictions.
",ofir turel,Brain,2017.0,10.1038/srep45064,Scientific Reports,He2017,Not available,,Nature,Not available,Brain anatomy alterations associated with Social Networking Site (SNS) addiction,355447eac1ec4959117118d905cd56d1,http://dx.doi.org/10.1038/srep45064
17508,"In this paper, we study pricing situations where a firm provides a price quote in the presence of uncertainty in the preferences of the buyer and the competitive landscape. We introduce two customised-pricing bid-response models (CPBRMs) used in practice, which can be developed from the historical information available to the firm based on previous bidding opportunities. We show how these models may be used to exploit the differences in the market segments to generate optimal price quotes given the characteristics of the current bid opportunity. We also describe the process of evaluating competing models using an industry data set as a test bed to measure the model fit. Finally, we test the models on the industry data set to compare their performance and estimate the per cent improvement in expected profits that may be possible from their use.
",mark ferguson,,2007.0,10.1057/palgrave.rpm.5160085,Journal of Revenue and Pricing Management,Agrawal2007,Not available,,Nature,Not available,Bid-response models for customised pricing,23351be11276bca5e71e39d826078d00,http://dx.doi.org/10.1057/palgrave.rpm.5160085
17509,"We survey the recent literature on the use of spot market operations to manage procurement in supply chains. We present results in two categories: work that deals with optimal procurement strategies and work related to the valuation of procurement contracts. As an example of the latter, we provide new results on valuation of a supply contract with abandonment option. Based on our review, we also discuss the scope for doing further work.
",c haksoz,,2007.0,10.1057/palgrave.jors.2602401,Journal of the Operational Research Society,Haksöz2007,Not available,,Nature,Not available,Supply chain operations in the presence of a spot market: a review with discussion,d21f7c4022195caef1dca44724ea7a9e,http://dx.doi.org/10.1057/palgrave.jors.2602401
17510,"We survey the recent literature on the use of spot market operations to manage procurement in supply chains. We present results in two categories: work that deals with optimal procurement strategies and work related to the valuation of procurement contracts. As an example of the latter, we provide new results on valuation of a supply contract with abandonment option. Based on our review, we also discuss the scope for doing further work.
",s seshadri,,2007.0,10.1057/palgrave.jors.2602401,Journal of the Operational Research Society,Haksöz2007,Not available,,Nature,Not available,Supply chain operations in the presence of a spot market: a review with discussion,d21f7c4022195caef1dca44724ea7a9e,http://dx.doi.org/10.1057/palgrave.jors.2602401
17511,"Standard ocean shipping contracts stipulate that a chartered vessel must sail at ‘utmost despatch’, with no consideration for the availability of berths at the destination port. The berthing policies used at many ports, which admit vessels on a first-come, first-served basis, provide an additional incentive for the master to sail at full speed. These legacy contracts and berthing policies constitute a major driver of harbour congestion and marine fuel consumption, with adverse economic, safety, and environmental consequences. We propose a methodology to evaluate the potential benefits of new berthing policies and ocean shipping contracts. Given the importance of stochasticity on the performance of maritime transport systems, and the need to represent the efficient allocation of terminal resources, we have chosen a hybrid simulation-optimization approach. Our discrete event simulation model represents vessels and their principal economic and physical characteristics, the spatial layout of the terminal, performance of the land-side equipment, contractual agreements and associated penalties, and berthing policies. The proposed optimization model – a substantial extension of the traditional berth assignment problem – represents the logic of the terminal planner. The simulation program solves multiple instances of the optimization model successively in order to represent the progression of planning activities at the terminal.
",j alvarez,,2010.0,10.1057/mel.2010.11,Maritime Economics & Logistics,Alvarez2010,Not available,,Nature,Not available,A methodology to assess vessel berthing and speed optimization policies,4da09f1c8bc6e9c62875fc438116ed16,http://dx.doi.org/10.1057/mel.2010.11
17512,"Standard ocean shipping contracts stipulate that a chartered vessel must sail at ‘utmost despatch’, with no consideration for the availability of berths at the destination port. The berthing policies used at many ports, which admit vessels on a first-come, first-served basis, provide an additional incentive for the master to sail at full speed. These legacy contracts and berthing policies constitute a major driver of harbour congestion and marine fuel consumption, with adverse economic, safety, and environmental consequences. We propose a methodology to evaluate the potential benefits of new berthing policies and ocean shipping contracts. Given the importance of stochasticity on the performance of maritime transport systems, and the need to represent the efficient allocation of terminal resources, we have chosen a hybrid simulation-optimization approach. Our discrete event simulation model represents vessels and their principal economic and physical characteristics, the spatial layout of the terminal, performance of the land-side equipment, contractual agreements and associated penalties, and berthing policies. The proposed optimization model – a substantial extension of the traditional berth assignment problem – represents the logic of the terminal planner. The simulation program solves multiple instances of the optimization model successively in order to represent the progression of planning activities at the terminal.
",tore longva,,2010.0,10.1057/mel.2010.11,Maritime Economics & Logistics,Alvarez2010,Not available,,Nature,Not available,A methodology to assess vessel berthing and speed optimization policies,4da09f1c8bc6e9c62875fc438116ed16,http://dx.doi.org/10.1057/mel.2010.11
17513,"Standard ocean shipping contracts stipulate that a chartered vessel must sail at ‘utmost despatch’, with no consideration for the availability of berths at the destination port. The berthing policies used at many ports, which admit vessels on a first-come, first-served basis, provide an additional incentive for the master to sail at full speed. These legacy contracts and berthing policies constitute a major driver of harbour congestion and marine fuel consumption, with adverse economic, safety, and environmental consequences. We propose a methodology to evaluate the potential benefits of new berthing policies and ocean shipping contracts. Given the importance of stochasticity on the performance of maritime transport systems, and the need to represent the efficient allocation of terminal resources, we have chosen a hybrid simulation-optimization approach. Our discrete event simulation model represents vessels and their principal economic and physical characteristics, the spatial layout of the terminal, performance of the land-side equipment, contractual agreements and associated penalties, and berthing policies. The proposed optimization model – a substantial extension of the traditional berth assignment problem – represents the logic of the terminal planner. The simulation program solves multiple instances of the optimization model successively in order to represent the progression of planning activities at the terminal.
",erna engebrethsen,,2010.0,10.1057/mel.2010.11,Maritime Economics & Logistics,Alvarez2010,Not available,,Nature,Not available,A methodology to assess vessel berthing and speed optimization policies,4da09f1c8bc6e9c62875fc438116ed16,http://dx.doi.org/10.1057/mel.2010.11
17514,"Infrastructure-as-a-Service (IaaS) is expected to grow at a rapid pace in the next few years. This is primarily because of the flexibility that it offers satisfying variable demand in computing power without any fixed investments in computing capacity. Our work focuses on a particular customer segment of IaaS – online platform providers (OPP). These businesses experience fluctuations in the number of users of their platforms, which impacts advertising revenues. Therefore, it is necessary for these businesses to support any sudden surge in demand without excessive upfront investments in computing infrastructure. IaaS offers an attractive option for such businesses. Pricing of IaaS for OPP is trickier as IaaS providers must consider the fluctuations in the number of users of such platforms while designing a plan. We model these fluctuations and their impact on the revenue of the platform providers to design an optimal pricing plan.
",soumyakanti chakraborty,,2014.0,10.1057/rpm.2013.37,Journal of Revenue and Pricing Management,Chakraborty2014,Not available,,Nature,Not available,Pricing Infrastructure-as-a-Service for online two-sided platform providers,3572c1d779d87767e6de208eb1988356,http://dx.doi.org/10.1057/rpm.2013.37
17515,"Infrastructure-as-a-Service (IaaS) is expected to grow at a rapid pace in the next few years. This is primarily because of the flexibility that it offers in satisfying variable demand in computing power without any fixed investments in computing capacity. Our work focuses on a particular customer segment of IaaS – online platform providers (OPP). These businesses experience fluctuations in the number of users of their platforms, which impacts advertising revenues. Therefore, it is necessary for these businesses to support any sudden surge in demand without excessive upfront investments in computing infrastructure. IaaS offers an attractive option for such businesses. Pricing of IaaS for OPP is trickier as IaaS providers must consider the fluctuations in the number of users of such platforms while designing a plan. We model these fluctuations and their impact on the revenue of the platform providers to design an optimal pricing plan.
",sumanta basu,,2014.0,10.1057/rpm.2013.37,Journal of Revenue and Pricing Management,Chakraborty2014,Not available,,Nature,Not available,Pricing Infrastructure-as-a-Service for online two-sided platform providers,3572c1d779d87767e6de208eb1988356,http://dx.doi.org/10.1057/rpm.2013.37
17516,"Infrastructure-as-a-Service (IaaS) is expected to grow at a rapid pace in the next few years. This is primarily because of the flexibility that it offers in satisfying variable demand in computing power without any fixed investments in computing capacity. Our work focuses on a particular customer segment of IaaS – online platform providers (OPP). These businesses experience fluctuations in the number of users of their platforms, which impacts advertising revenues. Therefore, it is necessary for these businesses to support any sudden surge in demand without excessive upfront investments in computing infrastructure. IaaS offers an attractive option for such businesses. Pricing of IaaS for OPP is trickier as IaaS providers must consider the fluctuations in the number of users of such platforms while designing a plan. We model these fluctuations and their impact on the revenue of the platform providers to design an optimal pricing plan.
",megha sharma,,2014.0,10.1057/rpm.2013.37,Journal of Revenue and Pricing Management,Chakraborty2014,Not available,,Nature,Not available,Pricing Infrastructure-as-a-Service for online two-sided platform providers,3572c1d779d87767e6de208eb1988356,http://dx.doi.org/10.1057/rpm.2013.37
17517,"This study proposes a new approach to ‘integrated e-marketing value creation’ processes on the internet in order to provide insights about how to develop successful marketing strategies in the digital world. The study first discusses the changes in traditional marketing (4Ps) with possible complementary e-marketing elements, and then introduces some new marketing value elements (4Cs). The study discusses each element of how these new marketing value creation elements work in the digital world, along with some managerial suggestions. Finally, the study provides a new discussion on how to utilize today's e-value drivers in the light of the transforming of our understanding of marketing in the digital age.
",s kucuk,,2011.0,10.1057/dddmp.2011.3,"Journal of Direct, Data and Digital Marketing Practice",Kucuk2011,Not available,,Nature,Not available,Towards integrated e-marketing value creation process,1fb634a3b5f6ead66b32ff94d36353b3,http://dx.doi.org/10.1057/dddmp.2011.3
17518,"Despite huge obstacles, political forces in Washington may finally get greenhouse-gas legislation moving, says David Goldston.",david goldston,,2009.0,10.1038/458021a,Nature,Goldston2009,Not available,,Nature,Not available,The climate to get things done,90b1d77e9f40952a4a8abdf02854e7ed,http://dx.doi.org/10.1038/458021a
17519,"This paper offers a paradigmatic analysis of digital application marketplaces for advancing information systems research on digital platforms and ecosystems. We refer to the notion of digital application marketplace, colloquially called ‘appstores,’ as a platform component that offers a venue for exchanging applications between developers and end users belonging to a single or multiple ecosystems. Such marketplaces exhibit diversity in features and assumptions, and we propose that examining this diversity, and its ideal types, will help us to further understand the relationship between application marketplaces, platforms, and platform ecosystems. To this end, we generate a typology that distinguishes four kinds of digital application marketplaces: closed, censored, focused, and open marketplaces. The paper also offers implications for actors wishing to make informed decisions about their relationship to a particular digital application marketplace.
",ahmad ghazawneh,,2015.0,10.1057/jit.2015.16,Journal of Information Technology,Ghazawneh2015,Not available,,Nature,Not available,A paradigmatic analysis of digital application marketplaces,a4175d64358763db715a5eb564c44da1,http://dx.doi.org/10.1057/jit.2015.16
17520,"This paper offers a paradigmatic analysis of digital application marketplaces for advancing information systems research on digital platforms and ecosystems. We refer to the notion of digital application marketplace, colloquially called ‘appstores,’ as a platform component that offers a venue for exchanging applications between developers and end users belonging to a single or multiple ecosystems. Such marketplaces exhibit diversity in features and assumptions, and we propose that examining this diversity, and its ideal types, will help us to further understand the relationship between application marketplaces, platforms, and platform ecosystems. To this end, we generate a typology that distinguishes four kinds of digital application marketplaces: closed, censored, focused, and open marketplaces. The paper also offers implications for actors wishing to make informed decisions about their relationship to a particular digital application marketplace.
",ola henfridsson,,2015.0,10.1057/jit.2015.16,Journal of Information Technology,Ghazawneh2015,Not available,,Nature,Not available,A paradigmatic analysis of digital application marketplaces,a4175d64358763db715a5eb564c44da1,http://dx.doi.org/10.1057/jit.2015.16
17521,"Generating sustainable business value from information services is challenging on the web where free information and zero-switching costs are the norm. This study examines the role of free comments given in a commercial information service through the lens of the expectation-confirmation theory and continuance. Data from a question and answer web site are analyzed by structural equations modeling to test the theoretical model whereby customer satisfaction is key to continuance and is predicted largely by social interaction that takes place on the site. The model is supported by the field data retrieved from the site. The data show that people came with equal expectations, received equal service, and continued to use the system if they were satisfied with it. Satisfaction was predicted by conversation. Free activity emerges as an integral part of the service in a fee-based information market, improving satisfaction and continuance, and thereby leading to measurable outcomes for the commercial owners of the site. The findings, based on unobtrusive field data rather than self-report questionnaires, extend expectation confirmation theory by adding a social dimension to it.
",raban ruth,,2011.0,10.1057/ejis.2011.42,European Journal of Information Systems,Ruth2011,Not available,,Nature,Not available,Conversation as a source of satisfaction and continuance in a question-and-answer site,6be99a00e259a31f0bc17c41ea12d684,http://dx.doi.org/10.1057/ejis.2011.42
17522,"This article introduces and opens discussion on some of the conditions and ambivalences encountered by the rising creative workforce in Shanghai, through engagement with theories of immaterial labour. Drawing from conversations with several Chinese creative workers, the text aims to provoke thought on the potential for political organisation and resistance within fractalised creative sectors mobilised by high levels of innovation, entrepreneurialism, competition and aspiration. By focusing on processes of subjectivation and desire, it calls for considerations of what might constitute political registers in the Shanghainese creative fields.
",anja kanngieser,,2012.0,10.1057/sub.2011.25,Subjectivity,Kanngieser2012,Not available,,Nature,Not available,"Creative labour in Shanghai: Questions on politics, composition and ambivalence",2602606c9724a864377162ed18e913d8,http://dx.doi.org/10.1057/sub.2011.25
17523,An annual survey of books about science for children,philip philip,,1976.0,10.1038/scientificamerican1276-134,Scientific American,Philip1976,Not available,,Nature,Not available,Books,2dc825cce538481274bd380e6b1b7118,http://dx.doi.org/10.1038/scientificamerican1276-134
17524,An annual survey of books about science for children,phylis morrison,,1976.0,10.1038/scientificamerican1276-134,Scientific American,Philip1976,Not available,,Nature,Not available,Books,2dc825cce538481274bd380e6b1b7118,http://dx.doi.org/10.1038/scientificamerican1276-134
17525,"In 1999, the automotive industry was in a difficult situation: overcapacity and customer demand for faster delivery and better service drove executives to explore the potential business value of the internet. The authors provide a teaching case, which is based on an analysis of the DCXNET initiative which bundled all e-business actions taken by DaimlerChrysler to exploit the opportunities of this then new technology. The teaching case describes the strategic planning process for e-business at DaimlerChrysler, resulting organizational structures and an outline of the components of DCXNET. Furthermore, the authors provide results of the initiative, success factors and lessons learned.
",arnd klein,,2005.0,10.1057/palgrave.jit.2000047,Journal of Information Technology,Klein2005,Not available,,Nature,Not available,DCXNET: e-transformation at DaimlerChrysler,76d5547e34d169736e4d9d6f8eacd137,http://dx.doi.org/10.1057/palgrave.jit.2000047
17526,"In 1999, the automotive industry was in a difficult situation: overcapacity and customer demand for faster delivery and better service drove executives to explore the potential business value of the internet. The authors provide a teaching case, which is based on an analysis of the DCXNET initiative which bundled all e-business actions taken by DaimlerChrysler to exploit the opportunities of this then new technology. The teaching case describes the strategic planning process for e-business at DaimlerChrysler, resulting organizational structures and an outline of the components of DCXNET. Furthermore, the authors provide results of the initiative, success factors and lessons learned.
",helmut krcmar,,2005.0,10.1057/palgrave.jit.2000047,Journal of Information Technology,Klein2005,Not available,,Nature,Not available,DCXNET: e-transformation at DaimlerChrysler,76d5547e34d169736e4d9d6f8eacd137,http://dx.doi.org/10.1057/palgrave.jit.2000047
17527,"This paper explores whether labour-management theory provides significant insights into the operation of the Yugoslav economy and into the process of transition in the Yugoslav successor states. It concludes that the literature offered only modest insights into the operation of the Yugoslav economy, primarily because Yugoslavia did not satisfy many of the basic assumptions of the model. The socialist features of the Yugoslav economy remained dominant, suppressing many of the elements of economic democracy. The most significant contributions of the labour-management literature were theoretical, concerning supply responses of worker-controlled firms in a decentralised source allocation mechanism and the incentive, organisational and efficiency aspects of labour-management.
",saul estrin,,2008.0,10.1057/ces.2008.41,Comparative Economic Studies,Estrin2008,Not available,,Nature,Not available,From Illyria towards Capitalism: Did Labour-Management Theory Teach Us Anything about Yugoslavia and Transition in Its Successor States?,bfbfcc0d3870280123148da83b179852,http://dx.doi.org/10.1057/ces.2008.41
17528,"This paper explores whether labour-management theory provides significant insights into the operation of the Yugoslav economy and into the process of transition in the Yugoslav successor states. It concludes that the literature offered only modest insights into the operation of the Yugoslav economy, primarily because Yugoslavia did not satisfy many of the basic assumptions of the model. The socialist features of the Yugoslav economy remained dominant, suppressing many of the elements of economic democracy. The most significant contributions of the labour-management literature were theoretical, concerning supply responses of worker-controlled firms in a decentralised source allocation mechanism and the incentive, organisational and efficiency aspects of labour-management.
",milica uvalic,,2008.0,10.1057/ces.2008.41,Comparative Economic Studies,Estrin2008,Not available,,Nature,Not available,From Illyria towards Capitalism: Did Labour-Management Theory Teach Us Anything about Yugoslavia and Transition in Its Successor States?,bfbfcc0d3870280123148da83b179852,http://dx.doi.org/10.1057/ces.2008.41
17529,"Many economists favor higher taxes on energy-related products such as gasoline, while the general public is more skeptical. This essay, based on a talk given at the March 2008 meeting of the Eastern Economic Association, discusses various aspects of this policy debate. It focuses, in particular, on the use of these taxes to correct for various externalities — an idea advocated long ago by British economist Arthur Pigou.
",n mankiw,,2009.0,10.1057/eej.2008.43,Eastern Economic Journal,Mankiw2009,Not available,,Nature,Not available,Smart Taxes: An Open Invitation to Join the Pigou Club,2c08277fbecdd789940025c91ce20c6c,http://dx.doi.org/10.1057/eej.2008.43
17530,"A survey of tax administration in developing countries from an economic perspective is warranted because tax administrators in developing countries in effect make tax policy by deciding how to apply tax legislation. Tax administration links legal statutes and the “real,” implemented tax system and thus affects fiscal deficits and the tax burdens of different sectors and income classes. A survey of analytical and empirical work concludes that administrative constraints can severely weaken the tax revenue structure with respect to stabilization, efficiency, and equity goals.
",charles mansfield,,1988.0,10.2307/3867282,Staff Papers - International Monetary Fund,Mansfield1988,Not available,,Nature,Not available,Tax Administration in Developing Countries: An Economic Perspective,e9d3c4929c5f790e34396a53c6192074,http://dx.doi.org/10.2307/3867282
17531,"The European Union has established itself as the leader of attempts to construct a global climate change regime. This has become an important normative stance, part of its self-image and international identity. Yet it has also come to depend on the Union's ability to negotiate internally on the distribution of the burdens necessitated by its external pledges to cut emissions. The paper considers institutionalist hypotheses on cooperative bargaining and normative entrapment in EU internal negotiations before the 1997 Kyoto Protocol negotiations and the more recent approach to negotiations on a post-2012 regime. It finds that there is evidence to support the normative entrapment hypothesis in both cases, but that agreement in 1997 was facilitated by a very favourable context associated with a 1990 baseline.
",john vogler,,2009.0,10.1057/ip.2009.9,International Politics,Vogler2009,Not available,,Nature,Not available,Climate change and EU foreign policy: The negotiation of burden sharing,4910da09f9590f522922ed094f44f0a0,http://dx.doi.org/10.1057/ip.2009.9
17532,"The presence of quotas on imported inputs that are based on installed capacity can lead to capacity underutilization in manufacturing industries of developing countries. A replacement of such quotas by tariffs leads to full-capacity utilization under both perfectly and imperfectly competitive markets. Furthermore, such a policy also eliminates strategic advantages for oligopolistic firms that arise in quota-based regimes.
",ratna sahay,,1990.0,10.2307/3867262,Staff Papers - International Monetary Fund,Sahay1990,Not available,,Nature,Not available,Trade Policy and Excess Capacity in Developing Countries,f4c82982628dae43021c5ce5c7e8f967,http://dx.doi.org/10.2307/3867262
17533,"These are exciting times: the worst economic crisis since the Great Depression, the first global recession in the new era of globalization, and a new President committed to restructuring national priorities, reforming our education, health, and energy sectors, eliminating some long standing distortions arising from corporate welfare, and restructuring our tax code. For economists who have fought long for many of these ideas, the President's budget was a moment of celebration.
",joseph stiglitz,,2009.0,10.1057/eej.2009.24,Eastern Economic Journal,Stiglitz2009,Not available,,Nature,Not available,The Current Economic Crisis and Lessons for Economic Theory,9886a885c04e792e906f5b3710e67a94,http://dx.doi.org/10.1057/eej.2009.24
17534,"This rejoinder restates and develops the central theses of ‘The Myth of 1648: Class, Geopolitics and the Making of Modern International Relations’ in relation to a set of objections raised from the perspective of IR Historical Sociology by Hendrik Spruyt, of Political and Social Theory by Roland Axtmann and of Political Geography by John Agnew. Most centrally, it re-affirms the charge of a defective historicisation and theorisation of ‘Westphalia’ in the discipline of International Relations, while suggesting that a Marxist perspective that emphasises the spatio-temporally differentiated and geopolitically mediated development of Europe is capable of providing a new long-term interpretive framework for the complex co-development of capitalism, state building and the interstate system. It thereby pleads for a paradigm-shift in IR Theory and IR Historical Sociology.
",benno teschke,,2006.0,10.1057/palgrave.ip.8800175,International Politics,Teschke2006,Not available,,Nature,Not available,"Debating ‘The Myth of 1648’: State Formation, the Interstate System and the Emergence of Capitalism in Europe — A Rejoinder",a77c672a6e0d7e94aa7f43e6a719049d,http://dx.doi.org/10.1057/palgrave.ip.8800175
17535,Tired of the constraints of space and time? Intelligently designed network products that understand the needs of individuals will set us free,nicholas negroponte,,1991.0,10.1038/scientificamerican0991-106,Scientific American,Negroponte1991,Not available,,Nature,Not available,Products and Services for Computer Networks,dcbc0320f1344b921b4d10198a7bb616,http://dx.doi.org/10.1038/scientificamerican0991-106
17536,"Research on brand naming has recently taken center stage in marketing literature. This study formulates a comprehensive classification of brand names that incorporates frameworks from existing literature and current naming methods used by practitioners. A content analysis of the top 500 global brand names based on manifest content, across 11 product categories, was conducted to understand the current brand-naming trends. The results confirm extensive use of the promoter’s name and place of origin (39.7 per cent of all brand names coded), compounding (34.1 per cent), abbreviations (18.2 per cent) and blending (7.9 per cent). Category-wise analysis indicates that certain categories, such as durables, follow the aggregate pattern of 61.5 per cent semantic word names, 53.0 per cent invented word names and 23.6 per cent non-word names. FMCG brands, on the other hand, show differing patterns because of disproportionately low abbreviations in the distribution. Further, χ² tests using equal expected frequencies of the three dimensions: semantic, invented and non-word names, showed that there appear to be significant differences in frequency between these dimensions. Practitioners may consider using these newly defined categories, such as semantically related acronyms, in creating distinctive brand names. This study also analyzes the use of sound symbolic names for brands.
",sunny arora,,2015.0,10.1057/bm.2015.8,Journal of Brand Management,Arora2015,Not available,,Nature,Not available,A comprehensive framework of brand name classification,9f9db2e264e8d2b3abae6b531e6be8fb,http://dx.doi.org/10.1057/bm.2015.8
17537,"This paper integrates a number of disparate research streams which collectively confront the notion that brands can be valued in any meaningful sense. This is not to say that brands are unimportant or that the development of brand management competencies is a waste of resource. On the contrary, in a complex and turbulent marketing environment, a powerful brand, built upon customer trust and repeat-purchase loyalty, should be the driving force of any company's strategic intent. In the sections that follow, the forces which conspire against this common sense advice are profiled and evaluated. The internationalisation of the world economy and associated developments in theories of competitive behaviour are examined. Areas which impinge strongly upon the brand valuation debate but which are typically precluded from it are revealed and discussed. Research directions to expand the parochial nature of the current debate are highlighted and it is concluded that behavioural management processes tend to confound rational attempts at brand valuation.
",colin egan,,1998.0,10.1057/bm.1998.10,Journal of Brand Management,Egan1998,Not available,,Nature,Not available,Chasing the Holy Grail: A critical appraisal of ‘the brand’ and the brand valuation debate,3d046ed82158306378fc6964c31f1f84,http://dx.doi.org/10.1057/bm.1998.10
17538,"Research on brand naming has recently taken center stage in marketing literature. This study formulates a comprehensive classification of brand names that incorporates frameworks from existing literature and current naming methods used by practitioners. A content analysis of the top 500 global brand names based on manifest content, across 11 product categories, was conducted to understand the current brand-naming trends. The results confirm extensive use of the promoter’s name and place of origin (39.7 per cent of all brand names coded), compounding (34.1 per cent), abbreviations (18.2 per cent) and blending (7.9 per cent). Category-wise analysis indicates that certain categories, such as durables, follow the aggregate pattern of 61.5 per cent semantic word names, 53.0 per cent invented word names and 23.6 per cent non-word names. FMCG brands, on the other hand, show differing patterns because of disproportionately low abbreviations in the distribution. Further, χ² tests using equal expected frequencies of the three dimensions: semantic, invented and non-word names, showed that there appear to be significant differences in frequency between these dimensions. Practitioners may consider using these newly defined categories, such as semantically related acronyms, in creating distinctive brand names. This study also analyzes the use of sound symbolic names for brands.
",arti kalro,,2015.0,10.1057/bm.2015.8,Journal of Brand Management,Arora2015,Not available,,Nature,Not available,A comprehensive framework of brand name classification,9f9db2e264e8d2b3abae6b531e6be8fb,http://dx.doi.org/10.1057/bm.2015.8
17539,"Research on brand naming has recently taken center stage in marketing literature. This study formulates a comprehensive classification of brand names that incorporates frameworks from existing literature and current naming methods used by practitioners. A content analysis of the top 500 global brand names based on manifest content, across 11 product categories, was conducted to understand the current brand-naming trends. The results confirm extensive use of the promoter’s name and place of origin (39.7 per cent of all brand names coded), compounding (34.1 per cent), abbreviations (18.2 per cent) and blending (7.9 per cent). Category-wise analysis indicates that certain categories, such as durables, follow the aggregate pattern of 61.5 per cent semantic word names, 53.0 per cent invented word names and 23.6 per cent non-word names. FMCG brands, on the other hand, show differing patterns because of disproportionately low abbreviations in the distribution. Further, χ² tests using equal expected frequencies of the three dimensions: semantic, invented and non-word names, showed that there appear to be significant differences in frequency between these dimensions. Practitioners may consider using these newly defined categories, such as semantically related acronyms, in creating distinctive brand names. This study also analyzes the use of sound symbolic names for brands.
",dinesh sharma,,2015.0,10.1057/bm.2015.8,Journal of Brand Management,Arora2015,Not available,,Nature,Not available,A comprehensive framework of brand name classification,9f9db2e264e8d2b3abae6b531e6be8fb,http://dx.doi.org/10.1057/bm.2015.8
17540,"This article presents a new revenue management (RM) concept targeted at businesses struggling to maintain profitability. Destination-centric RM expands the earlier concept of customer-centric RM (Venkat, 2007; Vinod, 2008) and proposes co-operation between businesses to maximize profitability. The concept adopts a destination as the focal point of RM, elevating the management of total customer revenue (Cross et al 2009; Milla and Shoemaker, 2008) to a more aggregate level. Practical application of the concept is demonstrated through existing practices in a ski resort. The model offers insight into practices with the potential to become the next step in the continuous evolution of the discipline.
",henri kuokkanen,,2013.0,10.1057/rpm.2013.2,Journal of Revenue and Pricing Management,Kuokkanen2013,Not available,,Nature,Not available,Improving profitability: A conceptual model of destination-centric revenue management,edda142ffef74ae1bf4eb78bd083b84b,http://dx.doi.org/10.1057/rpm.2013.2
17541,"Many stochastic dynamic sales applications are characterized by time-dependent price elasticities of demand. However, in general, such problems cannot be solved analytically. To determine smart pricing heuristics for general time-dependent dynamic pricing models, we solve a general class of deterministic dynamic pricing problems for perishable and durable goods. The continuous time model has several time-dependent parameters, for example, discount rate, marginal unit costs and price elasticity. We show how to derive the value function and optimal pricing policies. On the basis of the feedback solution to the deterministic model, we propose a method for constructing heuristics to be applied to general stochastic models. For the case of isoelastic demand, we analytically verify the excellent performance of this approach for both small and large inventory levels.
",rainer schlosser,,2015.0,10.1057/rpm.2015.3,Journal of Revenue and Pricing Management,Schlosser2015,Not available,,Nature,Not available,Dynamic pricing with time-dependent elasticities,4d5f643d0e175932fea2ec4a7ace1760,http://dx.doi.org/10.1057/rpm.2015.3
17542,"This article analyzes a dynamic pricing and advertising model for the sale of perishable products under constant absolute risk aversion. We consider a time-dependent version of Gallego and van Ryzin’s dynamic pricing model with exponential demand and include isoelastic advertising effects as well as marginal unit costs. We derive closed-form expressions of the optimal risk-averse pricing and advertising policies of the value function and of the certainty equivalent. The formulas provide insight into the (complex) interplay between risk-sensitive pricing and advertising decisions. Moreover, to evaluate the optimally controlled sales process over time we propose efficient simulation techniques. These are used to analyze the characteristics of different degrees of risk aversion, particularly the concentration of the profit distribution and the impact on the expected evolution of price and advertising rates.
",rainer schlosser,,2015.0,10.1057/rpm.2015.20,Journal of Revenue and Pricing Management,Schlosser2015,Not available,,Nature,Not available,A stochastic dynamic pricing and advertising model under risk aversion,89ab85fb80efa59bd97bfa87f963c33f,http://dx.doi.org/10.1057/rpm.2015.20
17543,"Since 1990, Singapore has sought to control motor vehicle ownership by means of an auction quota system, whereby prospective vehicle buyers need to obtain a quota license before they can make their purchase. This paper assesses the success of the vehicle quota system in meeting its objectives of stability in motor vehicle growth, flexibility in the motor vehicle mix, and equity among motor vehicle buyers. Two important implementation issues – quota subcategorization and license transferability – are highlighted, and policy lessons are drawn for the design of auction quotas in general.
",ling tan,,2003.0,10.2307/4149940,IMF Staff Papers,Tan2003,Not available,,Nature,Not available,Rationing Rules and Outcomes: The Experience of Singapore's Vehicle Quota System,e8b2b96e0f585dd3b2c15472e5655d4f,http://dx.doi.org/10.2307/4149940
17544,"In modern transportation systems, the potential for further decreasing the costs of fulfilling customer requests is severely limited while market competition is constantly reducing revenues. However, increased competitiveness through cost reductions can be achieved if freight carriers cooperate in order to balance their request portfolios. Participation in such coalitions can benefit the entire coalition, as well as each participant individually, thus reinforcing the market position of the partners. The work presented in this paper uniquely combines features of routing and scheduling problems and of cooperative game theory. In the first part, the profit margins resulting from horizontal cooperation among freight carriers are analysed. It is assumed that the structure of customer requests corresponds to that of a pickup and delivery problem with time windows for each freight carrier. In the second part, the possibilities of sharing these profit margins fairly among the partners are discussed. The Shapley value can be used to determine a fair allocation. Numerical results for real-life and artificial instances are presented.
",m krajewska,,2007.0,10.1057/palgrave.jors.2602489,Journal of the Operational Research Society,Krajewska2007,Not available,,Nature,Not available,Horizontal cooperation among freight carriers: request allocation and profit sharing,9750d766080d200d499ec20bcd0ee88e,http://dx.doi.org/10.1057/palgrave.jors.2602489
17545,"In modern transportation systems, the potential for further decreasing the costs of fulfilling customer requests is severely limited while market competition is constantly reducing revenues. However, increased competitiveness through cost reductions can be achieved if freight carriers cooperate in order to balance their request portfolios. Participation in such coalitions can benefit the entire coalition, as well as each participant individually, thus reinforcing the market position of the partners. The work presented in this paper uniquely combines features of routing and scheduling problems and of cooperative game theory. In the first part, the profit margins resulting from horizontal cooperation among freight carriers are analysed. It is assumed that the structure of customer requests corresponds to that of a pickup and delivery problem with time windows for each freight carrier. In the second part, the possibilities of sharing these profit margins fairly among the partners are discussed. The Shapley value can be used to determine a fair allocation. Numerical results for real-life and artificial instances are presented.
",h kopfer,,2007.0,10.1057/palgrave.jors.2602489,Journal of the Operational Research Society,Krajewska2007,Not available,,Nature,Not available,Horizontal cooperation among freight carriers: request allocation and profit sharing,9750d766080d200d499ec20bcd0ee88e,http://dx.doi.org/10.1057/palgrave.jors.2602489
17546,"In modern transportation systems, the potential for further decreasing the costs of fulfilling customer requests is severely limited while market competition is constantly reducing revenues. However, increased competitiveness through cost reductions can be achieved if freight carriers cooperate in order to balance their request portfolios. Participation in such coalitions can benefit the entire coalition, as well as each participant individually, thus reinforcing the market position of the partners. The work presented in this paper uniquely combines features of routing and scheduling problems and of cooperative game theory. In the first part, the profit margins resulting from horizontal cooperation among freight carriers are analysed. It is assumed that the structure of customer requests corresponds to that of a pickup and delivery problem with time windows for each freight carrier. In the second part, the possibilities of sharing these profit margins fairly among the partners are discussed. The Shapley value can be used to determine a fair allocation. Numerical results for real-life and artificial instances are presented.
",g laporte,,2007.0,10.1057/palgrave.jors.2602489,Journal of the Operational Research Society,Krajewska2007,Not available,,Nature,Not available,Horizontal cooperation among freight carriers: request allocation and profit sharing,9750d766080d200d499ec20bcd0ee88e,http://dx.doi.org/10.1057/palgrave.jors.2602489
17546,"In this article we propose a new complementary approach to investigate Inter-Organizational Information Systems (IOIS) adoption called configuration analysis. We motivate the need for a new approach by the common observation that the structure and the strategy of an IOIS are interdependent and that IOIS adoptions consequently cluster in orderly patterns. For example, an IOIS setup with a powerful customer as a hub and many suppliers as spokes frequently surfaces across diffusion studies. Yet, this fact has not been integrated into existing analyses, and its implications have not been fully developed. We propose that IOIS scholars need to look beyond the single adopting organization in IOIS adoption studies and instead consider larger adoption units, which we call adoption configurations. Each such configuration can be further characterized along the following dimensions: (1) vision, (2) key functionality, (3) mode of interaction, (4) structure and (5) mode of appropriation. In addition, these dimensions do not co-vary independently. For example, a particular organizing vision assumes a specific inter-organizational structure. A typology of IOIS configurations for adoption analysis is laid out consisting of dyadic, hub and spoke, industry and community configurations. Specific forms of adoption analysis are suggested for each type of configuration. Overall, configuration analysis redirects IOIS adoption studies both at the theoretical and the methodological level, and a corresponding research agenda is sketched.
",kalle lyytinen,,2011.0,10.1057/ejis.2010.71,European Journal of Information Systems,Lyytinen2011,Not available,,Nature,Not available,Inter-organizational information systems adoption – a configuration analysis approach,7029a38a92e6745ee0a7cd7bc0727a99,http://dx.doi.org/10.1057/ejis.2010.71
17548,"In modern transportation systems, the potential for further decreasing the costs of fulfilling customer requests is severely limited while market competition is constantly reducing revenues. However, increased competitiveness through cost reductions can be achieved if freight carriers cooperate in order to balance their request portfolios. Participation in such coalitions can benefit the entire coalition, as well as each participant individually, thus reinforcing the market position of the partners. The work presented in this paper uniquely combines features of routing and scheduling problems and of cooperative game theory. In the first part, the profit margins resulting from horizontal cooperation among freight carriers are analysed. It is assumed that the structure of customer requests corresponds to that of a pickup and delivery problem with time windows for each freight carrier. In the second part, the possibilities of sharing these profit margins fairly among the partners are discussed. The Shapley value can be used to determine a fair allocation. Numerical results for real-life and artificial instances are presented.
",s ropke,,2007.0,10.1057/palgrave.jors.2602489,Journal of the Operational Research Society,Krajewska2007,Not available,,Nature,Not available,Horizontal cooperation among freight carriers: request allocation and profit sharing,9750d766080d200d499ec20bcd0ee88e,http://dx.doi.org/10.1057/palgrave.jors.2602489
17549,"In modern transportation systems, the potential for further decreasing the costs of fulfilling customer requests is severely limited while market competition is constantly reducing revenues. However, increased competitiveness through cost reductions can be achieved if freight carriers cooperate in order to balance their request portfolios. Participation in such coalitions can benefit the entire coalition, as well as each participant individually, thus reinforcing the market position of the partners. The work presented in this paper uniquely combines features of routing and scheduling problems and of cooperative game theory. In the first part, the profit margins resulting from horizontal cooperation among freight carriers are analysed. It is assumed that the structure of customer requests corresponds to that of a pickup and delivery problem with time windows for each freight carrier. In the second part, the possibilities of sharing these profit margins fairly among the partners are discussed. The Shapley value can be used to determine a fair allocation. Numerical results for real-life and artificial instances are presented.
",g zaccour,,2007.0,10.1057/palgrave.jors.2602489,Journal of the Operational Research Society,Krajewska2007,Not available,,Nature,Not available,Horizontal cooperation among freight carriers: request allocation and profit sharing,9750d766080d200d499ec20bcd0ee88e,http://dx.doi.org/10.1057/palgrave.jors.2602489
17550,"This article investigates the optimal strategy for potential investments that may be adopted by port authorities to attract more carriers. A mathematical model is formulated to represent the main criteria used by carriers to evaluate a specific port. Then, a game theory approach is used to model the competition between several port authorities to attract carriers via maximizing their utility functions. The game type suggested is Sealed-Bid with one round. The results indicate that the optimal investment strategy used by a specific port is dependent on (i) the port’s current state with respect to other competing ports; (ii) resource availability in terms of manpower and funds; (iii) expected profitability and (iv) other players’ reaction toward investment. This study advises the port authority on maximizing the payoff in case of winning the bid by selectively investing in the port and minimizing the potential loss in case of losing the bid by refraining from performing any investment.
Maritime Economics & Logistics advance online publication, 21 May 2015; doi:10.1057/mel.2015.7",isam kaysi,,2015.0,10.1057/mel.2015.7,Maritime Economics & Logistics,Kaysi2015,Not available,,Nature,Not available,Optimal investment strategy in a container terminal: A game theoretic approach,b27929273f4d108033a9ee6d8b253512,http://dx.doi.org/10.1057/mel.2015.7
17551,"This article investigates the optimal strategy for potential investments that may be adopted by port authorities to attract more carriers. A mathematical model is formulated to represent the main criteria used by carriers to evaluate a specific port. Then, a game theory approach is used to model the competition between several port authorities to attract carriers via maximizing their utility functions. The game type suggested is Sealed-Bid with one round. The results indicate that the optimal investment strategy used by a specific port is dependent on (i) the port’s current state with respect to other competing ports; (ii) resource availability in terms of manpower and funds; (iii) expected profitability and (iv) other players’ reaction toward investment. This study advises the port authority on maximizing the payoff in case of winning the bid by selectively investing in the port and minimizing the potential loss in case of losing the bid by refraining from performing any investment.
Maritime Economics & Logistics advance online publication, 21 May 2015; doi:10.1057/mel.2015.7",nabil nehme,,2015.0,10.1057/mel.2015.7,Maritime Economics & Logistics,Kaysi2015,Not available,,Nature,Not available,Optimal investment strategy in a container terminal: A game theoretic approach,b27929273f4d108033a9ee6d8b253512,http://dx.doi.org/10.1057/mel.2015.7
17552,"CONTENTS****
Page
1. The Subject and its Interdisciplinary Background 13-19
1.1. The Growing Acceptance of Uncertainty 15
1.2. Statistical and Epistemological Uncertainty 17
1.3. Scope and Content of this Survey 19
2. Uncertainty in Aggregation 20-33
2.1. Cross-Sectional Aggregates 21
2.2. Intertemporal Aggregates 24
2.3. Organizational Aggregates 30
2.4. Value Dependence and the Several Dimensions of Aggregation 32
3. Uncertainty in Estimation 34-45
3.1. Multiple Equilibria, Disequilibrium, and Switching Regimes 35
3.2. Updating Structural Knowledge during Transitions 38
3.3. Learning Behavior in Stochastic Macroeconomics 40
3.4. Latent Variables Models with Extended Learning 43
4. Uncertainty in Policy Formation 45-60
4.1. Measures of Variability for Modelling Economic Behavior and Control 46
4.2. Policy Surprises and Other Macroeconomic Disturbances 51
4.3. Indexation as a Way of Absorbing Uncertainty 55
4.4. Summary 59
5. Closing Observations on Microfoundations and Information 60-66
5.1. Substantive and Nonsubstantive Micro-Macro Relations under Uncertainty 61
5.2. Uncertainty of Inference and Prerequisites for Processing Information 64
References 66
**** Readers whose interests are primarily multidisciplinary may wish to concentrate on Sections 1 and 5, skimming only the first few pages introducing Sections 2, 3, and 4 and the last section (4.4) of 4.
12
""Indeed the historian of the modern world is tempted to reach the depressing conclusion that progress is destructive of certitude.""
(Paul Johnson 1983, p. 697)
1. The Subject and its Interdisciplinary Background
This survey seeks to show what account is taken of uncertainty in contemporary macroeconomics. Before outlining its scope and content at the end of this section, it may be useful to provide some historical background on how the treatment of uncertainty, and the contexts in which it has been dealt with, have changed not only in economics but also in other disciplines. For it is difficult to gain perspective on the treatment of uncertainty in one discipline without comparing this with another.
",george furstenberg,,1988.0,10.1057/gpp.1988.2,The Geneva Papers on Risk and Insurance,Furstenberg1988,Not available,,Nature,Not available,Owning Up to Uncertainty in Macroeconomics*,3bdda92b885dff5b489027bea19aa5fb,http://dx.doi.org/10.1057/gpp.1988.2
17553,"CONTENTS****
Page
1. The Subject and its Interdisciplinary Background 13-19
1.1. The Growing Acceptance of Uncertainty 15
1.2. Statistical and Epistemological Uncertainty 17
1.3. Scope and Content of this Survey 19
2. Uncertainty in Aggregation 20-33
2.1. Cross-Sectional Aggregates 21
2.2. Intertemporal Aggregates 24
2.3. Organizational Aggregates 30
2.4. Value Dependence and the Several Dimensions of Aggregation 32
3. Uncertainty in Estimation 34-45
3.1. Multiple Equilibria, Disequilibrium, and Switching Regimes 35
3.2. Updating Structural Knowledge during Transitions 38
3.3. Learning Behavior in Stochastic Macroeconomics 40
3.4. Latent Variables Models with Extended Learning 43
4. Uncertainty in Policy Formation 45-60
4.1. Measures of Variability for Modelling Economic Behavior and Control 46
4.2. Policy Surprises and Other Macroeconomic Disturbances 51
4.3. Indexation as a Way of Absorbing Uncertainty 55
4.4. Summary 59
5. Closing Observations on Microfoundations and Information 60-66
5.1. Substantive and Nonsubstantive Micro-Macro Relations under Uncertainty 61
5.2. Uncertainty of Inference and Prerequisites for Processing Information 64
References 66
**** Readers whose interests are primarily multidisciplinary may wish to concentrate on Sections 1 and 5, skimming only the first few pages introducing Sections 2, 3, and 4 and the last section (4.4) of 4.
12
""Indeed the historian of the modern world is tempted to reach the depressing conclusion that progress is destructive of certitude.""
(Paul Johnson 1983, p. 697)
1. The Subject and its Interdisciplinary Background
This survey seeks to show what account is taken of uncertainty in contemporary macroeconomics. Before outlining its scope and content at the end of this section, it may be useful to provide some historical background on how the treatment of uncertainty, and the contexts in which it has been dealt with, have changed not only in economics but also in other disciplines. For it is difficult to gain perspective on the treatment of uncertainty in one discipline without comparing this with another.
",jin-ho jeong,,1988.0,10.1057/gpp.1988.2,The Geneva Papers on Risk and Insurance,Furstenberg1988,Not available,,Nature,Not available,Owning Up to Uncertainty in Macroeconomics*,3bdda92b885dff5b489027bea19aa5fb,http://dx.doi.org/10.1057/gpp.1988.2
17554,Paid search is an important form of online advertisement. Clickthroughs from slots are bid for by advertisers. The process of formulating bids is a complex one involving bidders in competing against other advertisers in multiple auctions. It would be helpful in managing the bidding process if it were possible to determine the values placed on a clickthrough by different advertisers. The theory of two models for estimating advertiser values and associated parameters is presented. The models are applied to a set of data for searches on the term Personal Loans. The results of the model that fits the data better are evaluated. The utility of the model to practitioners is discussed. Some issues raised by the results about the role of bidding agents and the discriminatory power of Customer Relationship Management systems are considered. Ways to develop the preferred model are outlined. It is suggested that the model has implications for evaluating forecasting methods for use in paid search auctions.
,d laffey,,2008.0,10.1057/palgrave.jors.2602570,Journal of the Operational Research Society,Laffey2008,Not available,,Nature,Not available,Estimating advertisers' values for paid search clickthroughs,8e38778c699c9706633d107295c46414,http://dx.doi.org/10.1057/palgrave.jors.2602570
17555,Paid search is an important form of online advertisement. Clickthroughs from slots are bid for by advertisers. The process of formulating bids is a complex one involving bidders in competing against other advertisers in multiple auctions. It would be helpful in managing the bidding process if it were possible to determine the values placed on a clickthrough by different advertisers. The theory of two models for estimating advertiser values and associated parameters is presented. The models are applied to a set of data for searches on the term Personal Loans. The results of the model that fits the data better are evaluated. The utility of the model to practitioners is discussed. Some issues raised by the results about the role of bidding agents and the discriminatory power of Customer Relationship Management systems are considered. Ways to develop the preferred model are outlined. It is suggested that the model has implications for evaluating forecasting methods for use in paid search auctions.
,c hunka,,2008.0,10.1057/palgrave.jors.2602570,Journal of the Operational Research Society,Laffey2008,Not available,,Nature,Not available,Estimating advertisers' values for paid search clickthroughs,8e38778c699c9706633d107295c46414,http://dx.doi.org/10.1057/palgrave.jors.2602570
17556,Paid search is an important form of online advertisement. Clickthroughs from slots are bid for by advertisers. The process of formulating bids is a complex one involving bidders in competing against other advertisers in multiple auctions. It would be helpful in managing the bidding process if it were possible to determine the values placed on a clickthrough by different advertisers. The theory of two models for estimating advertiser values and associated parameters is presented. The models are applied to a set of data for searches on the term Personal Loans. The results of the model that fits the data better are evaluated. The utility of the model to practitioners is discussed. Some issues raised by the results about the role of bidding agents and the discriminatory power of Customer Relationship Management systems are considered. Ways to develop the preferred model are outlined. It is suggested that the model has implications for evaluating forecasting methods for use in paid search auctions.
,j sharp,,2008.0,10.1057/palgrave.jors.2602570,Journal of the Operational Research Society,Laffey2008,Not available,,Nature,Not available,Estimating advertisers' values for paid search clickthroughs,8e38778c699c9706633d107295c46414,http://dx.doi.org/10.1057/palgrave.jors.2602570
17557,Paid search is an important form of online advertisement. Clickthroughs from slots are bid for by advertisers. The process of formulating bids is a complex one involving bidders in competing against other advertisers in multiple auctions. It would be helpful in managing the bidding process if it were possible to determine the values placed on a clickthrough by different advertisers. The theory of two models for estimating advertiser values and associated parameters is presented. The models are applied to a set of data for searches on the term Personal Loans. The results of the model that fits the data better are evaluated. The utility of the model to practitioners is discussed. Some issues raised by the results about the role of bidding agents and the discriminatory power of Customer Relationship Management systems are considered. Ways to develop the preferred model are outlined. It is suggested that the model has implications for evaluating forecasting methods for use in paid search auctions.
,z zeng,,2008.0,10.1057/palgrave.jors.2602570,Journal of the Operational Research Society,Laffey2008,Not available,,Nature,Not available,Estimating advertisers' values for paid search clickthroughs,8e38778c699c9706633d107295c46414,http://dx.doi.org/10.1057/palgrave.jors.2602570
17558,"In this article we propose a new complementary approach to investigate Inter-Organizational Information Systems (IOIS) adoption called configuration analysis. We motivate the need for a new approach by the common observation that the structure and the strategy of an IOIS are interdependent and that IOIS adoptions consequently cluster in orderly patterns. For example, an IOIS setup with a powerful customer as a hub and many suppliers as spokes frequently surfaces across diffusion studies. Yet, this fact has not been integrated into existing analyses, and its implications have not been fully developed. We propose that IOIS scholars need to look beyond the single adopting organization in IOIS adoption studies and instead consider larger adoption units, which we call adoption configurations. Each such configuration can be further characterized along the following dimensions: (1) vision, (2) key functionality, (3) mode of interaction, (4) structure and (5) mode of appropriation. In addition, these dimensions do not co-vary independently. For example, a particular organizing vision assumes a specific inter-organizational structure. A typology of IOIS configurations for adoption analysis is laid out consisting of dyadic, hub and spoke, industry and community configurations. Specific forms of adoption analysis are suggested for each type of configuration. Overall, configuration analysis redirects IOIS adoption studies both at the theoretical and the methodological level, and a corresponding research agenda is sketched.
",jan damsgaard,,2011.0,10.1057/ejis.2010.71,European Journal of Information Systems,Lyytinen2011,Not available,,Nature,Not available,Inter-organizational information systems adoption – a configuration analysis approach,7029a38a92e6745ee0a7cd7bc0727a99,http://dx.doi.org/10.1057/ejis.2010.71
17559,"Games can be easy to construct but difficult to solve due to current methods available for finding the Nash Equilibrium. This issue is one of many that face modern game theorists and those analysts that need to model situations with multiple decision-makers. This paper explores the use of reinforcement learning, a standard artificial intelligence technique, as a means to solve a simple dynamic airline pricing game. Three different reinforcement learning approaches are compared: SARSA, Q-learning and Monte Carlo Learning. The pricing game solution is surprisingly sophisticated given the game's simplicity and this sophistication is reflected in the learning results. The paper also discusses extra analytical benefit obtained from applying reinforcement learning to these types of problems.
",a collins,,2011.0,10.1057/jors.2011.94,Journal of the Operational Research Society,Collins2011,Not available,,Nature,Not available,Comparing reinforcement learning approaches for solving game theoretic models: a dynamic airline pricing game example,7d831309ba4e5cb5bbde9efd280f43d1,http://dx.doi.org/10.1057/jors.2011.94
17560,"Games can be easy to construct but difficult to solve due to current methods available for finding the Nash Equilibrium. This issue is one of many that face modern game theorists and those analysts that need to model situations with multiple decision-makers. This paper explores the use of reinforcement learning, a standard artificial intelligence technique, as a means to solve a simple dynamic airline pricing game. Three different reinforcement learning approaches are compared: SARSA, Q-learning and Monte Carlo Learning. The pricing game solution is surprisingly sophisticated given the game's simplicity and this sophistication is reflected in the learning results. The paper also discusses extra analytical benefit obtained from applying reinforcement learning to these types of problems.
",l thomas,,2011.0,10.1057/jors.2011.94,Journal of the Operational Research Society,Collins2011,Not available,,Nature,Not available,Comparing reinforcement learning approaches for solving game theoretic models: a dynamic airline pricing game example,7d831309ba4e5cb5bbde9efd280f43d1,http://dx.doi.org/10.1057/jors.2011.94
17561,"In 1964, Bell discovered that quantum mechanics is a nonlocal theory. Three years later, in a seemingly unconnected development, Harsanyi introduced the concept of Bayesian games. Here we show that, in fact, there is a deep connection between Bell nonlocality and Bayesian games, and that the same concepts appear in both fields. This link offers interesting possibilities for Bayesian games, namely of allowing the players to receive advice in the form of nonlocal correlations, for instance using entangled quantum particles or more general no-signalling boxes. This will lead to novel joint strategies, impossible to achieve classically. We characterize games for which nonlocal resources offer a genuine advantage over classical ones. Moreover, some of these strategies represent equilibrium points, leading to the notion of quantum/no-signalling Nash equilibrium. Finally, we describe new types of question in the study of nonlocality, namely the consideration of nonlocal advantage given a set of Bell expressions.
",nicolas brunner,Theoretical physics,2013.0,10.1038/ncomms3057,Nature Communications,Brunner2013,Not available,,Nature,Not available,Connection between Bell nonlocality and Bayesian game theory,5a8b6d2e9f24ce1a0b51bef48c4c8edb,http://dx.doi.org/10.1038/ncomms3057
17562,"In 1964, Bell discovered that quantum mechanics is a nonlocal theory. Three years later, in a seemingly unconnected development, Harsanyi introduced the concept of Bayesian games. Here we show that, in fact, there is a deep connection between Bell nonlocality and Bayesian games, and that the same concepts appear in both fields. This link offers interesting possibilities for Bayesian games, namely of allowing the players to receive advice in the form of nonlocal correlations, for instance using entangled quantum particles or more general no-signalling boxes. This will lead to novel joint strategies, impossible to achieve classically. We characterize games for which nonlocal resources offer a genuine advantage over classical ones. Moreover, some of these strategies represent equilibrium points, leading to the notion of quantum/no-signalling Nash equilibrium. Finally, we describe new types of question in the study of nonlocality, namely the consideration of nonlocal advantage given a set of Bell expressions.
",noah linden,Theoretical physics,2013.0,10.1038/ncomms3057,Nature Communications,Brunner2013,Not available,,Nature,Not available,Connection between Bell nonlocality and Bayesian game theory,5a8b6d2e9f24ce1a0b51bef48c4c8edb,http://dx.doi.org/10.1038/ncomms3057
17563,"As the title of the book indicates, this is a nontechnical introduction to game theory. It is one of many textbooks on game theory; its niche appears to be students with little background in Mathematics or Economics.
",myong-hun chang,,2013.0,10.1057/eej.2012.26,Eastern Economic Journal,Chang2013,Not available,,Nature,Not available,"Game Theory: A Nontechnical Introduction to the Analysis of Strategy, by Roger A. McCain",9ec0c6768016903067c3983926e4f4ef,http://dx.doi.org/10.1057/eej.2012.26
17564,"A Nash equilibrium is a highly desirable situation in game theory, which earned its discoverer a Nobel prize. Such a fundamental result might seem hard to improve on, but new work has multiplied the situations in which Nash equilibria can apply.",ivar ekeland,,1999.0,10.1038/23154,Nature,Ekeland1999,Not available,,Nature,Not available,Game theory: Agreeing on strategies,59fc8f18bbca30190501d36988f3d53d,http://dx.doi.org/10.1038/23154
17565,"There has been a gigantic shift from a product based economy to one based on services, specifically digital services. From every indication it is likely to be more than a passing fad and the changes these emerging digital services represent will continue to transform commerce and have yet to reach market saturation. Digital services are being designed for and offered to users, yet very little is known about the design process that goes behind these developments. Is there a science behind designing digital services? By examining 12 leading digital services, we have developed a design taxonomy to be able to classify and contrast digital services. What emerged in the taxonomy were two broad dimensions; a set of fundamental design objectives and a set of fundamental service provider objectives. This paper concludes with an application of the proposed taxonomy to three leading digital services. We hope that the proposed taxonomy will be useful in understanding the science behind the design of digital services.
",kevin williams,,2008.0,10.1057/ejis.2008.38,European Journal of Information Systems,Williams2008,Not available,,Nature,Not available,Design of emerging digital services: a taxonomy,b7dd139687cdb644d973f9a65fbaaa85,http://dx.doi.org/10.1057/ejis.2008.38
17566,"There has been a gigantic shift from a product based economy to one based on services, specifically digital services. From every indication it is likely to be more than a passing fad and the changes these emerging digital services represent will continue to transform commerce and have yet to reach market saturation. Digital services are being designed for and offered to users, yet very little is known about the design process that goes behind these developments. Is there a science behind designing digital services? By examining 12 leading digital services, we have developed a design taxonomy to be able to classify and contrast digital services. What emerged in the taxonomy were two broad dimensions; a set of fundamental design objectives and a set of fundamental service provider objectives. This paper concludes with an application of the proposed taxonomy to three leading digital services. We hope that the proposed taxonomy will be useful in understanding the science behind the design of digital services.
",samir chatterjee,,2008.0,10.1057/ejis.2008.38,European Journal of Information Systems,Williams2008,Not available,,Nature,Not available,Design of emerging digital services: a taxonomy,b7dd139687cdb644d973f9a65fbaaa85,http://dx.doi.org/10.1057/ejis.2008.38
17567,"There has been a gigantic shift from a product based economy to one based on services, specifically digital services. From every indication it is likely to be more than a passing fad and the changes these emerging digital services represent will continue to transform commerce and have yet to reach market saturation. Digital services are being designed for and offered to users, yet very little is known about the design process that goes behind these developments. Is there a science behind designing digital services? By examining 12 leading digital services, we have developed a design taxonomy to be able to classify and contrast digital services. What emerged in the taxonomy were two broad dimensions; a set of fundamental design objectives and a set of fundamental service provider objectives. This paper concludes with an application of the proposed taxonomy to three leading digital services. We hope that the proposed taxonomy will be useful in understanding the science behind the design of digital services.
",matti rossi,,2008.0,10.1057/ejis.2008.38,European Journal of Information Systems,Williams2008,Not available,,Nature,Not available,Design of emerging digital services: a taxonomy,b7dd139687cdb644d973f9a65fbaaa85,http://dx.doi.org/10.1057/ejis.2008.38
17568,"Using panel data, we find evidence for significant direct and indirect effects of trade on carbon emissions. Trade tends to increase the emissions burden, especially in those less industrialized countries. We also find evidence for a U-shaped relationship between income per capita and carbon emissions. We discuss the effectiveness of emission reduction policies in light of these results and advance some proposals to reconcile environmental and trade policymaking.
A partir de données de panel, nous constatons que le commerce a des effets directs et indirects significatifs sur les émissions de carbone. Le commerce tend à accroître la charge des émissions, particulièrement dans les pays moins industrialisés. Les données mettent également en évidence une relation en forme de U entre le revenu par habitant et les émissions de carbone. Au vu de ces résultats, nous évaluons l’efficacité des politiques de réduction des émissions et formulons des propositions pour concilier l’élaboration des politiques commerciales et celle des politiques environnementales
",richard kozul-wright,,2012.0,10.1057/ejdr.2012.15,European Journal of Development Research,Kozul-Wright2012,Not available,,Nature,Not available,International Trade and Carbon Emissions,760c5817d327e1f871907ebac691bd05,http://dx.doi.org/10.1057/ejdr.2012.15
17570,"Using panel data, we find evidence for significant direct and indirect effects of trade on carbon emissions. Trade tends to increase the emissions burden, especially in those less industrialized countries. We also find evidence for a U-shaped relationship between income per capita and carbon emissions. We discuss the effectiveness of emission reduction policies in light of these results and advance some proposals to reconcile environmental and trade policymaking.
",piergiuseppe fortunato,,2012.0,10.1057/ejdr.2012.15,European Journal of Development Research,Kozul-Wright2012,Not available,,Nature,Not available,International Trade and Carbon Emissions,760c5817d327e1f871907ebac691bd05,http://dx.doi.org/10.1057/ejdr.2012.15
17571,"Citigroup has served as the poster child for the elusive promises and manifold pitfalls of universal banking. When Citicorp merged with Travelers to form Citigroup in 1998, Citigroup’s leaders and supporters asserted that the new financial conglomerate would offer unparalleled convenience to its customers through ‘one-stop shopping’ for banking, securities and insurance services. They also claimed that Citigroup would have a superior ability to withstand financial shocks due to its broadly diversified activities. By 2009, those bold predictions of Citigroup’s success had turned to ashes. Citigroup pursued a high-risk, high-growth strategy during the 2000s that proved to be disastrous. As a result, the bank recorded more than US$130 billion of losses on its loans and investments from 2007 to 2009. To prevent Citigroup’s failure, the federal government provided $45 billion of new capital to the bank and gave the bank $500 billion of additional help in the form of asset guarantees, debt guarantees and emergency loans. The federal government provided more financial assistance to Citigroup than to any other bank during the financial crisis. During its early years, Citigroup was embroiled in a series of high-profile scandals, including tainted transactions with Enron and WorldCom, biased research advice, corrupt allocations of shares in initial public offerings, predatory subprime lending, and market manipulation in foreign bond markets. Notwithstanding a widely publicized plan to improve corporate risk controls in 2005, Citigroup continued to pursue higher profits through a wide range of speculative activities, including leveraged corporate lending, packaging toxic subprime loans into mortgage-backed securities and collateralized debt obligations, and dumping risky assets into off-balance-sheet conduits for which Citigroup had contractual and reputational exposures. 
Post-mortem evaluations of Citigroup’s near-collapse revealed that neither Citigroup’s managers nor its regulators recognized the systemic risks embedded in the bank’s far-flung operations. Thus, Citigroup was not only too big to fail but also too large and too complex to manage or regulate effectively. Citigroup’s history raises deeply troubling questions about the ability of bank executives and regulators to supervise and control today’s megabanks. Citigroup’s original creators – John Reed of Citicorp and Sandy Weill of Travelers – admitted in recent years that Citigroup’s universal banking model failed, and they called on Congress to reinstate the Glass-Steagall Act’s separation between commercial and investment banks. As Reed and Weill acknowledged, the universal banking model is deeply flawed by its excessive organizational complexity, its vulnerability to culture clashes and conflicts of interest, and its tendency to permit excessive risk-taking within far-flung, semi-autonomous units that lack adequate oversight from either senior managers or regulators.
",arthur jr,,2014.0,10.1057/jbr.2014.16,Journal of Banking Regulation,Jr2014,Not available,,Nature,Not available,Citigroup’s unfortunate history of managerial and regulatory failures,c34a6b0be81e283ac476796ee5f6d7a0,http://dx.doi.org/10.1057/jbr.2014.16
17572,"Nigeria's response to its 2009 banking crisis, which indicated exogenous and endogenous local and global risk factors for non-performing loans (NPLs), included the apparent orthodoxy of establishing an asset management company (AMCON). This article examines the justifications for and effectiveness of AMCON as a mechanism for resolving NPLs in a developing economy. Having compared Nigeria and Korea, the article argues that relief, restructuring, rehabilitation, recovery, resuscitation, responsibility, restitution and reoccurrence prevention, expressed as RE=7 × Re−Re, are critical goals. Doubting a one-size-fits-all model for resolving systemic banking crises, this article suggests a contextual approach that considers asset characteristics, operative legal and regulatory environment, and market capacity.
",onyeka osuji,,2012.0,10.1057/jbr.2011.28,Journal of Banking Regulation,Osuji2012,Not available,,Nature,Not available,"Asset management companies, non-performing loans and systemic crisis: A developing country perspective",4fb256f1f02f3f80cdd18e658fa6986f,http://dx.doi.org/10.1057/jbr.2011.28
17573,"This article examines the ways in which the increasingly market-based higher education (HE) landscape of UK HE is shaping students’ attitudes and responses towards their HE. Contemporary HE policy has framed HE as a private good that generates largely private benefits. There has also been a concern that these changes will distort institutional relations and the traditional value of participating in HE, reinforcing the growing commodification of UK HE. On the basis of a qualitative study with students in a range of higher education institutions from the four UK countries, it outlines the main impacts recent policy is having on students’ attitudes and relationship to HE. Dominant market-driven discourses around investment, consumerism, employability and competition indicate widespread concerns among contemporary HE students about operating in higher-stakes markets, which are intensified by increased personal financial contribution towards HE. While the data reveal an identification with the student as ‘consumer’ and stringent expectations over what HE provides, it also points to an ethic of self-responsibility that is built on highly individualised discourses of personal application, proactivity and experience optimisation. Goal-driven and instrumental learning are evident, which relate to widespread concerns about future returns and the private good value of HE.
",michael tomlinson,,2015.0,10.1057/hep.2015.17,Higher Education Policy,Tomlinson2015,Not available,,Nature,Not available,"The Impact of Market-Driven Higher Education on Student-University Relations: Investing, Consuming and Competing",39d873433a4d7118174682353939e37a,http://dx.doi.org/10.1057/hep.2015.17
17574,"Do acquisitions lead to instrumental innovations related to the acquired knowledge? Past arguments on vertical integration espouse how a quest for knowledge drives acquisitions culminating in innovation performance. Using Google and Yahoo as cases-in-point, we examine how facets of acquired innovation knowledge impact post-innovation performance. In particular, the apparently opposing fortunes of Google and Yahoo allow us to investigate the pace of their innovation performance as a hazards model. Results from our investigation highlight Google’s ambidexterity over Yahoo with a swifter, systematic pace of innovation performance – from hastening time to patenting new ideas to the time to releasing new applications from acquisitions.
",pratim datta,,2014.0,10.1057/ejis.2014.32,European Journal of Information Systems,Datta2014,Not available,,Nature,Not available,Knowledge-acquisitions and post-acquisition innovation performance: a comparative hazards model,66df6880c810ebef0b6f750a70fd9a67,http://dx.doi.org/10.1057/ejis.2014.32
17575,"Do acquisitions lead to instrumental innovations related to the acquired knowledge? Past arguments on vertical integration espouse how a quest for knowledge drives acquisitions culminating in innovation performance. Using Google and Yahoo as cases-in-point, we examine how facets of acquired innovation knowledge impact post-innovation performance. In particular, the apparently opposing fortunes of Google and Yahoo allow us to investigate the pace of their innovation performance as a hazards model. Results from our investigation highlight Google’s ambidexterity over Yahoo with a swifter, systematic pace of innovation performance – from hastening time to patenting new ideas to the time to releasing new applications from acquisitions.
",yaman roumani,,2014.0,10.1057/ejis.2014.32,European Journal of Information Systems,Datta2014,Not available,,Nature,Not available,Knowledge-acquisitions and post-acquisition innovation performance: a comparative hazards model,66df6880c810ebef0b6f750a70fd9a67,http://dx.doi.org/10.1057/ejis.2014.32
17576,"Two characteristics of e-commerce, the ability to micromarket (ie, customising a marketing plan according to customers’ purchasing patterns) and the ability to selectively offer item availability information (ie, manipulating whether or not to display the total number of items available to customers), considerably increase firms’ potential to improve their performance. This paper considers e-business in the hotel and airline industries, which has two customer segments: one is the leisure segment, which focuses more on price, and the other group is the business segment, which focuses heavily on schedule. We propose an analytical model that determines the optimal pricing and demonstrates that e-business can improve its revenue by taking into account customer segmentation when offering item availability information to customers. We provide numerical examples that demonstrate that accuracy in segmenting customers and the size of each segment will influence the performance of customised marketing planning. We also present managerial implications derived from these analytical findings.
",hisashi kurata,,2007.0,10.1057/palgrave.rpm.5160054,Journal of Revenue and Pricing Management,Kurata2007,Not available,,Nature,Not available,How customisation of pricing and item availability information can improve e-commerce performance,1e47ede11f64f57e807966a7116a6ebf,http://dx.doi.org/10.1057/palgrave.rpm.5160054
17577,"Two characteristics of e-commerce, the ability to micromarket (ie, customising a marketing plan according to customers’ purchasing patterns) and the ability to selectively offer item availability information (ie, manipulating whether or not to display the total number of items available to customers), considerably increase firms’ potential to improve their performance. This paper considers e-business in the hotel and airline industries, which has two customer segments: one is the leisure segment, which focuses more on price, and the other group is the business segment, which focuses heavily on schedule. We propose an analytical model that determines the optimal pricing and demonstrates that e-business can improve its revenue by taking into account customer segmentation when offering item availability information to customers. We provide numerical examples that demonstrate that accuracy in segmenting customers and the size of each segment will influence the performance of customised marketing planning. We also present managerial implications derived from these analytical findings.
",carolyn bonifield,,2007.0,10.1057/palgrave.rpm.5160054,Journal of Revenue and Pricing Management,Kurata2007,Not available,,Nature,Not available,How customisation of pricing and item availability information can improve e-commerce performance,1e47ede11f64f57e807966a7116a6ebf,http://dx.doi.org/10.1057/palgrave.rpm.5160054
17578,"This article presents comparative analysis across time of the European Union (EU) and India in energy affairs. Yet, while there have been a number of signed agreements and completed negotiations, the actual achievements of the dialogue in energy matters are still questionable. This article focuses on how two leading Indian newspapers (The Times of India and The Economic Times) represented India’s dialogue with the EU. The three frames of sustainability, competitiveness and security of supply were traced in three peak time periods: (i) the Joint Declaration of Enhanced Energy Cooperation February 2012, (ii) the Doha Climate Change Conference in 2012 and (iii) the Warsaw Climate Change Conference in 2013. The analysis is guided by two questions: (i) whether the EU is recognised as a ‘normative’ power in the Indian influential press and (ii) whether Indian perceptions of the EU’s ‘normative’ power in energy framework have changed over time? The findings highlight the perceptions of the dynamics of EU cooperation with India in the energy field. Manners’ analysis of the EU’s ‘core’ and ‘minor’ norms provides a valuable perspective to analyse the EU’s international identity in energy affairs with external partners.
Comparative European Politics advance online publication, 20 June 2016; doi:10.1057/cep.2016.13",olga gulyaeva,,2016.0,10.1057/cep.2016.13,Comparative European Politics,Gulyaeva2016,Not available,,Nature,Not available,Challenging the EU’s normative power: Media views on the EU in India,9aff22b61f1aae0bb8c928ba6a8082fc,http://dx.doi.org/10.1057/cep.2016.13
17579,"Deficits in eye contact have been a hallmark of autism since the condition’s initial description. They are cited widely as a diagnostic feature and figure prominently in clinical instruments; however, the early onset of these deficits has not been known. Here we show in a prospective longitudinal study that infants later diagnosed with autism spectrum disorders (ASDs) exhibit mean decline in eye fixation from 2 to 6 months of age, a pattern not observed in infants who do not develop ASD. These observations mark the earliest known indicators of social disability in infancy, but also falsify a prior hypothesis: in the first months of life, this basic mechanism of social adaptive action—eye looking—is not immediately diminished in infants later diagnosed with ASD; instead, eye looking appears to begin at normative levels prior to decline. The timing of decline highlights a narrow developmental window and reveals the early derailment of processes that would otherwise have a key role in canalizing typical social development. Finally, the observation of this decline in eye fixation—rather than outright absence—offers a promising opportunity for early intervention that could build on the apparent preservation of mechanisms subserving reflexive initial orientation towards the eyes.
",warren jones,Autism spectrum disorders,2013.0,10.1038/nature12715,Nature,Jones2013,Not available,,Nature,Not available,Attention to eyes is present but in decline in 2–6-month-old infants later diagnosed with autism,88f9800c420ee8f45980c9ac04d2c9eb,http://dx.doi.org/10.1038/nature12715
17580,"Deficits in eye contact have been a hallmark of autism since the condition’s initial description. They are cited widely as a diagnostic feature and figure prominently in clinical instruments; however, the early onset of these deficits has not been known. Here we show in a prospective longitudinal study that infants later diagnosed with autism spectrum disorders (ASDs) exhibit mean decline in eye fixation from 2 to 6 months of age, a pattern not observed in infants who do not develop ASD. These observations mark the earliest known indicators of social disability in infancy, but also falsify a prior hypothesis: in the first months of life, this basic mechanism of social adaptive action—eye looking—is not immediately diminished in infants later diagnosed with ASD; instead, eye looking appears to begin at normative levels prior to decline. The timing of decline highlights a narrow developmental window and reveals the early derailment of processes that would otherwise have a key role in canalizing typical social development. Finally, the observation of this decline in eye fixation—rather than outright absence—offers a promising opportunity for early intervention that could build on the apparent preservation of mechanisms subserving reflexive initial orientation towards the eyes.
",warren jones,Psychology,2013.0,10.1038/nature12715,Nature,Jones2013,Not available,,Nature,Not available,Attention to eyes is present but in decline in 2–6-month-old infants later diagnosed with autism,88f9800c420ee8f45980c9ac04d2c9eb,http://dx.doi.org/10.1038/nature12715
17581,"Deficits in eye contact have been a hallmark of autism since the condition’s initial description. They are cited widely as a diagnostic feature and figure prominently in clinical instruments; however, the early onset of these deficits has not been known. Here we show in a prospective longitudinal study that infants later diagnosed with autism spectrum disorders (ASDs) exhibit mean decline in eye fixation from 2 to 6 months of age, a pattern not observed in infants who do not develop ASD. These observations mark the earliest known indicators of social disability in infancy, but also falsify a prior hypothesis: in the first months of life, this basic mechanism of social adaptive action—eye looking—is not immediately diminished in infants later diagnosed with ASD; instead, eye looking appears to begin at normative levels prior to decline. The timing of decline highlights a narrow developmental window and reveals the early derailment of processes that would otherwise have a key role in canalizing typical social development. Finally, the observation of this decline in eye fixation—rather than outright absence—offers a promising opportunity for early intervention that could build on the apparent preservation of mechanisms subserving reflexive initial orientation towards the eyes.
",ami klin,Autism spectrum disorders,2013.0,10.1038/nature12715,Nature,Jones2013,Not available,,Nature,Not available,Attention to eyes is present but in decline in 2–6-month-old infants later diagnosed with autism,88f9800c420ee8f45980c9ac04d2c9eb,http://dx.doi.org/10.1038/nature12715
17582,"Deficits in eye contact have been a hallmark of autism since the condition’s initial description. They are cited widely as a diagnostic feature and figure prominently in clinical instruments; however, the early onset of these deficits has not been known. Here we show in a prospective longitudinal study that infants later diagnosed with autism spectrum disorders (ASDs) exhibit mean decline in eye fixation from 2 to 6 months of age, a pattern not observed in infants who do not develop ASD. These observations mark the earliest known indicators of social disability in infancy, but also falsify a prior hypothesis: in the first months of life, this basic mechanism of social adaptive action—eye looking—is not immediately diminished in infants later diagnosed with ASD; instead, eye looking appears to begin at normative levels prior to decline. The timing of decline highlights a narrow developmental window and reveals the early derailment of processes that would otherwise have a key role in canalizing typical social development. Finally, the observation of this decline in eye fixation—rather than outright absence—offers a promising opportunity for early intervention that could build on the apparent preservation of mechanisms subserving reflexive initial orientation towards the eyes.
",ami klin,Psychology,2013.0,10.1038/nature12715,Nature,Jones2013,Not available,,Nature,Not available,Attention to eyes is present but in decline in 2–6-month-old infants later diagnosed with autism,88f9800c420ee8f45980c9ac04d2c9eb,http://dx.doi.org/10.1038/nature12715
17583,"This paper analyzes the statistical properties of real-world networks of people engaged in product development (PD) activities. We show that complex PD networks display similar statistical patterns to other real-world complex social, information, biological and technological networks. The paper lays out the foundations for understanding the properties of other intra- and inter-organizational networks that are realized by specific network architectures. The paper also provides a general framework towards characterizing the functionality, dynamics, robustness, and fragility of smart business networks.
",dan braha,,2004.0,10.1057/palgrave.jit.2000030,Journal of Information Technology,Braha2004,Not available,,Nature,Not available,Information flow structure in large-scale product development organizational networks,47f6e313ebd6f2e0610a66ba06b1d45c,http://dx.doi.org/10.1057/palgrave.jit.2000030
17584,"This paper analyzes the statistical properties of real-world networks of people engaged in product development (PD) activities. We show that complex PD networks display similar statistical patterns to other real-world complex social, information, biological and technological networks. The paper lays out the foundations for understanding the properties of other intra- and inter-organizational networks that are realized by specific network architectures. The paper also provides a general framework towards characterizing the functionality, dynamics, robustness, and fragility of smart business networks.
",yaneer bar-yam,,2004.0,10.1057/palgrave.jit.2000030,Journal of Information Technology,Braha2004,Not available,,Nature,Not available,Information flow structure in large-scale product development organizational networks,47f6e313ebd6f2e0610a66ba06b1d45c,http://dx.doi.org/10.1057/palgrave.jit.2000030
17585,"Intermediaries such as e-Bay and Amazon arise when market system knowledge is dispersed amongst participants (e.g. buyers and suppliers), and generate revenue by providing value-added services to participants in addition to creating and managing the digital infrastructure. Consequently, such intermediaries (which we call electronic marketplaces) play a vital role in facilitating exchanges in networks characterised by disparate knowledge and are essential network orchestrators in peer-to-peer markets and intellectual property exchanges. However, there is a high failure rate associated with electronic marketplaces leading to questions as to the long-term sustainability of emerging inter-organisational networks characterised by dispersed knowledge. This paper draws on research in a number of disciplines as well as a study of eight electronic marketplaces to build a theory of electronic marketplace performance. In doing so, we identify key performance measures for electronic marketplaces as well as the strategic, structural and contextual factors that impact performance. We identify how these factors can be observed, and illustrate how the fit between strategic and contextual factors affects performance. We present our theory as hypotheses and provide the empirical indicators for the constituent constructs.
",philip o'reilly,,2010.0,10.1057/ejis.2010.12,European Journal of Information Systems,O'Reilly2010,Not available,,Nature,Not available,Intermediaries in inter-organisational networks: building a theory of electronic marketplace performance,dafad4a350f38a4e78c840cae31eac77,http://dx.doi.org/10.1057/ejis.2010.12
17586,"Intermediaries such as e-Bay and Amazon arise when market system knowledge is dispersed amongst participants (e.g. buyers and suppliers), and generate revenue by providing value-added services to participants in addition to creating and managing the digital infrastructure. Consequently, such intermediaries (which we call electronic marketplaces) play a vital role in facilitating exchanges in networks characterised by disparate knowledge and are essential network orchestrators in peer-to-peer markets and intellectual property exchanges. However, there is a high failure rate associated with electronic marketplaces leading to questions as to the long-term sustainability of emerging inter-organisational networks characterised by dispersed knowledge. This paper draws on research in a number of disciplines as well as a study of eight electronic marketplaces to build a theory of electronic marketplace performance. In doing so, we identify key performance measures for electronic marketplaces as well as the strategic, structural and contextual factors that impact performance. We identify how these factors can be observed, and illustrate how the fit between strategic and contextual factors affects performance. We present our theory as hypotheses and provide the empirical indicators for the constituent constructs.
",patrick finnegan,,2010.0,10.1057/ejis.2010.12,European Journal of Information Systems,O'Reilly2010,Not available,,Nature,Not available,Intermediaries in inter-organisational networks: building a theory of electronic marketplace performance,dafad4a350f38a4e78c840cae31eac77,http://dx.doi.org/10.1057/ejis.2010.12
17588,"This paper develops a temporal perspective to examine information and communication technologies (ICT) adoption and processes of globalization. The foundations of our theoretical approach explicitly draw upon three intersecting planes of temporality implicit in structuration; namely reversibility, irreversibility and institutionalization. We further develop our theoretical perspective by extending the scope of structuration to incorporate temporal features of Adam's social theory on ‘global time’. We then use this temporal perspective to examine the emergence of electronic trading and the process of globalization across London and Chicago futures exchanges. Our analysis provides insights into the IT-enabled reconfiguration of these exchanges during processes of reproduction and change associated with globalization. We conclude with some key implications for e-trading strategy and consider changes in trader work life associated with the adoption of e-trading.
",michael barrett,,2004.0,10.1057/palgrave.ejis.3000487,European Journal of Information Systems,Barrett2004,Not available,,Nature,Not available,Electronic trading and the process of globalization in traditional futures exchanges: a temporal perspective,99425ee9089e6e066e8289b1f1220657,http://dx.doi.org/10.1057/palgrave.ejis.3000487
17589,"This paper develops a temporal perspective to examine information and communication technologies (ICT) adoption and processes of globalization. The foundations of our theoretical approach explicitly draw upon three intersecting planes of temporality implicit in structuration; namely reversibility, irreversibility and institutionalization. We further develop our theoretical perspective by extending the scope of structuration to incorporate temporal features of Adam's social theory on ‘global time’. We then use this temporal perspective to examine the emergence of electronic trading and the process of globalization across London and Chicago futures exchanges. Our analysis provides insights into the IT-enabled reconfiguration of these exchanges during processes of reproduction and change associated with globalization. We conclude with some key implications for e-trading strategy and consider changes in trader work life associated with the adoption of e-trading.
",susan scott,,2004.0,10.1057/palgrave.ejis.3000487,European Journal of Information Systems,Barrett2004,Not available,,Nature,Not available,Electronic trading and the process of globalization in traditional futures exchanges: a temporal perspective,99425ee9089e6e066e8289b1f1220657,http://dx.doi.org/10.1057/palgrave.ejis.3000487
17590,"There has been recent debate about the practical application of academic revenue management. The concern of practitioners is that any approach they employ must be appealing to higher management, thus there is demand for simple and clear approaches to revenue management. Academic revenue management generally focuses on increasingly complex quantitative techniques requiring assumptions that are hard for the practitioner to justify. In this article we argue that practitioners are not the customers of academic revenue management; rather, the actual customers are other academics. We use this claim to argue that this situation will get worse, not better. As an alternative, we discuss the potential use of qualitative techniques as an alternative to current ‘hard science’ methods.
",andrew collins,,2014.0,10.1057/rpm.2014.34,Journal of Revenue and Pricing Management,Collins2014,Not available,,Nature,Not available,Academic revenue management and quantitative worship,1340bf81ae4354bc3733eac4af76fb84,http://dx.doi.org/10.1057/rpm.2014.34
17591,"This article describes an emergent logic of accumulation in the networked sphere, ‘surveillance capitalism,’ and considers its implications for ‘information civilization.’ The institutionalizing practices and operational assumptions of Google Inc. are the primary lens for this analysis as they are rendered in two recent articles authored by Google Chief Economist Hal Varian. Varian asserts four uses that follow from computer-mediated transactions: ‘data extraction and analysis,’ ‘new contractual forms due to better monitoring,’ ‘personalization and customization,’ and ‘continuous experiments.’ An examination of the nature and consequences of these uses sheds light on the implicit logic of surveillance capitalism and the global architecture of computer mediation upon which it depends. This architecture produces a distributed and largely uncontested new expression of power that I christen: ‘Big Other.’ It is constituted by unexpected and often illegible mechanisms of extraction, commodification, and control that effectively exile persons from their own behavior while producing new markets of behavioral prediction and modification. Surveillance capitalism challenges democratic norms and departs in key ways from the centuries-long evolution of market capitalism.
",shoshana zuboff,,2015.0,10.1057/jit.2015.5,Journal of Information Technology,Zuboff2015,Not available,,Nature,Not available,Big other: surveillance capitalism and the prospects of an information civilization,7d4e9d1442579d6328b5115347464e9a,http://dx.doi.org/10.1057/jit.2015.5
17592,"The Global Financial Crisis has brought about considerable change in consumer behaviour, from increased price sensitivity to the use of price comparison websites. This article goes beyond the surface and identifies 10 consumer behavioural patterns that are occurring now and moving into the future. The pattern of trends, which will impact upon everything from branding to strategy, are: mobile living, price sensitivity, gamification of price, pricing inefficiency, big data and promotions, concierge living, is loyalty dead, discounting forever, managing complexity, and choice and maximising behaviour. Each pattern of behaviour is robustly explained and supported by empirical evidence provided by the Future Foundation. The article concludes with four predictions that the patterns are driving namely, loyalty, temporary permanence, deals and value.
Journal of Revenue and Pricing Management advance online publication, 3 June 2016; doi:10.1057/rpm.2016.35",ian yeoman,,2016.0,10.1057/rpm.2016.35,Journal of Revenue and Pricing Management,Yeoman2016,Not available,,Nature,Not available,Trends in retail pricing: A consumer perspective,9c184106c52a8e53e16de4ef8a9f2fc4,http://dx.doi.org/10.1057/rpm.2016.35
17593,"The Global Financial Crisis has brought about considerable change in consumer behaviour, from increased price sensitivity to the use of price comparison websites. This article goes beyond the surface and identifies 10 consumer behavioural patterns that are occurring now and moving into the future. The pattern of trends, which will impact upon everything from branding to strategy, are: mobile living, price sensitivity, gamification of price, pricing inefficiency, big data and promotions, concierge living, is loyalty dead, discounting forever, managing complexity, and choice and maximising behaviour. Each pattern of behaviour is robustly explained and supported by empirical evidence provided by the Future Foundation. The article concludes with four predictions that the patterns are driving namely, loyalty, temporary permanence, deals and value.
Journal of Revenue and Pricing Management advance online publication, 3 June 2016; doi:10.1057/rpm.2016.35",carol wheatley,,2016.0,10.1057/rpm.2016.35,Journal of Revenue and Pricing Management,Yeoman2016,Not available,,Nature,Not available,Trends in retail pricing: A consumer perspective,9c184106c52a8e53e16de4ef8a9f2fc4,http://dx.doi.org/10.1057/rpm.2016.35
17594,"The Global Financial Crisis has brought about considerable change in consumer behaviour, from increased price sensitivity to the use of price comparison websites. This article goes beyond the surface and identifies 10 consumer behavioural patterns that are occurring now and moving into the future. The pattern of trends, which will impact upon everything from branding to strategy, are: mobile living, price sensitivity, gamification of price, pricing inefficiency, big data and promotions, concierge living, is loyalty dead, discounting forever, managing complexity, and choice and maximising behaviour. Each pattern of behaviour is robustly explained and supported by empirical evidence provided by the Future Foundation. The article concludes with four predictions that the patterns are driving namely, loyalty, temporary permanence, deals and value.
Journal of Revenue and Pricing Management advance online publication, 3 June 2016; doi:10.1057/rpm.2016.35",una mcmahon-beattie,,2016.0,10.1057/rpm.2016.35,Journal of Revenue and Pricing Management,Yeoman2016,Not available,,Nature,Not available,Trends in retail pricing: A consumer perspective,9c184106c52a8e53e16de4ef8a9f2fc4,http://dx.doi.org/10.1057/rpm.2016.35
17596,"The present study investigates the external validity of emotional value measured in economic laboratory experiments by using a physiological indicator of stress, heart rate variability (HRV). While there is ample evidence supporting the external validity of economic experiments, there is little evidence comparing the magnitude of internal levels of emotional stress during decision making with external stress. The current study addresses this gap by comparing the magnitudes of decision stress experienced in the laboratory with the stress from outside the laboratory. To quantify a large change in HRV, measures observed in the laboratory during decision-making are compared to the difference between HRV during a university exam and other mental activity for the same individuals in and outside of the laboratory. The results outside the laboratory inform about the relevance of laboratory findings in terms of their relative magnitude. Results show that psychologically induced HRV changes observed in the laboratory, particularly in connection with social preferences, correspond to large effects outside. This underscores the external validity of laboratory findings and shows the magnitude of emotional value connected to pro-social economic decisions in the laboratory.
",jonas fooken,Social behaviour,2017.0,10.1038/srep44471,Scientific Reports,Fooken2017,Not available,,Nature,Not available,Heart rate variability indicates emotional value during pro-social economic laboratory decisions with large external validity,642edc078a314f38427a80c747b97c76,http://dx.doi.org/10.1038/srep44471
17597,"The present study investigates the external validity of emotional value measured in economic laboratory experiments by using a physiological indicator of stress, heart rate variability (HRV). While there is ample evidence supporting the external validity of economic experiments, there is little evidence comparing the magnitude of internal levels of emotional stress during decision making with external stress. The current study addresses this gap by comparing the magnitudes of decision stress experienced in the laboratory with the stress from outside the laboratory. To quantify a large change in HRV, measures observed in the laboratory during decision-making are compared to the difference between HRV during a university exam and other mental activity for the same individuals in and outside of the laboratory. The results outside the laboratory inform about the relevance of laboratory findings in terms of their relative magnitude. Results show that psychologically induced HRV changes observed in the laboratory, particularly in connection with social preferences, correspond to large effects outside. This underscores the external validity of laboratory findings and shows the magnitude of emotional value connected to pro-social economic decisions in the laboratory.
",jonas fooken,Cooperation,2017.0,10.1038/srep44471,Scientific Reports,Fooken2017,Not available,,Nature,Not available,Heart rate variability indicates emotional value during pro-social economic laboratory decisions with large external validity,642edc078a314f38427a80c747b97c76,http://dx.doi.org/10.1038/srep44471
17598,"The present study investigates the external validity of emotional value measured in economic laboratory experiments by using a physiological indicator of stress, heart rate variability (HRV). While there is ample evidence supporting the external validity of economic experiments, there is little evidence comparing the magnitude of internal levels of emotional stress during decision making with external stress. The current study addresses this gap by comparing the magnitudes of decision stress experienced in the laboratory with the stress from outside the laboratory. To quantify a large change in HRV, measures observed in the laboratory during decision-making are compared to the difference between HRV during a university exam and other mental activity for the same individuals in and outside of the laboratory. The results outside the laboratory inform about the relevance of laboratory findings in terms of their relative magnitude. Results show that psychologically induced HRV changes observed in the laboratory, particularly in connection with social preferences, correspond to large effects outside. This underscores the external validity of laboratory findings and shows the magnitude of emotional value connected to pro-social economic decisions in the laboratory.
",jonas fooken,Neurophysiology,2017.0,10.1038/srep44471,Scientific Reports,Fooken2017,Not available,,Nature,Not available,Heart rate variability indicates emotional value during pro-social economic laboratory decisions with large external validity,642edc078a314f38427a80c747b97c76,http://dx.doi.org/10.1038/srep44471
17599,"In this paper, we consider a start-up service provider that decides whether to advertise its service product by offering temporary daily deal promotion. Based on the repeat purchase mechanism, we show that both the commission rate (ie, the revenue-sharing ratio) charged by the daily deal site and the discount level offered by the service provider play important roles in signalling the initially unobservable quality level of the service provider. A high commission rate can facilitate the signalling of the daily deal promotion, and in equilibrium only the high-quality service provider would do daily deal promotion. We find that if the daily deal site adopts a two-part tariff charging scheme, the high-quality service provider can always signal its quality by offering daily deals. And the two-part tariff leads to a lower signalling cost but a higher repeat purchase rate than those under the revenue-sharing if the variable cost of the low-quality service provider is not too large.
",ming zhao,,2014.0,10.1057/jors.2014.47,Journal of the Operational Research Society,Zhao2014,Not available,,Nature,Not available,Signalling effect of daily deal promotion for a start-up service provider,16414137824534c86d1a5c92d606ee91,http://dx.doi.org/10.1057/jors.2014.47
17600,"There has been recent debate about the practical application of academic revenue management. The concern of practitioners is that any approach they employ must be appealing to higher management, thus there is demand for simple and clear approaches to revenue management. Academic revenue management generally focuses on increasingly complex quantitative techniques requiring assumptions that are hard for the practitioner to justify. In this article we argue that practitioners are not the customers of academic revenue management; rather, the actual customers are other academics. We use this claim to argue that this situation will get worse, not better. As an alternative, we discuss the potential use of qualitative techniques as an alternative to current ‘hard science’ methods.
",erika frydenlund,,2014.0,10.1057/rpm.2014.34,Journal of Revenue and Pricing Management,Collins2014,Not available,,Nature,Not available,Academic revenue management and quantitative worship,1340bf81ae4354bc3733eac4af76fb84,http://dx.doi.org/10.1057/rpm.2014.34
17601,"In this paper, we consider a start-up service provider that decides whether to advertise its service product by offering temporary daily deal promotion. Based on the repeat purchase mechanism, we show that both the commission rate (ie, the revenue-sharing ratio) charged by the daily deal site and the discount level offered by the service provider play important roles in signalling the initially unobservable quality level of the service provider. A high commission rate can facilitate the signalling of the daily deal promotion, and in equilibrium only the high-quality service provider would do daily deal promotion. We find that if the daily deal site adopts a two-part tariff charging scheme, the high-quality service provider can always signal its quality by offering daily deals. And the two-part tariff leads to a lower signalling cost but a higher repeat purchase rate than those under the revenue-sharing if the variable cost of the low-quality service provider is not too large.
",yulan wang,,2014.0,10.1057/jors.2014.47,Journal of the Operational Research Society,Zhao2014,Not available,,Nature,Not available,Signalling effect of daily deal promotion for a start-up service provider,16414137824534c86d1a5c92d606ee91,http://dx.doi.org/10.1057/jors.2014.47
17602,"In this paper, we consider a start-up service provider that decides whether to advertise its service product by offering temporary daily deal promotion. Based on the repeat purchase mechanism, we show that both the commission rate (ie, the revenue-sharing ratio) charged by the daily deal site and the discount level offered by the service provider play important roles in signalling the initially unobservable quality level of the service provider. A high commission rate can facilitate the signalling of the daily deal promotion, and in equilibrium only the high-quality service provider would do daily deal promotion. We find that if the daily deal site adopts a two-part tariff charging scheme, the high-quality service provider can always signal its quality by offering daily deals. And the two-part tariff leads to a lower signalling cost but a higher repeat purchase rate than those under the revenue-sharing if the variable cost of the low-quality service provider is not too large.
",xianghua gan,,2014.0,10.1057/jors.2014.47,Journal of the Operational Research Society,Zhao2014,Not available,,Nature,Not available,Signalling effect of daily deal promotion for a start-up service provider,16414137824534c86d1a5c92d606ee91,http://dx.doi.org/10.1057/jors.2014.47
17603,"In a previous paper about brand strategy in this Journal, the writers argued that a brand can be considered as an integrated marketing idea, a ‘concept that drives the business’. Along with Jean-Noël Kapferer they conceptualised a brand as a particular sense of ‘meaning and direction’. The purpose of brand strategy lies in defining this essence (sense). In this paper they elaborate further on the strategic role of brands in the contemporary business environment. The emphasis shifts from ‘brand strategy’ to ‘brand based strategy’.
Brands should play an important role in selecting and maintaining a strategic direction for a company. A brand can serve as a reference point for making choices in the area of business development by actualising (effecting) a dynamic fit between the company's capabilities and the changing environment. Brands are shared property of both the seller and the buyer and embody the relationship between the company and the environment.
In order to provide an indication of what role the brand can fulfil in strategic management, the writers initially pursue the concept of the ‘strategic reference point’. Subsequently, the strategic opinions of Porter are compared with those of Prahalad and Hamel, based on the strategic reference point placed at the centre of their theories. It will be demonstrated that the brand concept as a strategic reference point overcomes the shortcomings of both approaches and frameworks the contours of a brand based strategy.
",andy mosmans,,1998.0,10.1057/bm.1998.51,Journal of Brand Management,Mosmans1998,Not available,,Nature,Not available,Brand based strategic management,39e35c110deab90c7b57cf6f4f058f1c,http://dx.doi.org/10.1057/bm.1998.51
17604,"In a previous paper about brand strategy in this Journal, the writers argued that a brand can be considered as an integrated marketing idea, a ‘concept that drives the business’. Along with Jean-Noël Kapferer they conceptualised a brand as a particular sense of ‘meaning and direction’. The purpose of brand strategy lies in defining this essence (sense). In this paper they elaborate further on the strategic role of brands in the contemporary business environment. The emphasis shifts from ‘brand strategy’ to ‘brand based strategy’.
Brands should play an important role in selecting and maintaining a strategic direction for a company. A brand can serve as a reference point for making choices in the area of business development by actualising (effecting) a dynamic fit between the company's capabilities and the changing environment. Brands are shared property of both the seller and the buyer and embody the relationship between the company and the environment.
In order to provide an indication of what role the brand can fulfil in strategic management, the writers initially pursue the concept of the ‘strategic reference point’. Subsequently, the strategic opinions of Porter are compared with those of Prahalad and Hamel, based on the strategic reference point placed at the centre of their theories. It will be demonstrated that the brand concept as a strategic reference point overcomes the shortcomings of both approaches and frameworks the contours of a brand based strategy.
",roland vorst,,1998.0,10.1057/bm.1998.51,Journal of Brand Management,Mosmans1998,Not available,,Nature,Not available,Brand based strategic management,39e35c110deab90c7b57cf6f4f058f1c,http://dx.doi.org/10.1057/bm.1998.51
17605,"This paper assesses alternative auction techniques for pricing and allocating various financial instruments, such as government securities, central bank refinance credit, and foreign exchange. Before recommending appropriate formats for auctioning these items, the paper discusses basic auction formats, assessing the advantages and disadvantages of each, based on the existing, mostly theoretical, literature. It is noted that auction techniques can be usefully employed for a broad range of items and that their application is of particular relevance to the impetus in many parts of the world toward establishing market-oriented economies.
",robert feldman,,1993.0,10.2307/3867445,Staff Papers - International Monetary Fund,Feldman1993,Not available,,Nature,Not available,Auctions: Theory and Applications,5110948efc569d3a03b5f168baf1dc2a,http://dx.doi.org/10.2307/3867445
17606,"This paper assesses alternative auction techniques for pricing and allocating various financial instruments, such as government securities, central bank refinance credit, and foreign exchange. Before recommending appropriate formats for auctioning these items, the paper discusses basic auction formats, assessing the advantages and disadvantages of each, based on the existing, mostly theoretical, literature. It is noted that auction techniques can be usefully employed for a broad range of items and that their application is of particular relevance to the impetus in many parts of the world toward establishing market-oriented economies.
",rajnish mehra,,1993.0,10.2307/3867445,Staff Papers - International Monetary Fund,Feldman1993,Not available,,Nature,Not available,Auctions: Theory and Applications,5110948efc569d3a03b5f168baf1dc2a,http://dx.doi.org/10.2307/3867445
17607,"Life is full of events that are basically games, from paying for a meal to bidding in an auction. Can incorporating a quantum strategy into the rule book increase your chances of winning? Navroz Patel reports.",navroz patel,,2007.0,10.1038/445144a,Nature,Patel2007,Not available,,Nature,Not available,Quantum games: States of play,81bc4aaaf4860e3a894159170071ab0a,http://dx.doi.org/10.1038/445144a
17608,"This is a study of social facilitation effects in online auctions. We focus on the growth in online auctions, and the emergence of instant messaging and communication availability technologies. These two trends merge to provide a collaborative online social framework in which computer mediated communication may affect the behaviour of participants in online auctions. The interaction between buyers and sellers in traditional, face-to-face markets creates phenomena such as social facilitation, where the presence of others impacts behaviour and performance. In this study we attempt to replicate and measure social facilitation effects under the conditions of virtual presence. Does social facilitation apply to online auctions, and if so, how can it influence the design of online settings? We developed and used a simulated, Java-based Internet Dutch auction. Our findings indicate that social facilitation does indeed occur. In an experimental examination, participants improve their results and stay longer in the auction under conditions of higher virtual presence. Participants also indicate a preference for auction arrangements with higher degrees of virtual presence. Theoretically, this study contributes to the study of social facilitation, adding evidence of the effect when the presence is virtual.
",s rafaeli,,2002.0,10.1057/palgrave.ejis.3000434,European Journal of Information Systems,Rafaeli2002,Not available,,Nature,Not available,"Online auctions, messaging, communication and social facilitation: a simulation and experimental evidence",fb72591604c53101fed68384c6cf222a,http://dx.doi.org/10.1057/palgrave.ejis.3000434
17609,"This is a study of social facilitation effects in online auctions. We focus on the growth in online auctions, and the emergence of instant messaging and communication availability technologies. These two trends merge to provide a collaborative online social framework in which computer mediated communication may affect the behaviour of participants in online auctions. The interaction between buyers and sellers in traditional, face-to-face markets creates phenomena such as social facilitation, where the presence of others impacts behaviour and performance. In this study we attempt to replicate and measure social facilitation effects under the conditions of virtual presence. Does social facilitation apply to online auctions, and if so, how can it influence the design of online settings? We developed and used a simulated, Java-based Internet Dutch auction. Our findings indicate that social facilitation does indeed occur. In an experimental examination, participants improve their results and stay longer in the auction under conditions of higher virtual presence. Participants also indicate a preference for auction arrangements with higher degrees of virtual presence. Theoretically, this study contributes to the study of social facilitation, adding evidence of the effect when the presence is virtual.
",a noy,,2002.0,10.1057/palgrave.ejis.3000434,European Journal of Information Systems,Rafaeli2002,Not available,,Nature,Not available,"Online auctions, messaging, communication and social facilitation: a simulation and experimental evidence",fb72591604c53101fed68384c6cf222a,http://dx.doi.org/10.1057/palgrave.ejis.3000434
17610,"A generalised bidding model is developed to calculate a bidder’s expected profit and auctioners expected revenue/payment for both a General Independent Value and Independent Private Value (IPV) kmth price sealed-bid auction (where the mth bidder wins at the kth bid payment) using a linear (affine) mark-up function. The Common Value (CV) assumption, and highbid and lowbid symmetric and asymmetric First Price Auctions and Second Price Auctions are included as special cases. The optimal n bidder symmetric analytical results are then provided for the uniform IPV and CV models in equilibrium. Final comments concern implications, the assumptions involved and prospects for further research.
",martin skitmore,,2013.0,10.1057/jors.2013.163,Journal of the Operational Research Society,Skitmore2013,Not available,,Nature,Not available,kmth price sealed-bid auctions with general independent values and equilibrium linear mark-ups,9f8dd3b6f8defaf9ef5bfbdb094ddb0b,http://dx.doi.org/10.1057/jors.2013.163
17611,"This study relies on knowledge regarding the neuroplasticity of dual-system components that govern addiction and excessive behavior and suggests that alterations in the grey matter volumes, i.e., brain morphology, of specific regions of interest are associated with technology-related addictions. Using voxel based morphometry (VBM) applied to structural Magnetic Resonance Imaging (MRI) scans of twenty social network site (SNS) users with varying degrees of SNS addiction, we show that SNS addiction is associated with a presumably more efficient impulsive brain system, manifested through reduced grey matter volumes in the amygdala bilaterally (but not with structural differences in the Nucleus Accumbens). In this regard, SNS addiction is similar in terms of brain anatomy alterations to other (substance, gambling etc.) addictions. We also show that in contrast to other addictions in which the anterior-/ mid- cingulate cortex is impaired and fails to support the needed inhibition, which manifests through reduced grey matter volumes, this region is presumed to be healthy in our sample and its grey matter volume is positively correlated with one’s level of SNS addiction. These findings portray an anatomical morphology model of SNS addiction and point to brain morphology similarities and differences between technology addictions and substance and gambling addictions.
",ofir turel,Brain imaging,2017.0,10.1038/srep45064,Scientific Reports,He2017,Not available,,Nature,Not available,Brain anatomy alterations associated with Social Networking Site (SNS) addiction,355447eac1ec4959117118d905cd56d1,http://dx.doi.org/10.1038/srep45064
17612,"The net-enabled business innovation cycle (NEBIC) model describes a path by which firms employ dynamic capabilities to leverage net-enablement. Some firms strategically aspire to follow this path in a more gradual fashion striving for business process improvements (incremental strategy) while others aspire to exploit rapidly net-enablement to achieve business innovation (leapfrogging strategy) that offers completely new market opportunities. Study results suggest that firms adopt accelerated leapfrogging strategies when faced with more severe external competitive pressures. This combined with strong leadership, a propensity to embrace internal user involvement, IT maturity, and an accommodating firm structure, as indicated by path accelerators, result in higher aspirations for business innovation. Firms shying away from leapfrogging strategies tend to protect existing customers and employees from more radical changes. These firms sometimes lacked the internal capability to enact more aggressive strategies and thus had to acquire the necessary capabilities before aspiring for business innovation.
",gary hackbarth,,2004.0,10.1057/palgrave.ejis.3000511,European Journal of Information Systems,Hackbarth2004,Not available,,Nature,Not available,Strategic aspirations for net-enabled business,023b129ef0e1654dcb12b71abb35a899,http://dx.doi.org/10.1057/palgrave.ejis.3000511
17613,"Online auctions are arguably one of the most important and distinctly new applications of the Internet. The predominant player in online auctions, eBay, has over 42 million users, and it was the host of over $9.3 billion worth of goods sold just in the year 2001. Using methods from approximate dynamic programming and integer programming, we design algorithms for optimally bidding for a single item in an online auction, and in simultaneous or overlapping multiple online auctions. We report computational evidence using data from eBay's website from 1772 completed auctions for personal digital assistants and from 4208 completed auctions for stamp collections that shows that (a) the optimal dynamic policy outperforms simple but widely used static heuristic rules for a single auction, and (b) a new approach for the multiple auctions problem that uses the value functions of single auctions found by dynamic programming in an integer programming framework produces high-quality solutions fast and reliably.
",dimitris bertsimas,,2009.0,10.1057/rpm.2008.49,Journal of Revenue and Pricing Management,Bertsimas2009,Not available,,Nature,Not available,Optimal bidding in online auctions,398c3382dc63cb4ad2dc9dfab0dfbdfe,http://dx.doi.org/10.1057/rpm.2008.49
17614,"Online auctions are arguably one of the most important and distinctly new applications of the Internet. The predominant player in online auctions, eBay, has over 42 million users, and it was the host of over $9.3 billion worth of goods sold just in the year 2001. Using methods from approximate dynamic programming and integer programming, we design algorithms for optimally bidding for a single item in an online auction, and in simultaneous or overlapping multiple online auctions. We report computational evidence using data from eBay's website from 1772 completed auctions for personal digital assistants and from 4208 completed auctions for stamp collections that shows that (a) the optimal dynamic policy outperforms simple but widely used static heuristic rules for a single auction, and (b) a new approach for the multiple auctions problem that uses the value functions of single auctions found by dynamic programming in an integer programming framework produces high-quality solutions fast and reliably.
",jeffrey hawkins,,2009.0,10.1057/rpm.2008.49,Journal of Revenue and Pricing Management,Bertsimas2009,Not available,,Nature,Not available,Optimal bidding in online auctions,398c3382dc63cb4ad2dc9dfab0dfbdfe,http://dx.doi.org/10.1057/rpm.2008.49
17615,"Online auctions are arguably one of the most important and distinctly new applications of the Internet. The predominant player in online auctions, eBay, has over 42 million users, and it was the host of over $9.3 billion worth of goods sold just in the year 2001. Using methods from approximate dynamic programming and integer programming, we design algorithms for optimally bidding for a single item in an online auction, and in simultaneous or overlapping multiple online auctions. We report computational evidence using data from eBay's website from 1772 completed auctions for personal digital assistants and from 4208 completed auctions for stamp collections that shows that (a) the optimal dynamic policy outperforms simple but widely used static heuristic rules for a single auction, and (b) a new approach for the multiple auctions problem that uses the value functions of single auctions found by dynamic programming in an integer programming framework produces high-quality solutions fast and reliably.
",georgia perakis,,2009.0,10.1057/rpm.2008.49,Journal of Revenue and Pricing Management,Bertsimas2009,Not available,,Nature,Not available,Optimal bidding in online auctions,398c3382dc63cb4ad2dc9dfab0dfbdfe,http://dx.doi.org/10.1057/rpm.2008.49
17616,"Electronic markets for land are innovative techniques potentially advantageous to buyers and private sellers, the latter being interested in sealed-bid auctions assuring high levels of competition among many bidders so that coalitions and cooperative games are ruled out. Land electronic tendering requires sufficient online information for bidders such as valuation reports by prestigious surveyors. To help the bidder make his best choice, we propose a decision model for moderately pessimistic players facing tenders where competition among a great number of independent antagonists excludes the use of cooperative games and involves strict uncertainty. A worked example concerning farmland in Spain is presented.
",e ballestero,,2005.0,10.1057/palgrave.jors.2602097,Journal of the Operational Research Society,Ballestero2005,Not available,,Nature,Not available,A decision approach to competitive electronic sealed-bid auctions for land,37c634b713ef3dd8f094e6f9bb4d5bd7,http://dx.doi.org/10.1057/palgrave.jors.2602097
17617,"Electronic markets for land are innovative techniques potentially advantageous to buyers and private sellers, the latter being interested in sealed-bid auctions assuring high levels of competition among many bidders so that coalitions and cooperative games are ruled out. Land electronic tendering requires sufficient online information for bidders such as valuation reports by prestigious surveyors. To help the bidder make his best choice, we propose a decision model for moderately pessimistic players facing tenders where competition among a great number of independent antagonists excludes the use of cooperative games and involves strict uncertainty. A worked example concerning farmland in Spain is presented.
",c bielza,,2005.0,10.1057/palgrave.jors.2602097,Journal of the Operational Research Society,Ballestero2005,Not available,,Nature,Not available,A decision approach to competitive electronic sealed-bid auctions for land,37c634b713ef3dd8f094e6f9bb4d5bd7,http://dx.doi.org/10.1057/palgrave.jors.2602097
17618,"Electronic markets for land are innovative techniques potentially advantageous to buyers and private sellers, the latter being interested in sealed-bid auctions assuring high levels of competition among many bidders so that coalitions and cooperative games are ruled out. Land electronic tendering requires sufficient online information for bidders such as valuation reports by prestigious surveyors. To help the bidder make his best choice, we propose a decision model for moderately pessimistic players facing tenders where competition among a great number of independent antagonists excludes the use of cooperative games and involves strict uncertainty. A worked example concerning farmland in Spain is presented.
",d pla-santamaria,,2005.0,10.1057/palgrave.jors.2602097,Journal of the Operational Research Society,Ballestero2005,Not available,,Nature,Not available,A decision approach to competitive electronic sealed-bid auctions for land,37c634b713ef3dd8f094e6f9bb4d5bd7,http://dx.doi.org/10.1057/palgrave.jors.2602097
17619,"The online auction market has been growing at a spectacular rate. Most auctions are open-bid auctions where all the participants know the current highest bid. This knowledge has led to a phenomenon known as sniping, whereby some bidders may wait until the last possible moment before bidding, thereby depriving other bidders of the opportunity to respond and also preventing sellers from obtaining the highest price for an item. This is especially true in the case of the commonly used second-price, fixed-deadline auction. We consider a procedure involving a randomly determined stopping time and show that this approach eliminates the potential benefits to a sniper. The scheme enables all bidders to compete more fairly and promotes an early bidding strategy, which is likely to increase the price received by the seller while providing adequate bidding opportunities for would-be buyers.
",r malaga,,2009.0,10.1057/jors.2009.79,Journal of the Operational Research Society,Malaga2009,Not available,,Nature,Not available,A new end-of-auction model for curbing sniping,7eec756707b7af39efaf1116ddab2307,http://dx.doi.org/10.1057/jors.2009.79
17620,"The online auction market has been growing at a spectacular rate. Most auctions
are open-bid auctions where all the participants know the current highest bid.
This knowledge has led to a phenomenon known as sniping, whereby some
bidders may wait until the last possible moment before bidding, thereby
depriving other bidders of the opportunity to respond and also preventing
sellers from obtaining the highest price for an item. This is especially true in
the case of the commonly used second-price, fixed-deadline auction. We consider
a procedure involving a randomly determined stopping time and show that this
approach eliminates the potential benefits to a sniper. The scheme enables all
bidders to compete more fairly and promotes an early bidding strategy, which is
likely to increase the price received by the seller while providing adequate
bidding opportunities for would-be buyers.
",d porter,,2009.0,10.1057/jors.2009.79,Journal of the Operational Research Society,Malaga2009,Not available,,Nature,Not available,A new end-of-auction model for curbing sniping,7eec756707b7af39efaf1116ddab2307,http://dx.doi.org/10.1057/jors.2009.79
17621,"The online auction market has been growing at a spectacular rate. Most auctions
are open-bid auctions where all the participants know the current highest bid.
This knowledge has led to a phenomenon known as sniping, whereby some
bidders may wait until the last possible moment before bidding, thereby
depriving other bidders of the opportunity to respond and also preventing
sellers from obtaining the highest price for an item. This is especially true in
the case of the commonly used second-price, fixed-deadline auction. We consider
a procedure involving a randomly determined stopping time and show that this
approach eliminates the potential benefits to a sniper. The scheme enables all
bidders to compete more fairly and promotes an early bidding strategy, which is
likely to increase the price received by the seller while providing adequate
bidding opportunities for would-be buyers.
",k ord,,2009.0,10.1057/jors.2009.79,Journal of the Operational Research Society,Malaga2009,Not available,,Nature,Not available,A new end-of-auction model for curbing sniping,7eec756707b7af39efaf1116ddab2307,http://dx.doi.org/10.1057/jors.2009.79
17622,"The online auction market has been growing at a spectacular rate. Most auctions
are open-bid auctions where all the participants know the current highest bid.
This knowledge has led to a phenomenon known as sniping, whereby some
bidders may wait until the last possible moment before bidding, thereby
depriving other bidders of the opportunity to respond and also preventing
sellers from obtaining the highest price for an item. This is especially true in
the case of the commonly used second-price, fixed-deadline auction. We consider
a procedure involving a randomly determined stopping time and show that this
approach eliminates the potential benefits to a sniper. The scheme enables all
bidders to compete more fairly and promotes an early bidding strategy, which is
likely to increase the price received by the seller while providing adequate
bidding opportunities for would-be buyers.
",b montano,,2009.0,10.1057/jors.2009.79,Journal of the Operational Research Society,Malaga2009,Not available,,Nature,Not available,A new end-of-auction model for curbing sniping,7eec756707b7af39efaf1116ddab2307,http://dx.doi.org/10.1057/jors.2009.79
17623,"The net-enabled business innovation cycle (NEBIC) model describes a path by which firms employ dynamic capabilities to leverage net-enablement. Some firms strategically aspire to follow this path in a more gradual fashion, striving for business process improvements (incremental strategy), while others aspire to exploit net-enablement rapidly to achieve business innovation (leapfrogging strategy) that offers completely new market opportunities. Study results suggest that firms adopt accelerated leapfrogging strategies when faced with more severe external competitive pressures. This, combined with strong leadership, a propensity to embrace internal user involvement, IT maturity, and an accommodating firm structure, as indicated by path accelerators, results in higher aspirations for business innovation. Firms shying away from leapfrogging strategies tend to protect existing customers and employees from more radical changes. These firms sometimes lacked the internal capability to enact more aggressive strategies and thus had to acquire the necessary capabilities before aspiring to business innovation.
",william kettinger,,2004.0,10.1057/palgrave.ejis.3000511,European Journal of Information Systems,Hackbarth2004,Not available,,Nature,Not available,Strategic aspirations for net-enabled business,023b129ef0e1654dcb12b71abb35a899,http://dx.doi.org/10.1057/palgrave.ejis.3000511
17624,"INTRODUCTION
The reliance on hypothetical choices raises obvious questions regarding the validity of the method and the generalizability of the results. By default, the method of hypothetical choices emerges as the simplest procedure by which a large number of theoretical questions can be investigated.
",mark gillis,,2007.0,10.1057/eej.2007.37,Eastern Economic Journal,Gillis2007,Not available,,Nature,Not available,Hypothetical and Real Incentives in the Ultimatum Game and Andreoni's Public Goods Game: An Experimental Study,f61f1feb8d43e9f2386b5a6ccb9645f0,http://dx.doi.org/10.1057/eej.2007.37
17625,"INTRODUCTION
The reliance on hypothetical choices raises obvious questions regarding the validity of the method and the generalizability of the results. By default, the method of hypothetical choices emerges as the simplest procedure by which a large number of theoretical questions can be investigated.
",paul hettler,,2007.0,10.1057/eej.2007.37,Eastern Economic Journal,Gillis2007,Not available,,Nature,Not available,Hypothetical and Real Incentives in the Ultimatum Game and Andreoni's Public Goods Game: An Experimental Study,f61f1feb8d43e9f2386b5a6ccb9645f0,http://dx.doi.org/10.1057/eej.2007.37
17626,"Understanding human cooperation is of major interest across the natural and social sciences. But it is unclear to what extent cooperation is actually a general concept. Most research on cooperation has implicitly assumed that a person’s behaviour in one cooperative context is related to their behaviour in other settings, and at later times. However, there is little empirical evidence in support of this assumption. Here, we provide such evidence by collecting thousands of game decisions from over 1,400 individuals. A person’s decisions in different cooperation games are correlated, as are those decisions and both self-report and real-effort measures of cooperation in non-game contexts. Equally strong correlations exist between cooperative decisions made an average of 124 days apart. Importantly, we find that cooperation is not correlated with norm-enforcing punishment or non-competitiveness. We conclude that there is a domain-general and temporally stable inclination towards paying costs to benefit others, which we dub the ‘cooperative phenotype’.
",alexander peysakhovich,,2014.0,10.1038/ncomms5939,Nature Communications,Peysakhovich2014,Not available,,Nature,Not available,Humans display a ‘cooperative phenotype’ that is domain general and temporally stable,b2cd3f88065ce7140d334a1b9882c192,http://dx.doi.org/10.1038/ncomms5939
17627,"Understanding human cooperation is of major interest across the natural and social sciences. But it is unclear to what extent cooperation is actually a general concept. Most research on cooperation has implicitly assumed that a person’s behaviour in one cooperative context is related to their behaviour in other settings, and at later times. However, there is little empirical evidence in support of this assumption. Here, we provide such evidence by collecting thousands of game decisions from over 1,400 individuals. A person’s decisions in different cooperation games are correlated, as are those decisions and both self-report and real-effort measures of cooperation in non-game contexts. Equally strong correlations exist between cooperative decisions made an average of 124 days apart. Importantly, we find that cooperation is not correlated with norm-enforcing punishment or non-competitiveness. We conclude that there is a domain-general and temporally stable inclination towards paying costs to benefit others, which we dub the ‘cooperative phenotype’.
",martin nowak,,2014.0,10.1038/ncomms5939,Nature Communications,Peysakhovich2014,Not available,,Nature,Not available,Humans display a ‘cooperative phenotype’ that is domain general and temporally stable,b2cd3f88065ce7140d334a1b9882c192,http://dx.doi.org/10.1038/ncomms5939
17628,"Understanding human cooperation is of major interest across the natural and social sciences. But it is unclear to what extent cooperation is actually a general concept. Most research on cooperation has implicitly assumed that a person’s behaviour in one cooperative context is related to their behaviour in other settings, and at later times. However, there is little empirical evidence in support of this assumption. Here, we provide such evidence by collecting thousands of game decisions from over 1,400 individuals. A person’s decisions in different cooperation games are correlated, as are those decisions and both self-report and real-effort measures of cooperation in non-game contexts. Equally strong correlations exist between cooperative decisions made an average of 124 days apart. Importantly, we find that cooperation is not correlated with norm-enforcing punishment or non-competitiveness. We conclude that there is a domain-general and temporally stable inclination towards paying costs to benefit others, which we dub the ‘cooperative phenotype’.
",david rand,,2014.0,10.1038/ncomms5939,Nature Communications,Peysakhovich2014,Not available,,Nature,Not available,Humans display a ‘cooperative phenotype’ that is domain general and temporally stable,b2cd3f88065ce7140d334a1b9882c192,http://dx.doi.org/10.1038/ncomms5939
17629,"Network traffic in mobile networks has grown exponentially in recent years, which has led to increasing congestion, especially in data services. Pricing network resources could help resolve congestion and potentially increase revenue. However, determining the suitable congestion prices that optimise both performance and revenue of the networks can be challenging. This paper presents an auction-based pricing model that is used as an admission control mechanism for mobile data services. The proposed modelling approach allows mobile operators to improve performance of mobile data services and optimise network revenue by adjusting the reserve prices of the revenue-maximising auction (optimal auction). The paper presents numerical and simulation results illustrating the impact of the auction reserve prices on both network performance and operator's revenue. The methodology for identifying the suitable auction reserve prices for our application is also investigated.
",saravut yaipairoj,,2006.0,10.1057/palgrave.rpm.5160022,Journal of Revenue and Pricing Management,Yaipairoj2006,Not available,,Nature,Not available,An Auction-based pricing model for performance and revenue optimisation in mobile data services,6770c648448594cef8f2e7eaeb24cf9d,http://dx.doi.org/10.1057/palgrave.rpm.5160022
17630,"Network traffic in mobile networks has grown exponentially in recent years, which has led to increasing congestion, especially in data services. Pricing network resources could help resolve congestion and potentially increase revenue. However, determining the suitable congestion prices that optimise both performance and revenue of the networks can be challenging. This paper presents an auction-based pricing model that is used as an admission control mechanism for mobile data services. The proposed modelling approach allows mobile operators to improve performance of mobile data services and optimise network revenue by adjusting the reserve prices of the revenue-maximising auction (optimal auction). The paper presents numerical and simulation results illustrating the impact of the auction reserve prices on both network performance and operator's revenue. The methodology for identifying the suitable auction reserve prices for our application is also investigated.
",fotios harmantzis,,2006.0,10.1057/palgrave.rpm.5160022,Journal of Revenue and Pricing Management,Yaipairoj2006,Not available,,Nature,Not available,An Auction-based pricing model for performance and revenue optimisation in mobile data services,6770c648448594cef8f2e7eaeb24cf9d,http://dx.doi.org/10.1057/palgrave.rpm.5160022
17631,"The paper finds strong evidence that real currency demand in Mexico remained stable throughout and after the financial crisis in Mexico. Cointegration analysis using the Johansen-Juselius technique indicates a strong cointegration relationship between real currency balances, real private consumption expenditures, and the interest rate. The dynamic model for real currency demand exhibits significant parameter constancy even after the financial crisis as indicated by a number of statistical tests. The paper concludes that the significant reduction in real currency demand under the financial crisis in Mexico could be appropriately explained by the change in the variables that historically explained the demand for real cash balances in Mexico. This result supports the Bank of Mexico's use of a reserve money program to implement monetary policy under the financial crisis.
",may khamis,,2001.0,10.2307/4621673,IMF Staff Papers,Khamis2001,Not available,,Nature,Not available,Can Currency Demand Be Stable under a Financial Crisis? The Case of Mexico,71243225ecb9bae9d8110903e34342e1,http://dx.doi.org/10.2307/4621673
17632,"The paper finds strong evidence that real currency demand in Mexico remained stable throughout and after the financial crisis in Mexico. Cointegration analysis using the Johansen-Juselius technique indicates a strong cointegration relationship between real currency balances, real private consumption expenditures, and the interest rate. The dynamic model for real currency demand exhibits significant parameter constancy even after the financial crisis as indicated by a number of statistical tests. The paper concludes that the significant reduction in real currency demand under the financial crisis in Mexico could be appropriately explained by the change in the variables that historically explained the demand for real cash balances in Mexico. This result supports the Bank of Mexico's use of a reserve money program to implement monetary policy under the financial crisis.
",alfredo leone,,2001.0,10.2307/4621673,IMF Staff Papers,Khamis2001,Not available,,Nature,Not available,Can Currency Demand Be Stable under a Financial Crisis? The Case of Mexico,71243225ecb9bae9d8110903e34342e1,http://dx.doi.org/10.2307/4621673
17633,"By analyzing previously overlooked fossils and by taking a second look at some old finds, paleontologists are providing the first glimpses of the actual behavior of the tyrannosaurs",gregory erickson,,1999.0,10.1038/scientificamerican0999-42,Scientific American,Erickson1999,Not available,,Nature,Not available,Breathing Life into Tyrannosaurus rex,5c5aa588940da06dc02e8b0bc4d1ea3d,http://dx.doi.org/10.1038/scientificamerican0999-42
17634,"Nathan P. Myhrvold leans back in his chair, arms folded behind his head, legs stretched out.
",elizabeth corcoran,,1993.0,10.1038/scientificamerican0293-34,Scientific American,Corcoran1993,Not available,,Nature,Not available,The Physicist as a Young Businessman,c85e5cec6c4053aaeef0d8513d683643,http://dx.doi.org/10.1038/scientificamerican0293-34
17635,"By analyzing previously overlooked fossils and by taking a second look at some old finds, paleontologists are providing the first glimpses of the actual behavior of the tyrannosaurs",gregory erickson,,2004.0,10.1038/scientificamerican0304-22sp,Scientific American,Erickson2004,Not available,,Nature,Not available,Breathing Life into Tyrannosaurus Rex,9d71adc2b9e72f47c9e2d7c4418df523,http://dx.doi.org/10.1038/scientificamerican0304-22sp
17636,"The different economic performance of transitioning Asia versus Central and Eastern Europe and the FSU (CEEFSU) has been arguably the most salient fact of transition. Particularly remarkable has been the contrast between China and Russia. This paper argues that the contrasting experiences of China and Russia can in part be explained by the different roles that governments have played in the transition process of these countries. China gave priority to administrative reform, aligned bureaucratic incentives at all levels with growth and development objectives, and enhanced enterprise and local autonomy while preserving the capacity of the centre to exercise control. This approach transformed government bodies into real owners of the reform process and led to privatisation over time that was largely welfare enhancing. Russia, on the contrary, gave priority to economic over state restructuring. Major reforms including mass privatisation were implemented in an environment of a weak state, which did not have the capacity to protect its ownership rights and coordinate reforms. As a result, privatisation was a wasteful process associated with asset stripping and consequently with a lack of legitimacy of newly established property rights. These contrasting experiences carry broader lessons for the role of the government in large-scale complex reforms.
",jeffrey miller,,2007.0,10.1057/palgrave.ces.8100230,Comparative Economic Studies,Miller2007,Not available,,Nature,Not available,On the Role of Government in Transition: The Experiences of China and Russia Compared,f0e40ab63d972b0531c1a83478e3d658,http://dx.doi.org/10.1057/palgrave.ces.8100230
17637,"The different economic performance of transitioning Asia versus Central and Eastern Europe and the FSU (CEEFSU) has been arguably the most salient fact of transition. Particularly remarkable has been the contrast between China and Russia. This paper argues that the contrasting experiences of China and Russia can in part be explained by the different roles that governments have played in the transition process of these countries. China gave priority to administrative reform, aligned bureaucratic incentives at all levels with growth and development objectives, and enhanced enterprise and local autonomy while preserving the capacity of the centre to exercise control. This approach transformed government bodies into real owners of the reform process and led to privatisation over time that was largely welfare enhancing. Russia, on the contrary, gave priority to economic over state restructuring. Major reforms including mass privatisation were implemented in an environment of a weak state, which did not have the capacity to protect its ownership rights and coordinate reforms. As a result, privatisation was a wasteful process associated with asset stripping and consequently with a lack of legitimacy of newly established property rights. These contrasting experiences carry broader lessons for the role of the government in large-scale complex reforms.
",stoyan tenev,,2007.0,10.1057/palgrave.ces.8100230,Comparative Economic Studies,Miller2007,Not available,,Nature,Not available,On the Role of Government in Transition: The Experiences of China and Russia Compared,f0e40ab63d972b0531c1a83478e3d658,http://dx.doi.org/10.1057/palgrave.ces.8100230
17638,"The field of time preference is developing rapidly. It concerns important concepts for many economic issues. One important domain of application is health economics. This paper reviews several empirical and theoretical developments for time preference with special attention to the health economics field. In addition, the implications for medical decision making, long-term health-care planning and health economic evaluation are discussed. Recognition of this empirical evidence in health-care policy making is recommended, as well as a more transparent process of the framing and analysis of, and deliberation on, public policy.
",a attema,,2011.0,10.1057/jors.2011.137,Journal of the Operational Research Society,Attema2011,Not available,,Nature,Not available,Developments in time preference and their implications for medical decision making,a8b6aff719ec1ff5ab9b26ff7679148c,http://dx.doi.org/10.1057/jors.2011.137
17640,"This article considers the regulatory issues raised by the increased use of credit default swaps (CDSs). It argues that the current extensive over-the-counter trading of CDSs raises problems in terms of the opacity of risk, misaligned incentives in the event of default by reference entities, and concentrations of counterparty risk, each of which requires addressing. Among the proposals discussed are the standardisation of CDSs, the use of central counterparties, the introduction of exchange trading of CDSs, the imposition of capital penalties on bespoke CDSs or mandatory collateralisation, limiting the extent to which CDSs can be used to hedge a position fully, the prohibition of naked CDS protection and the introduction of large exposure counterparty limits. The article concludes that while standardisation and exchange trading of CDSs should be promoted, mandatory collateralisation is preferable to the use of capital penalties. Naked CDS protection buying should be prohibited unless measures can be put in place to ensure that it does not have adverse effects in the event of default by reference entities. Systemic risk in the CDS markets should be addressed through the introduction of large exposure counterparty limits.
",david mcilroy,,2010.0,10.1057/jbr.2010.14,Journal of Banking Regulation,McIlroy2010,Not available,,Nature,Not available,The regulatory issues raised by credit default swaps,ed06437d6af179e3e0a453b0163971ac,http://dx.doi.org/10.1057/jbr.2010.14
17641,"The current financial crisis has been the key global economic event since it unfolded in earnest in early August 2007. The Federal Reserve has taken aggressive actions—both conventional and unconventional—to counteract the economic and financial fallout. Among these actions have been a number of new special lending programs created under section 13(3) of the Federal Reserve Act, which had not been employed since the 1930s. Academics, policymakers, and the general public have shown great interest in the Federal Reserve's new programs. In this paper, I emphasize two medium-term risks that the Federal Reserve now faces as it continues to confront financial market turmoil and recession. The two medium-term risks are opposites of each other, a “two-headed dragon.” One is a Japanese-style deflation trap, and the other is a breakout of inflation like that seen during the 1970s. An explicit inflation target would help mitigate these very real risks.
",james bullard,,2009.0,10.1057/be.2009.5,Business Economics,Bullard2009,Not available,,Nature,Not available,A Two-Headed Dragon for Monetary Policy,2a8e835a760acce336c3f19967b412a6,http://dx.doi.org/10.1057/be.2009.5
17642,"This article presents a comparative study of two contrasting approaches for modeling the yard crane scheduling problem: centralized and decentralized. It seeks to assess their relative performances and factors that affect their performances. Our analysis shows that the centralized approach outperforms the decentralized approach by 16.5 per cent on average, due to having complete and accurate information about future truck arrivals. While it underperforms the centralized, the decentralized approach can dynamically adapt to real-time truck arrivals, making it better suited for real-life operations. Overall, our analysis suggests that the two approaches offer complementary features that could be integrated into a hybrid approach.
",omor sharif,,2012.0,10.1057/mel.2012.1,Maritime Economics & Logistics,Sharif2012,Not available,,Nature,Not available,Yard crane scheduling at container terminals: A comparative study of centralized and decentralized approaches,3fb6a6ae86427b531964a83605d03415,http://dx.doi.org/10.1057/mel.2012.1
17643,"This article presents a comparative study of two contrasting approaches for modeling the yard crane scheduling problem: centralized and decentralized. It seeks to assess their relative performances and factors that affect their performances. Our analysis shows that the centralized approach outperforms the decentralized approach by 16.5 per cent on average, due to having complete and accurate information about future truck arrivals. While it underperforms the centralized, the decentralized approach can dynamically adapt to real-time truck arrivals, making it better suited for real-life operations. Overall, our analysis suggests that the two approaches offer complementary features that could be integrated into a hybrid approach.
",nathan huynh,,2012.0,10.1057/mel.2012.1,Maritime Economics & Logistics,Sharif2012,Not available,,Nature,Not available,Yard crane scheduling at container terminals: A comparative study of centralized and decentralized approaches,3fb6a6ae86427b531964a83605d03415,http://dx.doi.org/10.1057/mel.2012.1
17644,"Economists are finding that social concerns often trump selfishness in financial decision making, a view that helps to explain why tens of millions of people send money to strangers they find on the Internet",christoph uhlhaas,,2007.0,10.1038/scientificamericanmind0807-60,Scientific American Mind,Uhlhaas2007,Not available,,Nature,Not available,Is Greed Good?,24482a8071c94ab30dc527248d05b463,http://dx.doi.org/10.1038/scientificamericanmind0807-60
17645,"Recently, interest in combinatorial auctions has extended to include trade in multiple units of heterogeneous items. Combinatorial bidding is complex, and iterative auctions are used to allow bidders to sequentially express their preferences with the aid of auction market information provided in the form of price feedbacks. There are different competing designs for the provision of item price feedbacks; however, most of these have not been thoroughly studied for multiple unit combinatorial auctions. This paper focuses on addressing this gap by evaluating several feedback schemes or algorithms in the context of multiple unit auctions. We numerically evaluate these algorithms under different scenarios that vary in bidder package selection strategies and in the degree of competition. We observe that auction outcomes are best when bidders use a naïve bidding strategy and competition is strong. Performance deteriorates significantly when bidders strategically select packages to maximize their profit. Finally, the performances of some algorithms are more sensitive to strategic bidding than others.
",m iftekhar,,2012.0,10.1057/jors.2012.121,Journal of the Operational Research Society,Iftekhar2012,Not available,,Nature,Not available,Choice of item pricing feedback schemes for multiple unit reverse combinatorial auctions,84d9cb592e20cba918c2e0226b9d2a9d,http://dx.doi.org/10.1057/jors.2012.121
17646,"Recently, interest in combinatorial auctions has extended to include trade in multiple units of heterogeneous items. Combinatorial bidding is complex and iterative auctions are used to allow bidders to sequentially express their preferences with the aid of auction market information provided in the form of price feedbacks. There are different competing designs for the provision of item price feedbacks; however, most of these have not been thoroughly studied for multiple unit combinatorial auctions. This paper focuses on addressing this gap by evaluating several feedback schemes or algorithms in the context of multiple unit auctions. We numerically evaluate these algorithms under different scenarios that vary in bidder package selection strategies and in the degree of competition. We observe that auction outcomes are best when bidders use a naïve bidding strategy and competition is strong. Performance deteriorates significantly when bidders strategically select packages to maximize their profit. Finally, the performances of some algorithms are more sensitive to strategic bidding than others.
",a hailu,,2012.0,10.1057/jors.2012.121,Journal of the Operational Research Society,Iftekhar2012,Not available,,Nature,Not available,Choice of item pricing feedback schemes for multiple unit reverse combinatorial auctions,84d9cb592e20cba918c2e0226b9d2a9d,http://dx.doi.org/10.1057/jors.2012.121
17647,"Recently, interest in combinatorial auctions has extended to include trade in multiple units of heterogeneous items. Combinatorial bidding is complex and iterative auctions are used to allow bidders to sequentially express their preferences with the aid of auction market information provided in the form of price feedbacks. There are different competing designs for the provision of item price feedbacks; however, most of these have not been thoroughly studied for multiple unit combinatorial auctions. This paper focuses on addressing this gap by evaluating several feedback schemes or algorithms in the context of multiple unit auctions. We numerically evaluate these algorithms under different scenarios that vary in bidder package selection strategies and in the degree of competition. We observe that auction outcomes are best when bidders use a naïve bidding strategy and competition is strong. Performance deteriorates significantly when bidders strategically select packages to maximize their profit. Finally, the performances of some algorithms are more sensitive to strategic bidding than others.
",r lindner,,2012.0,10.1057/jors.2012.121,Journal of the Operational Research Society,Iftekhar2012,Not available,,Nature,Not available,Choice of item pricing feedback schemes for multiple unit reverse combinatorial auctions,84d9cb592e20cba918c2e0226b9d2a9d,http://dx.doi.org/10.1057/jors.2012.121
17648,"Transport companies may cooperate to increase their efficiency levels by, for example, the exchange of orders or vehicle capacity. In this paper a new approach to horizontal carrier collaboration is presented: the sharing of distribution centres (DCs) with partnering organisations. This problem can be classified as a cooperative facility location problem and formulated as an innovative mixed integer linear programme. To ensure cooperation sustainability, collaborative costs need to be allocated fairly to the different participants. To analyse the benefits of cooperative facility location and the effects of different cost allocation techniques, numerical experiments based on experimental design are carried out on a UK case study. Sharing DCs may lead to significant cost savings up to 21.6%. In contrast to the case of sharing orders or vehicles, there are diseconomies of scale in terms of the number of partners and more collaborative benefit can be expected when partners are unequal in size. Moreover, results indicate that horizontal collaboration at the level of DCs works well with a limited number of partners and can be based on intuitively appealing cost sharing techniques, which may reduce alliance complexity and enforce the strength of mutual partner relationships.
",lotte verdonck,,2016.0,10.1057/jors.2015.106,Journal of the Operational Research Society,Verdonck2016,Not available,,Nature,Not available,Analysis of collaborative savings and cost allocation techniques for the cooperative carrier facility location problem,4bb1c4d6b9d88556b3bd313e8dd77276,http://dx.doi.org/10.1057/jors.2015.106
17649,"Transport companies may cooperate to increase their efficiency levels by, for example, the exchange of orders or vehicle capacity. In this paper a new approach to horizontal carrier collaboration is presented: the sharing of distribution centres (DCs) with partnering organisations. This problem can be classified as a cooperative facility location problem and formulated as an innovative mixed integer linear programme. To ensure cooperation sustainability, collaborative costs need to be allocated fairly to the different participants. To analyse the benefits of cooperative facility location and the effects of different cost allocation techniques, numerical experiments based on experimental design are carried out on a UK case study. Sharing DCs may lead to significant cost savings up to 21.6%. In contrast to the case of sharing orders or vehicles, there are diseconomies of scale in terms of the number of partners and more collaborative benefit can be expected when partners are unequal in size. Moreover, results indicate that horizontal collaboration at the level of DCs works well with a limited number of partners and can be based on intuitively appealing cost sharing techniques, which may reduce alliance complexity and enforce the strength of mutual partner relationships.
",patrick beullens,,2016.0,10.1057/jors.2015.106,Journal of the Operational Research Society,Verdonck2016,Not available,,Nature,Not available,Analysis of collaborative savings and cost allocation techniques for the cooperative carrier facility location problem,4bb1c4d6b9d88556b3bd313e8dd77276,http://dx.doi.org/10.1057/jors.2015.106
17650,"Transport companies may cooperate to increase their efficiency levels by, for example, the exchange of orders or vehicle capacity. In this paper a new approach to horizontal carrier collaboration is presented: the sharing of distribution centres (DCs) with partnering organisations. This problem can be classified as a cooperative facility location problem and formulated as an innovative mixed integer linear programme. To ensure cooperation sustainability, collaborative costs need to be allocated fairly to the different participants. To analyse the benefits of cooperative facility location and the effects of different cost allocation techniques, numerical experiments based on experimental design are carried out on a UK case study. Sharing DCs may lead to significant cost savings of up to 21.6%. In contrast to the case of sharing orders or vehicles, there are diseconomies of scale in terms of the number of partners, and more collaborative benefit can be expected when partners are unequal in size. Moreover, results indicate that horizontal collaboration at the level of DCs works well with a limited number of partners and can be based on intuitively appealing cost sharing techniques, which may reduce alliance complexity and reinforce the strength of mutual partner relationships.
",an caris,,2016.0,10.1057/jors.2015.106,Journal of the Operational Research Society,Verdonck2016,Not available,,Nature,Not available,Analysis of collaborative savings and cost allocation techniques for the cooperative carrier facility location problem,4bb1c4d6b9d88556b3bd313e8dd77276,http://dx.doi.org/10.1057/jors.2015.106
17651,"Transport companies may cooperate to increase their efficiency levels by, for example, the exchange of orders or vehicle capacity. In this paper a new approach to horizontal carrier collaboration is presented: the sharing of distribution centres (DCs) with partnering organisations. This problem can be classified as a cooperative facility location problem and formulated as an innovative mixed integer linear programme. To ensure cooperation sustainability, collaborative costs need to be allocated fairly to the different participants. To analyse the benefits of cooperative facility location and the effects of different cost allocation techniques, numerical experiments based on experimental design are carried out on a UK case study. Sharing DCs may lead to significant cost savings of up to 21.6%. In contrast to the case of sharing orders or vehicles, there are diseconomies of scale in terms of the number of partners, and more collaborative benefit can be expected when partners are unequal in size. Moreover, results indicate that horizontal collaboration at the level of DCs works well with a limited number of partners and can be based on intuitively appealing cost sharing techniques, which may reduce alliance complexity and reinforce the strength of mutual partner relationships.
",katrien ramaekers,,2016.0,10.1057/jors.2015.106,Journal of the Operational Research Society,Verdonck2016,Not available,,Nature,Not available,Analysis of collaborative savings and cost allocation techniques for the cooperative carrier facility location problem,4bb1c4d6b9d88556b3bd313e8dd77276,http://dx.doi.org/10.1057/jors.2015.106
17652,"Transport companies may cooperate to increase their efficiency levels by, for example, the exchange of orders or vehicle capacity. In this paper a new approach to horizontal carrier collaboration is presented: the sharing of distribution centres (DCs) with partnering organisations. This problem can be classified as a cooperative facility location problem and formulated as an innovative mixed integer linear programme. To ensure cooperation sustainability, collaborative costs need to be allocated fairly to the different participants. To analyse the benefits of cooperative facility location and the effects of different cost allocation techniques, numerical experiments based on experimental design are carried out on a UK case study. Sharing DCs may lead to significant cost savings of up to 21.6%. In contrast to the case of sharing orders or vehicles, there are diseconomies of scale in terms of the number of partners, and more collaborative benefit can be expected when partners are unequal in size. Moreover, results indicate that horizontal collaboration at the level of DCs works well with a limited number of partners and can be based on intuitively appealing cost sharing techniques, which may reduce alliance complexity and reinforce the strength of mutual partner relationships.
",gerrit janssens,,2016.0,10.1057/jors.2015.106,Journal of the Operational Research Society,Verdonck2016,Not available,,Nature,Not available,Analysis of collaborative savings and cost allocation techniques for the cooperative carrier facility location problem,4bb1c4d6b9d88556b3bd313e8dd77276,http://dx.doi.org/10.1057/jors.2015.106
17653,"A local electricity distribution company (LDC) can reduce its exposure to the inherent risks of spot-price volatility and uncertain future demand via forward contracts. Management's problem is to determine the optimal forward-contract purchase. We propose a practical three-stage approach for dealing with the problem. The first stage determines an optimal purchase by solving a cost-constrained risk-minimization problem. The second stage derives the efficient frontier of tradeoffs between expected cost and cost risk from the first-stage solution, at various bounds on the expected cost. The optimal solution is found by melding the frontier with management's risk preferences. In the third stage, the model's parameters are estimated from data typically available to an LDC and used to determine its forward-contract purchase.
",c-k woo,,2004.0,10.1057/palgrave.jors.2601769,Journal of the Operational Research Society,Woo2004,Not available,,Nature,Not available,The efficient frontier for spot and forward purchases: an application to electricity,506d88df0a3bfd2482ddddfe94512aaf,http://dx.doi.org/10.1057/palgrave.jors.2601769
17654,"To many researchers and practitioners in China, revenue management (RM) was not a familiar term until the beginning of the last decade. As the nation's economy is booming at an unprecedented pace, its service sector has grown rapidly. Managing perishable products, which are commonly seen in service industries, is becoming a frequently faced issue in business practice. Research on and applications of RM in China are drawing increasing attention in both academia and industry. This paper surveys the latest development of RM in China. Although, from a theoretical and practical point of view, research on RM is fairly new and lags far behind what is needed, the momentum and potential are visible. The broad spectrum of its application has generated various front-line research problems that will enrich the field.
",wei yang,,2008.0,10.1057/rpm.2008.33,Journal of Revenue and Pricing Management,Yang2008,Not available,,Nature,Not available,Revenue management in China: An industry and research overview,026b90150f0fb6fb0fcfd9ba0cd37115,http://dx.doi.org/10.1057/rpm.2008.33
17655,"To many researchers and practitioners in China, revenue management (RM) was not a familiar term until the beginning of the last decade. As the nation's economy is booming at an unprecedented pace, its service sector has grown rapidly. Managing perishable products, which are commonly seen in service industries, is becoming a frequently faced issue in business practice. Research on and applications of RM in China are drawing increasing attention in both academia and industry. This paper surveys the latest development of RM in China. Although, from a theoretical and practical point of view, research on RM is fairly new and lags far behind what is needed, the momentum and potential are visible. The broad spectrum of its application has generated various front-line research problems that will enrich the field.
",xiutian shi,,2008.0,10.1057/rpm.2008.33,Journal of Revenue and Pricing Management,Yang2008,Not available,,Nature,Not available,Revenue management in China: An industry and research overview,026b90150f0fb6fb0fcfd9ba0cd37115,http://dx.doi.org/10.1057/rpm.2008.33
17656,"To many researchers and practitioners in China, revenue management (RM) was not a familiar term until the beginning of the last decade. As the nation's economy is booming at an unprecedented pace, its service sector has grown rapidly. Managing perishable products, which are commonly seen in service industries, is becoming a frequently faced issue in business practice. Research on and applications of RM in China are drawing increasing attention in both academia and industry. This paper surveys the latest development of RM in China. Although, from a theoretical and practical point of view, research on RM is fairly new and lags far behind what is needed, the momentum and potential are visible. The broad spectrum of its application has generated various front-line research problems that will enrich the field.
",baichun xiao,,2008.0,10.1057/rpm.2008.33,Journal of Revenue and Pricing Management,Yang2008,Not available,,Nature,Not available,Revenue management in China: An industry and research overview,026b90150f0fb6fb0fcfd9ba0cd37115,http://dx.doi.org/10.1057/rpm.2008.33
17657,"To many researchers and practitioners in China, revenue management (RM) was not a familiar term until the beginning of the last decade. As the nation's economy is booming at an unprecedented pace, its service sector has grown rapidly. Managing perishable products, which are commonly seen in service industries, is becoming a frequently faced issue in business practice. Research on and applications of RM in China are drawing increasing attention in both academia and industry. This paper surveys the latest development of RM in China. Although, from a theoretical and practical point of view, research on RM is fairly new and lags far behind what is needed, the momentum and potential are visible. The broad spectrum of its application has generated various front-line research problems that will enrich the field.
",youyi feng,,2008.0,10.1057/rpm.2008.33,Journal of Revenue and Pricing Management,Yang2008,Not available,,Nature,Not available,Revenue management in China: An industry and research overview,026b90150f0fb6fb0fcfd9ba0cd37115,http://dx.doi.org/10.1057/rpm.2008.33
17658,"For the average personal traveler, airfare pricing has some peculiar and unpredictable features. These features may unnecessarily complicate matters for customers in their decision making over whether to purchase a given ticket or not. The objective of this article is to raise the issue whether airlines potentially could be better off by sharing explicitly with their customers what causes these peculiarities. For instance, one scenario could be that when a customer receives an airfare quote they are also provided information regarding, say, the booking curve, the forecast demand and the marginal value of the seat. The purpose of the article is not to provide an in-depth analysis or indisputable conclusion on the topic, but rather to bring the issue to light and comment on a few facets.
",fredrik odegaard,,2014.0,10.1057/rpm.2014.35,Journal of Revenue and Pricing Management,Ødegaard2014,Not available,,Nature,Not available,Better informed customers for improved revenue management?,d40900077a8ec2a9f3fde024e750e74e,http://dx.doi.org/10.1057/rpm.2014.35
17659,"In the traditional lot sentencing rule, a buyer arrives at one of two decisions regarding lot disposition: either accept or reject the lot. However, it is more appropriate to consider choices between those two extreme decisions. A clear case where the traditional lot sentencing rule is not flexible is when a buyer purchases a lot from an English auction. In this paper, we propose a model that helps a buyer in estimating the value of a production lot. This model can be used by a bidder before the bidding process starts to estimate the value of an auctioned lot. The model provides an action plan that includes the estimated acquisition cost as a function of the number of defective items found in a random sample. Unlike the traditional lot sentencing rule, the proposed rule is more flexible and provides buyers with a wider range of possible actions.
",mohammed darwish,,2016.0,10.1057/jors.2015.112,Journal of the Operational Research Society,Darwish2016,Not available,,Nature,Not available,Generalized lot sentencing rule for estimating the value of production lot for sale,8a052b36089bac00e4a73c3b0adfc573,http://dx.doi.org/10.1057/jors.2015.112
17660,"In the traditional lot sentencing rule, a buyer arrives at one of two decisions regarding lot disposition: either accept or reject the lot. However, it is more appropriate to consider choices between those two extreme decisions. A clear case where the traditional lot sentencing rule is not flexible is when a buyer purchases a lot from an English auction. In this paper, we propose a model that helps a buyer in estimating the value of a production lot. This model can be used by a bidder before the bidding process starts to estimate the value of an auctioned lot. The model provides an action plan that includes the estimated acquisition cost as a function of the number of defective items found in a random sample. Unlike the traditional lot sentencing rule, the proposed rule is more flexible and provides buyers with a wider range of possible actions.
",fawaz abdulmalek,,2016.0,10.1057/jors.2015.112,Journal of the Operational Research Society,Darwish2016,Not available,,Nature,Not available,Generalized lot sentencing rule for estimating the value of production lot for sale,8a052b36089bac00e4a73c3b0adfc573,http://dx.doi.org/10.1057/jors.2015.112
17661,"Understanding the various variables that influence the success of eBay auctions has recently received much interest from academic researchers. Most studies concentrate on the effects of auction characteristics on the success of eBay auctions. A few studies identify consumer demographics that help predict the likelihood of consumers participating in eBay auctions. To date, however, no study has attempted to identify consumer personality traits, values and attitudes that may help predict whether or not a consumer will participate in eBay purchasing. This study examines a set of consumer demographics, values, attitudes and personality traits to assess these variables’ impact on eBay participation by consumers. Findings indicate that a consumer's family size and consumption motivation have an impact on their participation in eBay. Also this study finds that eBay consumers have a more positive attitude towards eBay than do eBay nonconsumers.
",gregory black,,2007.0,10.1057/palgrave.dddmp.4350066,"Journal of Direct, Data and Digital Marketing Practice",Black2007,Not available,,Nature,Not available,A comparison of the characteristics of eBay consumers and eBay nonconsumers,ff029f8361ba7e4e6a7e6b50df1fd5b1,http://dx.doi.org/10.1057/palgrave.dddmp.4350066
17662,"The market for modern Indian art is an emerging art market, having come into a proper existence only in the late 1990s. This market saw tremendous growth in its initial years, then a downturn that started around 2007–2008. Using data from auctions conducted by a major Indian art auctioneer, we estimate via hedonic regression a price index for paintings and drawings by Indian artists sold during 2000–2013. We are able to thus estimate a rate of return on Indian art as an investment and also shed light on what drives the price of a painting in the Indian market.
",jenny hawkins,,2015.0,10.1057/eej.2015.39,Eastern Economic Journal,Hawkins2015,Not available,,Nature,Not available,Returns on Indian Art during 2000–2013,a7dc293bdeee935b282aebbc40c271ea,http://dx.doi.org/10.1057/eej.2015.39
17663,"The market for modern Indian art is an emerging art market, having come into a proper existence only in the late 1990s. This market saw tremendous growth in its initial years, then a downturn that started around 2007–2008. Using data from auctions conducted by a major Indian art auctioneer, we estimate via hedonic regression a price index for paintings and drawings by Indian artists sold during 2000–2013. We are able to thus estimate a rate of return on Indian art as an investment and also shed light on what drives the price of a painting in the Indian market.
",viplav saini,,2015.0,10.1057/eej.2015.39,Eastern Economic Journal,Hawkins2015,Not available,,Nature,Not available,Returns on Indian Art during 2000–2013,a7dc293bdeee935b282aebbc40c271ea,http://dx.doi.org/10.1057/eej.2015.39
17664,"A local electricity distribution company (LDC) can reduce its exposure to the inherent risks of spot-price volatility and uncertain future demand via forward contracts. Management's problem is to determine the optimal forward-contract purchase. We propose a practical three-stage approach for dealing with the problem. The first stage determines an optimal purchase by solving a cost-constrained risk-minimization problem. The second stage derives the efficient frontier of tradeoffs between expected cost and cost risk from the first-stage solution, at various bounds on the expected cost. The optimal solution is found by melding the frontier with management's risk preferences. In the third stage, the model's parameters are estimated from data typically available to an LDC and used to determine its forward-contract purchase.
",i horowitz,,2004.0,10.1057/palgrave.jors.2601769,Journal of the Operational Research Society,Woo2004,Not available,,Nature,Not available,The efficient frontier for spot and forward purchases: an application to electricity,506d88df0a3bfd2482ddddfe94512aaf,http://dx.doi.org/10.1057/palgrave.jors.2601769
17665,Artificial-intelligence programs harness game-theory strategies and deep learning to defeat human professionals in two-player hold 'em.,elizabeth gibney,,2017.0,10.1038/nature.2017.21580,Nature,Gibney2017,Not available,,Nature,Not available,How rival bots battled their way to poker supremacy,106b249d7a6d4dc104d22f43dab819a5,http://dx.doi.org/10.1038/nature.2017.21580
17667,"Climate policy after 2012, when the Kyoto treaty expires, needs a radical rethink. More of the same won't do, argue Gwyn Prins and Steve Rayner.",gwyn prins,,2007.0,10.1038/449973a,Nature,Prins2007,Not available,,Nature,Not available,Time to ditch Kyoto,4c33fff4b77e1a3149111af30f435f0b,http://dx.doi.org/10.1038/449973a
17668,"Climate policy after 2012, when the Kyoto treaty expires, needs a radical rethink. More of the same won't do, argue Gwyn Prins and Steve Rayner.",steve rayner,,2007.0,10.1038/449973a,Nature,Prins2007,Not available,,Nature,Not available,Time to ditch Kyoto,4c33fff4b77e1a3149111af30f435f0b,http://dx.doi.org/10.1038/449973a
17669,"This article discusses the general legal principles governing the relationships between banks issuing over-the-counter structured derivatives to non-bank clients. After a discussion of the evident informational asymmetries between the counterparties to such deals, a representative sample is presented of recent deals that failed from the client's viewpoint, all the subject of current negotiation or litigation with banks in Germany. Mathematical (mis)pricing and (asymmetric) counterparty risk assessments for these examples are summarised graphically before discussing the legal implications of their egregious features and their possible mitigation in future deals by appropriate regulation and interpretation in the world's courts.
",michael dempster,,2011.0,10.1057/jbr.2011.9,Journal of Banking Regulation,Dempster2011,Not available,,Nature,Not available,Regulating complex derivatives: Can the opaque be made transparent?,9f4077713d42f6a87ddf713be21539ac,http://dx.doi.org/10.1057/jbr.2011.9
17670,"A local electricity distribution company (LDC) can reduce its exposure to the inherent risks of spot-price volatility and uncertain future demand via forward contracts. Management's problem is to determine the optimal forward-contract purchase. We propose a practical three-stage approach for dealing with the problem. The first stage determines an optimal purchase by solving a cost-constrained risk-minimization problem. The second stage derives the efficient frontier of tradeoffs between expected cost and cost risk from the first-stage solution, at various bounds on the expected cost. The optimal solution is found by melding the frontier with management's risk preferences. In the third stage, the model's parameters are estimated from data typically available to an LDC and used to determine its forward-contract purchase.
",b horii,,2004.0,10.1057/palgrave.jors.2601769,Journal of the Operational Research Society,Woo2004,Not available,,Nature,Not available,The efficient frontier for spot and forward purchases: an application to electricity,506d88df0a3bfd2482ddddfe94512aaf,http://dx.doi.org/10.1057/palgrave.jors.2601769
17671,"This article discusses the general legal principles governing the relationships between banks issuing over-the-counter structured derivatives to non-bank clients. After a discussion of the evident informational asymmetries between the counterparties to such deals, a representative sample is presented of recent deals that failed from the client's viewpoint, all the subject of current negotiation or litigation with banks in Germany. Mathematical (mis)pricing and (asymmetric) counterparty risk assessments for these examples are summarised graphically before discussing the legal implications of their egregious features and their possible mitigation in future deals by appropriate regulation and interpretation in the world's courts.
",elena medova,,2011.0,10.1057/jbr.2011.9,Journal of Banking Regulation,Dempster2011,Not available,,Nature,Not available,Regulating complex derivatives: Can the opaque be made transparent?,9f4077713d42f6a87ddf713be21539ac,http://dx.doi.org/10.1057/jbr.2011.9
17672,"This article discusses the general legal principles governing the relationships between banks issuing over-the-counter structured derivatives to non-bank clients. After a discussion of the evident informational asymmetries between the counterparties to such deals, a representative sample is presented of recent deals that failed from the client's viewpoint, all the subject of current negotiation or litigation with banks in Germany. Mathematical (mis)pricing and (asymmetric) counterparty risk assessments for these examples are summarised graphically before discussing the legal implications of their egregious features and their possible mitigation in future deals by appropriate regulation and interpretation in the world's courts.
",julian roberts,,2011.0,10.1057/jbr.2011.9,Journal of Banking Regulation,Dempster2011,Not available,,Nature,Not available,Regulating complex derivatives: Can the opaque be made transparent?,9f4077713d42f6a87ddf713be21539ac,http://dx.doi.org/10.1057/jbr.2011.9
17673,"Using a threshold citation approach, we rank institutions in economic research based on the influence of research works produced by their staff or faculty members. The top five economics departments are the University of Chicago, Harvard University, Princeton University, MIT, and Northwestern University with research works by scholars at the University of Chicago having the most influence on economic research.
",kam chan,,2008.0,10.1057/palgrave.eej.9050035,Eastern Economic Journal,Chan2008,Not available,,Nature,Not available,Ranking of Institutions in Economic Research: a Threshold Citation Approach,8a6b71f4b591981b727002ad6ca36e02,http://dx.doi.org/10.1057/palgrave.eej.9050035
17674,"Using a threshold citation approach, we rank institutions in economic research based on the influence of research works produced by their staff or faculty members. The top five economics departments are the University of Chicago, Harvard University, Princeton University, MIT, and Northwestern University with research works by scholars at the University of Chicago having the most influence on economic research.
",kartono liano,,2008.0,10.1057/palgrave.eej.9050035,Eastern Economic Journal,Chan2008,Not available,,Nature,Not available,Ranking of Institutions in Economic Research: a Threshold Citation Approach,8a6b71f4b591981b727002ad6ca36e02,http://dx.doi.org/10.1057/palgrave.eej.9050035
17675,"The high level of investigative activity to date into information systems and information technology acceptance and diffusion has witnessed the use of a wide range of exploratory techniques, examining many different systems and technologies in countless different contexts and geographical locations. The aim of this paper is to provide a comprehensive and systematic review of the literature pertaining to such adoption and diffusion issues in order to observe trends, ascertain the current ‘state of play’, and to highlight promising lines of inquiry including those lacking investigative activity or simply being in need of renewed interest. Previous research activity was analysed along a number of dimensions, including units of analysis, research paradigms, methodologies, and methods, theories and theoretical constructs, and technologies/contexts examined. Information on these and other variables was extracted during an examination of 345 papers on innovation adoption, acceptance and diffusion appearing in 19 peer-reviewed journals between 1985 and 2007. Findings suggest that the positivist paradigm, empirical and quantitative research, the survey method and Technology Acceptance Model theory (and its associated constructs) were predominantly used in the body of work examined, revealing clear opportunities for researchers to make original contributions by making greater use of the theoretical and methodological variety available to them, and consequently reducing the risk of research in the area moving toward overall homogeneity.
",michael williams,,2009.0,10.1057/jit.2008.30,Journal of Information Technology,Williams2009,Not available,,Nature,Not available,Contemporary trends and issues in IT adoption and diffusion research,edbba6b02d198eb118fa9f04909cfada,http://dx.doi.org/10.1057/jit.2008.30
17676,"The high level of investigative activity to date into information systems and information technology acceptance and diffusion has witnessed the use of a wide range of exploratory techniques, examining many different systems and technologies in countless different contexts and geographical locations. The aim of this paper is to provide a comprehensive and systematic review of the literature pertaining to such adoption and diffusion issues in order to observe trends, ascertain the current ‘state of play’, and to highlight promising lines of inquiry including those lacking investigative activity or simply being in need of renewed interest. Previous research activity was analysed along a number of dimensions, including units of analysis, research paradigms, methodologies, and methods, theories and theoretical constructs, and technologies/contexts examined. Information on these and other variables was extracted during an examination of 345 papers on innovation adoption, acceptance and diffusion appearing in 19 peer-reviewed journals between 1985 and 2007. Findings suggest that the positivist paradigm, empirical and quantitative research, the survey method and Technology Acceptance Model theory (and its associated constructs) were predominantly used in the body of work examined, revealing clear opportunities for researchers to make original contributions by making greater use of the theoretical and methodological variety available to them, and consequently reducing the risk of research in the area moving toward overall homogeneity.
",yogesh dwivedi,,2009.0,10.1057/jit.2008.30,Journal of Information Technology,Williams2009,Not available,,Nature,Not available,Contemporary trends and issues in IT adoption and diffusion research,edbba6b02d198eb118fa9f04909cfada,http://dx.doi.org/10.1057/jit.2008.30
17677,"The high level of investigative activity to date into information systems and information technology acceptance and diffusion has witnessed the use of a wide range of exploratory techniques, examining many different systems and technologies in countless different contexts and geographical locations. The aim of this paper is to provide a comprehensive and systematic review of the literature pertaining to such adoption and diffusion issues in order to observe trends, ascertain the current ‘state of play’, and to highlight promising lines of inquiry including those lacking investigative activity or simply being in need of renewed interest. Previous research activity was analysed along a number of dimensions, including units of analysis, research paradigms, methodologies, and methods, theories and theoretical constructs, and technologies/contexts examined. Information on these and other variables was extracted during an examination of 345 papers on innovation adoption, acceptance and diffusion appearing in 19 peer-reviewed journals between 1985 and 2007. Findings suggest that the positivist paradigm, empirical and quantitative research, the survey method and Technology Acceptance Model theory (and its associated constructs) were predominantly used in the body of work examined, revealing clear opportunities for researchers to make original contributions by making greater use of the theoretical and methodological variety available to them, and consequently reducing the risk of research in the area moving toward overall homogeneity.
",banita lal,,2009.0,10.1057/jit.2008.30,Journal of Information Technology,Williams2009,Not available,,Nature,Not available,Contemporary trends and issues in IT adoption and diffusion research,edbba6b02d198eb118fa9f04909cfada,http://dx.doi.org/10.1057/jit.2008.30
17678,"The high level of investigative activity to date into information systems and information technology acceptance and diffusion has witnessed the use of a wide range of exploratory techniques, examining many different systems and technologies in countless different contexts and geographical locations. The aim of this paper is to provide a comprehensive and systematic review of the literature pertaining to such adoption and diffusion issues in order to observe trends, ascertain the current ‘state of play’, and to highlight promising lines of inquiry including those lacking investigative activity or simply being in need of renewed interest. Previous research activity was analysed along a number of dimensions, including units of analysis, research paradigms, methodologies, and methods, theories and theoretical constructs, and technologies/contexts examined. Information on these and other variables was extracted during an examination of 345 papers on innovation adoption, acceptance and diffusion appearing in 19 peer-reviewed journals between 1985 and 2007. Findings suggest that the positivist paradigm, empirical and quantitative research, the survey method and Technology Acceptance Model theory (and its associated constructs) were predominantly used in the body of work examined, revealing clear opportunities for researchers to make original contributions by making greater use of the theoretical and methodological variety available to them, and consequently reducing the risk of research in the area moving toward overall homogeneity.
",andrew schwarz,,2009.0,10.1057/jit.2008.30,Journal of Information Technology,Williams2009,Not available,,Nature,Not available,Contemporary trends and issues in IT adoption and diffusion research,edbba6b02d198eb118fa9f04909cfada,http://dx.doi.org/10.1057/jit.2008.30
17679,"A local electricity distribution company (LDC) can reduce its exposure to the inherent risks of spot-price volatility and uncertain future demand via forward contracts. Management's problem is to determine the optimal forward-contract purchase. We propose a practical three-stage approach for dealing with the problem. The first stage determines an optimal purchase by solving a cost-constrained risk-minimization problem. The second stage derives the efficient frontier of tradeoffs between expected cost and cost risk from the first-stage solution, at various bounds on the expected cost. The optimal solution is found by melding the frontier with management's risk preferences. In the third stage, the model's parameters are estimated from data typically available to an LDC and used to determine its forward-contract purchase.
",r karimov,,2004.0,10.1057/palgrave.jors.2601769,Journal of the Operational Research Society,Woo2004,Not available,,Nature,Not available,The efficient frontier for spot and forward purchases: an application to electricity,506d88df0a3bfd2482ddddfe94512aaf,http://dx.doi.org/10.1057/palgrave.jors.2601769
17680,"This article aims to disturb the received wisdom ‘tidy house, tidy mind’ by tracing its emergence and consolidation: from psychoanalysis to clinical psychology through to philosophy and reality television. The contention here is that the commanding presence of the mirror as a clinical apparatus serves to eclipse a full consideration of the hoarding situation as one involving not only mental health professionals and clients, that is, ‘hoarders’, but also the materials of the heap – as the ‘hoard’ is read straightforwardly as a reflection of the hoarder’s mind. It is argued, further, that the conspicuous neglect of things, that is, material objects, in the modelling of the hoarding ‘problem’ – the aetiology of Hoarding Disorder is cast in entirely human terms – serves to frame ‘hoarders’ as individually culpable. By extending the forensic logic of both clinical and popular psychology, it is argued that such framing amounts to securing forced confessions, where hoarders are left to bear total responsibility for a situation, which is, ultimately, a question of distributed agency between human and non-human entities.
",tracey potts,,2015.0,10.1057/sub.2015.1,Subjectivity,Potts2015,Not available,,Nature,Not available,"Tidy house, tidy mind? Non-human agency in the hoarding situation",fc2b78695ed476447e920b6b4fe10468,http://dx.doi.org/10.1057/sub.2015.1
17681,"This article introduces the novel concept of smart business networks. The authors see the future as a developing web of people and organizations, bound together in a dynamic and unpredictable way, creating smart outcomes from quickly (re-) configuring links between actors. The question is: What should be done to make the outcomes of such a network ‘smart’, that is, just a little better than that of your competitor? More agile, with less pain, with more return to all the members of the network, now and over time? The technical answer is to create a ‘business operating system’ that should run business processes on different organizational platforms. Business processes would become portable: The end-to-end management of processes running across many different organizations in many different forms would become possible. This article presents an energizing discussion of smart business networks and the research challenges ahead.
",peter vervest,,2004.0,10.1057/palgrave.jit.2000024,Journal of Information Technology,Vervest2004,Not available,,Nature,Not available,The emergence of smart business networks,a65549cdb429a7fde05e8f27d3c39a1e,http://dx.doi.org/10.1057/palgrave.jit.2000024
17682,"This study relies on knowledge regarding the neuroplasticity of dual-system components that govern addiction and excessive behavior and suggests that alterations in the grey matter volumes, i.e., brain morphology, of specific regions of interest are associated with technology-related addictions. Using voxel based morphometry (VBM) applied to structural Magnetic Resonance Imaging (MRI) scans of twenty social network site (SNS) users with varying degrees of SNS addiction, we show that SNS addiction is associated with a presumably more efficient impulsive brain system, manifested through reduced grey matter volumes in the amygdala bilaterally (but not with structural differences in the Nucleus Accumbens). In this regard, SNS addiction is similar in terms of brain anatomy alterations to other (substance, gambling etc.) addictions. We also show that in contrast to other addictions in which the anterior-/ mid- cingulate cortex is impaired and fails to support the needed inhibition, which manifests through reduced grey matter volumes, this region is presumed to be healthy in our sample and its grey matter volume is positively correlated with one’s level of SNS addiction. These findings portray an anatomical morphology model of SNS addiction and point to brain morphology similarities and differences between technology addictions and substance and gambling addictions.
",antoine bechara,Brain,2017.0,10.1038/srep45064,Scientific Reports,He2017,Not available,,Nature,Not available,Brain anatomy alterations associated with Social Networking Site (SNS) addiction,355447eac1ec4959117118d905cd56d1,http://dx.doi.org/10.1038/srep45064
17683,"This article introduces the novel concept of smart business networks. The authors see the future as a developing web of people and organizations, bound together in a dynamic and unpredictable way, creating smart outcomes from quickly (re-) configuring links between actors. The question is: What should be done to make the outcomes of such a network ‘smart’, that is, just a little better than that of your competitor? More agile, with less pain, with more return to all the members of the network, now and over time? The technical answer is to create a ‘business operating system’ that should run business processes on different organizational platforms. Business processes would become portable: The end-to-end management of processes running across many different organizations in many different forms would become possible. This article presents an energizing discussion of smart business networks and the research challenges ahead.
",kenneth preiss,,2004.0,10.1057/palgrave.jit.2000024,Journal of Information Technology,Vervest2004,Not available,,Nature,Not available,The emergence of smart business networks,a65549cdb429a7fde05e8f27d3c39a1e,http://dx.doi.org/10.1057/palgrave.jit.2000024
17684,"This article introduces the novel concept of smart business networks. The authors see the future as a developing web of people and organizations, bound together in a dynamic and unpredictable way, creating smart outcomes from quickly (re-) configuring links between actors. The question is: What should be done to make the outcomes of such a network ‘smart’, that is, just a little better than that of your competitor? More agile, with less pain, with more return to all the members of the network, now and over time? The technical answer is to create a ‘business operating system’ that should run business processes on different organizational platforms. Business processes would become portable: The end-to-end management of processes running across many different organizations in many different forms would become possible. This article presents an energizing discussion of smart business networks and the research challenges ahead.
",eric heck,,2004.0,10.1057/palgrave.jit.2000024,Journal of Information Technology,Vervest2004,Not available,,Nature,Not available,The emergence of smart business networks,a65549cdb429a7fde05e8f27d3c39a1e,http://dx.doi.org/10.1057/palgrave.jit.2000024
17685,"This article introduces the novel concept of smart business networks. The authors see the future as a developing web of people and organizations, bound together in a dynamic and unpredictable way, creating smart outcomes from quickly (re-) configuring links between actors. The question is: What should be done to make the outcomes of such a network ‘smart’, that is, just a little better than that of your competitor? More agile, with less pain, with more return to all the members of the network, now and over time? The technical answer is to create a ‘business operating system’ that should run business processes on different organizational platforms. Business processes would become portable: The end-to-end management of processes running across many different organizations in many different forms would become possible. This article presents an energizing discussion of smart business networks and the research challenges ahead.
",louis-francois pau,,2004.0,10.1057/palgrave.jit.2000024,Journal of Information Technology,Vervest2004,Not available,,Nature,Not available,The emergence of smart business networks,a65549cdb429a7fde05e8f27d3c39a1e,http://dx.doi.org/10.1057/palgrave.jit.2000024
17686,"Virtual communities and social networks assume and consume more aspects of people's lives. In these evolving social spaces, the boundaries between actual and virtual reality, between living individuals and their virtual bodies, and between private and public domains are becoming ever more blurred. As a result, users and their presentations of self, as expressed through virtual bodies, are increasingly entangled. Consequently, more and more Internet users are cyborgs. For this reason, the ethical guidelines necessary for Internet research need to be revisited. We contend that the IS community has paid insufficient attention to the ethics of Internet research. To this end, we develop an understanding of issues related to online human subjects research by distinguishing between a disembodied and an entangled view of the Internet. We outline a framework to guide investigators and research ethics committees in answering a key question in the age of cyborgism: When does a proposed Internet study deal with human subjects as opposed to digital material?
",ulrike schultze,,2012.0,10.1057/jit.2012.30,Journal of Information Technology,Schultze2012,Not available,,Nature,Not available,Studying cyborgs: re-examining internet studies as human subjects research,baa4778b07abe8675054036aa05bb4fa,http://dx.doi.org/10.1057/jit.2012.30
17687,"Virtual communities and social networks assume and consume more aspects of people's lives. In these evolving social spaces, the boundaries between actual and virtual reality, between living individuals and their virtual bodies, and between private and public domains are becoming ever more blurred. As a result, users and their presentations of self, as expressed through virtual bodies, are increasingly entangled. Consequently, more and more Internet users are cyborgs. For this reason, the ethical guidelines necessary for Internet research need to be revisited. We contend that the IS community has paid insufficient attention to the ethics of Internet research. To this end, we develop an understanding of issues related to online human subjects research by distinguishing between a disembodied and an entangled view of the Internet. We outline a framework to guide investigators and research ethics committees in answering a key question in the age of cyborgism: When does a proposed Internet study deal with human subjects as opposed to digital material?
",richard mason,,2012.0,10.1057/jit.2012.30,Journal of Information Technology,Schultze2012,Not available,,Nature,Not available,Studying cyborgs: re-examining internet studies as human subjects research,baa4778b07abe8675054036aa05bb4fa,http://dx.doi.org/10.1057/jit.2012.30
17688,"INTRODUCTORY REMARKS LeBaron: Agent-based economics, and more generally agent-based social sciences, have been around in various forms for over 30 years. The advent of higher speed computing and new tools for the computational learning fields led to a major increase in activity in the early 1990s through today.
",jason barr,,2008.0,10.1057/eej.2008.31,Eastern Economic Journal,Barr2008,Not available,,Nature,Not available,"The Future of Agent-Based Research in Economics: A Panel Discussion, Eastern Economic Association Annual Meetings, Boston, March 7, 20081",68ddd1863e999ebf2758f2f01faad9ea,http://dx.doi.org/10.1057/eej.2008.31
17689,"INTRODUCTORY REMARKS LeBaron: Agent-based economics, and more generally agent-based social sciences, have been around in various forms for over 30 years. The advent of higher speed computing and new tools for the computational learning fields led to a major increase in activity in the early 1990s through today.
",troy tassier,,2008.0,10.1057/eej.2008.31,Eastern Economic Journal,Barr2008,Not available,,Nature,Not available,"The Future of Agent-Based Research in Economics: A Panel Discussion, Eastern Economic Association Annual Meetings, Boston, March 7, 20081",68ddd1863e999ebf2758f2f01faad9ea,http://dx.doi.org/10.1057/eej.2008.31
17690,"INTRODUCTORY REMARKS LeBaron: Agent-based economics, and more generally agent-based social sciences, have been around in various forms for over 30 years. The advent of higher speed computing and new tools for the computational learning fields led to a major increase in activity in the early 1990s through today.
",leanne ussher,,2008.0,10.1057/eej.2008.31,Eastern Economic Journal,Barr2008,Not available,,Nature,Not available,"The Future of Agent-Based Research in Economics: A Panel Discussion, Eastern Economic Association Annual Meetings, Boston, March 7, 20081",68ddd1863e999ebf2758f2f01faad9ea,http://dx.doi.org/10.1057/eej.2008.31
17691,"This study relies on knowledge regarding the neuroplasticity of dual-system components that govern addiction and excessive behavior and suggests that alterations in the grey matter volumes, i.e., brain morphology, of specific regions of interest are associated with technology-related addictions. Using voxel based morphometry (VBM) applied to structural Magnetic Resonance Imaging (MRI) scans of twenty social network site (SNS) users with varying degrees of SNS addiction, we show that SNS addiction is associated with a presumably more efficient impulsive brain system, manifested through reduced grey matter volumes in the amygdala bilaterally (but not with structural differences in the Nucleus Accumbens). In this regard, SNS addiction is similar in terms of brain anatomy alterations to other (substance, gambling etc.) addictions. We also show that in contrast to other addictions in which the anterior-/ mid- cingulate cortex is impaired and fails to support the needed inhibition, which manifests through reduced grey matter volumes, this region is presumed to be healthy in our sample and its grey matter volume is positively correlated with one’s level of SNS addiction. These findings portray an anatomical morphology model of SNS addiction and point to brain morphology similarities and differences between technology addictions and substance and gambling addictions.
",antoine bechara,Brain imaging,2017.0,10.1038/srep45064,Scientific Reports,He2017,Not available,,Nature,Not available,Brain anatomy alterations associated with Social Networking Site (SNS) addiction,355447eac1ec4959117118d905cd56d1,http://dx.doi.org/10.1038/srep45064
17692,"INTRODUCTORY REMARKS LeBaron: Agent-based economics, and more generally agent-based social sciences, have been around in various forms for over 30 years. The advent of higher speed computing and new tools for the computational learning fields led to a major increase in activity in the early 1990s through today.
",blake lebaron,,2008.0,10.1057/eej.2008.31,Eastern Economic Journal,Barr2008,Not available,,Nature,Not available,"The Future of Agent-Based Research in Economics: A Panel Discussion, Eastern Economic Association Annual Meetings, Boston, March 7, 20081",68ddd1863e999ebf2758f2f01faad9ea,http://dx.doi.org/10.1057/eej.2008.31
17693,"INTRODUCTORY REMARKS LeBaron: Agent-based economics, and more generally agent-based social sciences, have been around in various forms for over 30 years. The advent of higher speed computing and new tools for the computational learning fields led to a major increase in activity in the early 1990s through today.
",shu-heng chen,,2008.0,10.1057/eej.2008.31,Eastern Economic Journal,Barr2008,Not available,,Nature,Not available,"The Future of Agent-Based Research in Economics: A Panel Discussion, Eastern Economic Association Annual Meetings, Boston, March 7, 20081",68ddd1863e999ebf2758f2f01faad9ea,http://dx.doi.org/10.1057/eej.2008.31
17694,"INTRODUCTORY REMARKS LeBaron: Agent-based economics, and more generally agent-based social sciences, have been around in various forms for over 30 years. The advent of higher speed computing and new tools for the computational learning fields led to a major increase in activity in the early 1990s through today.
",shyam sunder,,2008.0,10.1057/eej.2008.31,Eastern Economic Journal,Barr2008,Not available,,Nature,Not available,"The Future of Agent-Based Research in Economics: A Panel Discussion, Eastern Economic Association Annual Meetings, Boston, March 7, 20081",68ddd1863e999ebf2758f2f01faad9ea,http://dx.doi.org/10.1057/eej.2008.31
17695,"Trust is a complex phenomenon that pervades human relations. It is essential for the success of business-to-consumer electronic commerce, where many of the tools that can be used in its absence (contracts, advance payments, insurance, etc.) may not be available. However, research as to how consumer trust can be built in an online environment is limited and varies considerably in terms of the dimensions of the problem that are examined. Consequently, much of our understanding of the antecedents of trust in online shopping context remains fragmented. This study uses a previously validated measurement instrument to investigate, in an Irish context, the existence and importance of specific perceptions and factors that are thought to predict the generation of consumer trust in Internet shopping. The research results provide evidence that Irish consumers' perception of vendor trustworthiness is the result of specific factors that it is possible for vendors to manage. A modified model that addresses the key dimensions of consumer trust in Internet shopping in Ireland is proposed.
",regina connolly,,2006.0,10.1057/palgrave.jit.2000071,Journal of Information Technology,Connolly2006,Not available,,Nature,Not available,Consumer trust in Internet shopping in Ireland: towards the development of a more effective trust measurement instrument,a8062b3821d6eb2422842faac362a16a,http://dx.doi.org/10.1057/palgrave.jit.2000071
17696,"Trust is a complex phenomenon that pervades human relations. It is essential for the success of business-to-consumer electronic commerce, where many of the tools that can be used in its absence (contracts, advance payments, insurance, etc.) may not be available. However, research as to how consumer trust can be built in an online environment is limited and varies considerably in terms of the dimensions of the problem that are examined. Consequently, much of our understanding of the antecedents of trust in online shopping context remains fragmented. This study uses a previously validated measurement instrument to investigate, in an Irish context, the existence and importance of specific perceptions and factors that are thought to predict the generation of consumer trust in Internet shopping. The research results provide evidence that Irish consumers' perception of vendor trustworthiness is the result of specific factors that it is possible for vendors to manage. A modified model that addresses the key dimensions of consumer trust in Internet shopping in Ireland is proposed.
",frank bannister,,2006.0,10.1057/palgrave.jit.2000071,Journal of Information Technology,Connolly2006,Not available,,Nature,Not available,Consumer trust in Internet shopping in Ireland: towards the development of a more effective trust measurement instrument,a8062b3821d6eb2422842faac362a16a,http://dx.doi.org/10.1057/palgrave.jit.2000071
17697,"We provide an experimental analysis of competitive insurance markets with adverse selection. Our parameterised version of the lemons model of Akerlof in the insurance context predicts total crowding-out of low risks when insurers offer a single full insurance contract. The therapy proposed by Rothschild and Stiglitz consists of adding a partial insurance contract so as to obtain self-selection of risks. We test the theoretical predictions of these two models in two experiments. A clean test is obtained by matching the parameters of these experiments and by controlling for the risk neutrality of insurers and the common risk aversion of their clients by means of the binary lottery procedure. The results reveal a partial crowding-out of low risks in the first experiment. Crowding-out is not eliminated in the second experiment and it is not even significantly reduced. Finally, instead of the predicted separating equilibrium, we find pooling equilibria. The latter can be sustained because insureds who objectively differ in their risk level do not perceive themselves as being so much different.
",dorra riahi,,2013.0,10.1057/grir.2012.5,The Geneva Risk and Insurance Review,Riahi2013,Not available,,Nature,Not available,Competitive Insurance Markets and Adverse Selection in the Lab,4ce52d13710fd3a378b6805a2215ef73,http://dx.doi.org/10.1057/grir.2012.5
17698,"We provide an experimental analysis of competitive insurance markets with adverse selection. Our parameterised version of the lemons’ model of Akerlof in the insurance context predicts total crowding-out of low risks when insurers offer a single full insurance contract. The therapy proposed by Rothschild and Stiglitz consists of adding a partial insurance contract so as to obtain self-selection of risks. We test the theoretical predictions of these two models in two experiments. A clean test is obtained by matching the parameters of these experiments and by controlling for the risk neutrality of insurers and the common risk aversion of their clients by means of the binary lottery procedure. The results reveal a partial crowding-out of low risks in the first experiment. Crowding-out is not eliminated in the second experiment and it is not even significantly reduced. Finally, instead of the predicted separating equilibrium, we find pooling equilibria. The latter can be sustained because insureds who objectively differ in their risk level do not perceive themselves as being so much different.
",louis levy-garboua,,2013.0,10.1057/grir.2012.5,The Geneva Risk and Insurance Review,Riahi2013,Not available,,Nature,Not available,Competitive Insurance Markets and Adverse Selection in the Lab,4ce52d13710fd3a378b6805a2215ef73,http://dx.doi.org/10.1057/grir.2012.5
17699,"We provide an experimental analysis of competitive insurance markets with adverse selection. Our parameterised version of the lemons’ model of Akerlof in the insurance context predicts total crowding-out of low risks when insurers offer a single full insurance contract. The therapy proposed by Rothschild and Stiglitz consists of adding a partial insurance contract so as to obtain self-selection of risks. We test the theoretical predictions of these two models in two experiments. A clean test is obtained by matching the parameters of these experiments and by controlling for the risk neutrality of insurers and the common risk aversion of their clients by means of the binary lottery procedure. The results reveal a partial crowding-out of low risks in the first experiment. Crowding-out is not eliminated in the second experiment and it is not even significantly reduced. Finally, instead of the predicted separating equilibrium, we find pooling equilibria. The latter can be sustained because insureds who objectively differ in their risk level do not perceive themselves as being so much different.
",claude montmarquette,,2013.0,10.1057/grir.2012.5,The Geneva Risk and Insurance Review,Riahi2013,Not available,,Nature,Not available,Competitive Insurance Markets and Adverse Selection in the Lab,4ce52d13710fd3a378b6805a2215ef73,http://dx.doi.org/10.1057/grir.2012.5
17700,"The software industry is changing as a result of the rising influence both of packaged and of Free/Libre/Open Source Software (FLOSS), but the change trajectory of the industry is still not well understood. This paper aims to contribute to clarifying software industry evolution through a longitudinal study, using Industry Change Trajectory Theory to explain and predict the evolution of the Content Management Systems (CMS) segment and the extent to which its results can be generalized to the overall software industry. Our data analysis shows that CMS players are experiencing a modification of their segment's change trajectory. While McGahan in 2004 recognized that the software industry was in a creative change trajectory, it has subsequently faced strong competition on its core assets (i.e. applications), and the empirical results of our longitudinal study from 2002 to 2007 show the CMS segment is now in a radical change trajectory, due to the rapid obsolescence of its core activities. Changes affecting the segment include the continuous development of CMS applications, the faster extension of functions for FLOSS CMS than for packaged CMS, and the diffusion of the practice of providing services as well as delivering software solutions.
",claudio vitari,,2009.0,10.1057/ejis.2009.13,European Journal of Information Systems,Vitari2009,Not available,,Nature,Not available,A longitudinal analysis of trajectory changes in the software industry: the case of the content management application segment,c8a74cda441aa5b81eb9765041ee6aa3,http://dx.doi.org/10.1057/ejis.2009.13
17701,"The software industry is changing as a result of the rising influence both of packaged and of Free/Libre/Open Source Software (FLOSS), but the change trajectory of the industry is still not well understood. This paper aims to contribute to clarifying software industry evolution through a longitudinal study, using Industry Change Trajectory Theory to explain and predict the evolution of the Content Management Systems (CMS) segment and the extent to which its results can be generalized to the overall software industry. Our data analysis shows that CMS players are experiencing a modification of their segment's change trajectory. While McGahan in 2004 recognized that the software industry was in a creative change trajectory, it has subsequently faced strong competition on its core assets (i.e. applications), and the empirical results of our longitudinal study from 2002 to 2007 show the CMS segment is now in a radical change trajectory, due to the rapid obsolescence of its core activities. Changes affecting the segment include the continuous development of CMS applications, the faster extension of functions for FLOSS CMS than for packaged CMS, and the diffusion of the practice of providing services as well as delivering software solutions.
",aurelio ravarini,,2009.0,10.1057/ejis.2009.13,European Journal of Information Systems,Vitari2009,Not available,,Nature,Not available,A longitudinal analysis of trajectory changes in the software industry: the case of the content management application segment,c8a74cda441aa5b81eb9765041ee6aa3,http://dx.doi.org/10.1057/ejis.2009.13
17702,"An effective Decision Support System (DSS) should help its users improve decision making in complex, information-rich environments. We present a feature gap analysis that shows that current decision support technologies lack important qualities for a new generation of agile business models that require easy, temporary integration across organisational boundaries. We enumerate these qualities as DSS Desiderata, properties that can contribute both effectiveness and flexibility to users in such environments. To address this gap, we describe a new design approach that enables users to compose decision behaviours from separate, configurable components, and allows dynamic construction of analysis and modelling tools from small, single-purpose evaluator services. The result is what we call an ‘evaluator service network’ that can easily be configured to test hypotheses and analyse the impact of various choices for elements of decision processes. We have implemented and tested this design in an interactive version of the MinneTAC trading agent, an agent designed for the Trading Agent Competition for Supply Chain Management.
",john collins,,2010.0,10.1057/ejis.2010.24,European Journal of Information Systems,Collins2010,Not available,,Nature,Not available,Flexible decision support in dynamic inter-organisational networks,02fa88455c665ac55bd2b00324e9a0a2,http://dx.doi.org/10.1057/ejis.2010.24
17703,"Website designers are beginning to incorporate social cues, such as helpfulness and familiarity, into e-commerce sites to facilitate the exchange relationship. Website socialness elicits a social response from users of the site and this response produces enjoyment. Users patronize websites that are exciting, entertaining and stimulating. The purpose of our study is to explore the effects of website socialness perceptions on the formation of users’ beliefs, attitudes and subsequent behavioral intentions. We manipulate website socialness perceptions across two different online shopping contexts, one for functional products and the other for pleasure-oriented products, and draw from the responses of 300 Internet users. Our findings show that website socialness perceptions lead to enjoyment, have a strong influence on user intentions and these effects are invariant across shopping contexts.
",robin wakefield,,2010.0,10.1057/ejis.2010.47,European Journal of Information Systems,Wakefield2010,Not available,,Nature,Not available,How website socialness leads to website use,82ee69c51e0f5559c46f18d5652073ba,http://dx.doi.org/10.1057/ejis.2010.47
17704,"Website designers are beginning to incorporate social cues, such as helpfulness and familiarity, into e-commerce sites to facilitate the exchange relationship. Website socialness elicits a social response from users of the site and this response produces enjoyment. Users patronize websites that are exciting, entertaining and stimulating. The purpose of our study is to explore the effects of website socialness perceptions on the formation of users’ beliefs, attitudes and subsequent behavioral intentions. We manipulate website socialness perceptions across two different online shopping contexts, one for functional products and the other for pleasure-oriented products, and draw from the responses of 300 Internet users. Our findings show that website socialness perceptions lead to enjoyment, have a strong influence on user intentions and these effects are invariant across shopping contexts.
",kirk wakefield,,2010.0,10.1057/ejis.2010.47,European Journal of Information Systems,Wakefield2010,Not available,,Nature,Not available,How website socialness leads to website use,82ee69c51e0f5559c46f18d5652073ba,http://dx.doi.org/10.1057/ejis.2010.47
17705,"Website designers are beginning to incorporate social cues, such as helpfulness and familiarity, into e-commerce sites to facilitate the exchange relationship. Website socialness elicits a social response from users of the site and this response produces enjoyment. Users patronize websites that are exciting, entertaining and stimulating. The purpose of our study is to explore the effects of website socialness perceptions on the formation of users’ beliefs, attitudes and subsequent behavioral intentions. We manipulate website socialness perceptions across two different online shopping contexts, one for functional products and the other for pleasure-oriented products, and draw from the responses of 300 Internet users. Our findings show that website socialness perceptions lead to enjoyment, have a strong influence on user intentions and these effects are invariant across shopping contexts.
",julie baker,,2010.0,10.1057/ejis.2010.47,European Journal of Information Systems,Wakefield2010,Not available,,Nature,Not available,How website socialness leads to website use,82ee69c51e0f5559c46f18d5652073ba,http://dx.doi.org/10.1057/ejis.2010.47
17706,"Website designers are beginning to incorporate social cues, such as helpfulness and familiarity, into e-commerce sites to facilitate the exchange relationship. Website socialness elicits a social response from users of the site and this response produces enjoyment. Users patronize websites that are exciting, entertaining and stimulating. The purpose of our study is to explore the effects of website socialness perceptions on the formation of users’ beliefs, attitudes and subsequent behavioral intentions. We manipulate website socialness perceptions across two different online shopping contexts, one for functional products and the other for pleasure-oriented products, and draw from the responses of 300 Internet users. Our findings show that website socialness perceptions lead to enjoyment, have a strong influence on user intentions and these effects are invariant across shopping contexts.
",liz wang,,2010.0,10.1057/ejis.2010.47,European Journal of Information Systems,Wakefield2010,Not available,,Nature,Not available,How website socialness leads to website use,82ee69c51e0f5559c46f18d5652073ba,http://dx.doi.org/10.1057/ejis.2010.47
17707,"There is no more important process than the way a business makes pricing decisions. Companies can no longer afford to fail in their pricing decisions: all products and services must be priced right, all the time. Today's rapidly changing market conditions make determining the right price extremely complicated. Accordingly, the field of scientific pricing is in the midst of revolutionary changes. This paper calls attention to some of these changes and makes ‘guesstimates’ about emerging trends. These involve the topics of competitive pricing, information asymmetry, segmentation, performance evaluation, price sensitivity and pricing education.
",harun kuyumcu,,2007.0,10.1057/palgrave.rpm.5160101,Journal of Revenue and Pricing Management,Kuyumcu2007,Not available,,Nature,Not available,Emerging trends in scientific pricing,21ade9e55925431b708cb93f4b12a812,http://dx.doi.org/10.1057/palgrave.rpm.5160101
17708,"We consider a market with two capacity providers – an entrant and an incumbent – each with fixed capacity, who compete to sell in a spot market and a forward market. Prices are fixed and the providers make strategic capacity allocation decisions. The model is designed to study the competitive interactions between a low cost entrant, who sells at a lower price in both the forward and the spot market, and an established incumbent. The key question we investigate is the type of equilibrium behaviour we could expect from the two providers under different assumptions about market structure, demand and available capacity. We study two cases: (a) when there is a single buyer and (b) when there are multiple independent consumers. For the single buyer case, we identify the conditions under which the buyer would buy forward solely from the entrant or solely from the incumbent or from both. The conditions for the existence of a pure strategy Nash equilibrium for the game between the two providers are established. For the case of multiple independent consumers, we identify competitive strategies for the entrant and incumbent and establish conditions for a pure strategy Nash equilibrium. The results show how competitive considerations can motivate capacity providers to offer capacity in a forward market even in the absence of market segmentation.
",guillermo gallego,,2009.0,10.1057/rpm.2009.13,Journal of Revenue and Pricing Management,Gallego2009,Not available,,Nature,Not available,Competitive revenue management with forward and spot markets,6a08bcdb7e42171b657e8d381a68a8e0,http://dx.doi.org/10.1057/rpm.2009.13
17709,"We consider a market with two capacity providers – an entrant and an incumbent – each with fixed capacity, who compete to sell in a spot market and a forward market. Prices are fixed and the providers make strategic capacity allocation decisions. The model is designed to study the competitive interactions between a low cost entrant, who sells at a lower price in both the forward and the spot market, and an established incumbent. The key question we investigate is the type of equilibrium behaviour we could expect from the two providers under different assumptions about market structure, demand and available capacity. We study two cases: (a) when there is a single buyer and (b) when there are multiple independent consumers. For the single buyer case, we identify the conditions under which the buyer would buy forward solely from the entrant or solely from the incumbent or from both. The conditions for the existence of a pure strategy Nash equilibrium for the game between the two providers are established. For the case of multiple independent consumers, we identify competitive strategies for the entrant and incumbent and establish conditions for a pure strategy Nash equilibrium. The results show how competitive considerations can motivate capacity providers to offer capacity in a forward market even in the absence of market segmentation.
",srinivas krishnamoorthy,,2009.0,10.1057/rpm.2009.13,Journal of Revenue and Pricing Management,Gallego2009,Not available,,Nature,Not available,Competitive revenue management with forward and spot markets,6a08bcdb7e42171b657e8d381a68a8e0,http://dx.doi.org/10.1057/rpm.2009.13
17710,"We consider a market with two capacity providers – an entrant and an incumbent – each with fixed capacity, who compete to sell in a spot market and a forward market. Prices are fixed and the providers make strategic capacity allocation decisions. The model is designed to study the competitive interactions between a low cost entrant, who sells at a lower price in both the forward and the spot market, and an established incumbent. The key question we investigate is the type of equilibrium behaviour we could expect from the two providers under different assumptions about market structure, demand and available capacity. We study two cases: (a) when there is a single buyer and (b) when there are multiple independent consumers. For the single buyer case, we identify the conditions under which the buyer would buy forward solely from the entrant or solely from the incumbent or from both. The conditions for the existence of a pure strategy Nash equilibrium for the game between the two providers are established. For the case of multiple independent consumers, we identify competitive strategies for the entrant and incumbent and establish conditions for a pure strategy Nash equilibrium. The results show how competitive considerations can motivate capacity providers to offer capacity in a forward market even in the absence of market segmentation.
",robert phillips,,2009.0,10.1057/rpm.2009.13,Journal of Revenue and Pricing Management,Gallego2009,Not available,,Nature,Not available,Competitive revenue management with forward and spot markets,6a08bcdb7e42171b657e8d381a68a8e0,http://dx.doi.org/10.1057/rpm.2009.13
17711,"This paper addresses the issue of truth and knowledge in management generally and knowledge management in particular. Based on ideas from critical realism and critical theory, it argues against the monovalent conceptualization of knowledge implicitly or explicitly held by many authors and aims instead to develop a characterization that recognizes the rich and varied ways in which human beings may be said ‘to know’. It points out and conceptualizes a fundamental dimension of knowledge that is generally ignored or cursorily treated within the literature, that is, ‘truth’. It identifies four forms of knowledge – propositional, experiential, performative and epistemological – and explores their characteristics, especially in terms of truth and validity. It points out some implications for knowledge management.
",john mingers,,2008.0,10.1057/palgrave.kmrp.8500161,Knowledge Management Research & Practice,Mingers2008,Not available,,Nature,Not available,Management knowledge and knowledge management: realism and forms of truth,616d1fd4b005a07860325be3bf9df509,http://dx.doi.org/10.1057/palgrave.kmrp.8500161
17712,"An effective Decision Support System (DSS) should help its users improve decision making in complex, information-rich environments. We present a feature gap analysis that shows that current decision support technologies lack important qualities for a new generation of agile business models that require easy, temporary integration across organisational boundaries. We enumerate these qualities as DSS Desiderata, properties that can contribute both effectiveness and flexibility to users in such environments. To address this gap, we describe a new design approach that enables users to compose decision behaviours from separate, configurable components, and allows dynamic construction of analysis and modelling tools from small, single-purpose evaluator services. The result is what we call an ‘evaluator service network’ that can easily be configured to test hypotheses and analyse the impact of various choices for elements of decision processes. We have implemented and tested this design in an interactive version of the MinneTAC trading agent, an agent designed for the Trading Agent Competition for Supply Chain Management.
",wolfgang ketter,,2010.0,10.1057/ejis.2010.24,European Journal of Information Systems,Collins2010,Not available,,Nature,Not available,Flexible decision support in dynamic inter-organisational networks,02fa88455c665ac55bd2b00324e9a0a2,http://dx.doi.org/10.1057/ejis.2010.24
17713,"Within social situations individuals can be punished with social ostracism. Ostracized individuals are removed from social aspects of the group but remain formal members. Social ostracism may occur in the workplace when workers produce a joint good among their inputs. Since workers are homogeneous, no worker has the ability to unilaterally punish free-riding behavior. Yet, the group as a whole has the capacity to punish free-riding group members with social punishments. We examine the effectiveness of social ostracism as a punishment for free-riding. We find that social ostracism helps maintain cooperation but only after prior experience without possible social punishment.
",brent davis,,2014.0,10.1057/eej.2014.2,Eastern Economic Journal,Davis2014,Not available,,Nature,Not available,Water Cooler Ostracism: Social Exclusion as a Punishment Mechanism,1d792ef12641768818c9148ffa434c3d,http://dx.doi.org/10.1057/eej.2014.2
17714,"Within social situations individuals can be punished with social ostracism. Ostracized individuals are removed from social aspects of the group but remain formal members. Social ostracism may occur in the workplace when workers produce a joint good among their inputs. Since workers are homogeneous, no worker has the ability to unilaterally punish free-riding behavior. Yet, the group as a whole has the capacity to punish free-riding group members with social punishments. We examine the effectiveness of social ostracism as a punishment for free-riding. We find that social ostracism helps maintain cooperation but only after prior experience without possible social punishment.
",david johnson,,2014.0,10.1057/eej.2014.2,Eastern Economic Journal,Davis2014,Not available,,Nature,Not available,Water Cooler Ostracism: Social Exclusion as a Punishment Mechanism,1d792ef12641768818c9148ffa434c3d,http://dx.doi.org/10.1057/eej.2014.2
17715,"Studies of reward learning have implicated the striatum as part of a neural circuit that guides and adjusts future behavior on the basis of reward feedback. Here we investigate whether prior social and moral information about potential trading partners affects this neural circuitry. Participants made risky choices about whether to trust hypothetical trading partners after having read vivid descriptions of life events indicating praiseworthy, neutral or suspect moral character. Despite equivalent reinforcement rates for all partners, participants were persistently more likely to make risky choices with the 'good' partner. As expected from previous studies, activation of the caudate nucleus differentiated between positive and negative feedback, but only for the 'neutral' partner. Notably, it did not do so for the 'good' partner and did so only weakly for the 'bad' partner, suggesting that prior social and moral perceptions can diminish reliance on feedback mechanisms in the neural circuitry of trial-and-error reward learning.
",m delgado,,2005.0,10.1038/nn1575,Nature Neuroscience,Delgado2005,Not available,,Nature,Not available,Perceptions of moral character modulate the neural systems of reward during the trust game,64e60158a54817fa797767d1965aa2cd,http://dx.doi.org/10.1038/nn1575
17716,"Studies of reward learning have implicated the striatum as part of a neural circuit that guides and adjusts future behavior on the basis of reward feedback. Here we investigate whether prior social and moral information about potential trading partners affects this neural circuitry. Participants made risky choices about whether to trust hypothetical trading partners after having read vivid descriptions of life events indicating praiseworthy, neutral or suspect moral character. Despite equivalent reinforcement rates for all partners, participants were persistently more likely to make risky choices with the 'good' partner. As expected from previous studies, activation of the caudate nucleus differentiated between positive and negative feedback, but only for the 'neutral' partner. Notably, it did not do so for the 'good' partner and did so only weakly for the 'bad' partner, suggesting that prior social and moral perceptions can diminish reliance on feedback mechanisms in the neural circuitry of trial-and-error reward learning.
",r frank,,2005.0,10.1038/nn1575,Nature Neuroscience,Delgado2005,Not available,,Nature,Not available,Perceptions of moral character modulate the neural systems of reward during the trust game,64e60158a54817fa797767d1965aa2cd,http://dx.doi.org/10.1038/nn1575
17717,"Studies of reward learning have implicated the striatum as part of a neural circuit that guides and adjusts future behavior on the basis of reward feedback. Here we investigate whether prior social and moral information about potential trading partners affects this neural circuitry. Participants made risky choices about whether to trust hypothetical trading partners after having read vivid descriptions of life events indicating praiseworthy, neutral or suspect moral character. Despite equivalent reinforcement rates for all partners, participants were persistently more likely to make risky choices with the 'good' partner. As expected from previous studies, activation of the caudate nucleus differentiated between positive and negative feedback, but only for the 'neutral' partner. Notably, it did not do so for the 'good' partner and did so only weakly for the 'bad' partner, suggesting that prior social and moral perceptions can diminish reliance on feedback mechanisms in the neural circuitry of trial-and-error reward learning.
",e phelps,,2005.0,10.1038/nn1575,Nature Neuroscience,Delgado2005,Not available,,Nature,Not available,Perceptions of moral character modulate the neural systems of reward during the trust game,64e60158a54817fa797767d1965aa2cd,http://dx.doi.org/10.1038/nn1575
17718,"INTRODUCTION
ONE OF the most striking features of modern society is the growing interest shown by every man and woman in the working of the society of which they are a part. However, when as responsible citizens they try to understand this subject they are faced with complexities, indeed with apparent contradictions, of a magnitude that makes
it difficult for decisions to be made or accepted.
",charles goodeve,,1978.0,10.1057/jors.1978.67,Journal of the Operational Research Society,Goodeve1978,Not available,,Nature,Not available,Science and Social Conflict,10c501ad7a47f067eb730f9479c34ad8,http://dx.doi.org/10.1057/jors.1978.67
17720,"This paper presents an analytical discussion of IMF conditionality based on the theory of special interest politics. We outline a simple political–economy model of special interest group politics, extended to include the interaction of the IMF with the government of a country making use of IMF resources. Conditional lending turns the IMF into a benevolent lobby that can exert beneficial impacts on the government's policy choices. In addition to addressing the international spillover effects of national economic policies, conditionality can help reduce policy inefficiencies generated by domestic conflicts of interest and limited ownership.
",wolfgang mayer,,2004.0,10.1057/palgrave.ces.8100064,Comparative Economic Studies,Mayer2004,Not available,,Nature,Not available,IMF Conditionality and the Theory of Special Interest Politics,327692dc30b5b89eb7bf157bad371fe3,http://dx.doi.org/10.1057/palgrave.ces.8100064