Learning, favoritism and incentive provision within organizations
This doctoral dissertation provides new theoretical and empirical analyses of employer learning and its impact on incentive provision within organizations.

In the first chapter, using 20 years of personnel data from a large U.S. firm, we show that employee performance displays a distinctive pattern that cannot be explained by human capital or incentive theories under the classical principal-agent framework. To explain this pattern, we propose an enriched principal-manager-employee framework that captures real-life complications such as favoritism and influence activities. We show that supervisors are disciplined to give less biased subjective evaluations under promotion-based incentive schemes than under bonus-based schemes, and that the cost of wasteful influence activities can constrain the firm's ability to optimize employee effort in a way that generates the equilibrium performance patterns we observe in the data.

In the second chapter, we study the credibility of the firing threat, which is widely used as a disciplinary device in the workplace. Despite its prevalence, the theoretical foundations of the credibility of firing threats are not well understood. When firing is costly to the employer, carrying out a firing threat is not credible unless the employee's misbehavior is associated with a decrease in expected future returns. We explore the role of learning in ensuring the credibility of firing threats and show that a certain level of uncertainty is necessary to effectively induce compliance. The Peter Principle arises as an outcome of the model: workers who are known to be competent can almost certainly no longer be disciplined and must be promoted to more difficult tasks, even though they may be less productive at those tasks.

In the third chapter, we propose a new method to test for asymmetric learning in a multi-period framework and derive testable implications based on easily observable dynamic wage patterns. We test the model's predictions using NLSY97 data. The empirical results are consistent with symmetric learning and show no evidence of asymmetric learning.