Hi, I’m Alex.

I’m an Assistant Professor of Economics at the University of Regensburg. I use economic experiments to study human behavior in (mostly) labor contexts. Currently I’m exploring preferences for using AI in the workplace. I am also interested in the methodology of experimental economics. I approach research from a holistic perspective, combining behavioral insights, economic modeling, experimental design, structural econometric methods, and machine learning. My work has been published in journals such as Experimental Economics, the Journal of Economic Behavior & Organization, the Journal of the Economic Science Association, Annals of Finance, and Mathematical Social Sciences.

Interests

  • Applied microeconomics
  • Behavioral/experimental economics
  • Labor/education economics
  • Organizational/personnel economics
  • Economics of AI
  • Empirical methods in economics

Education

  • Ph.D. in Economics, 2018

    Georgia State University

  • M.A. in Economics, 2011

    European University at St. Petersburg

  • Specialist (B.A. equivalent) in Economics (Summa Cum Laude), 2009

    St. Petersburg State University

Academic Positions

Assistant Professor, non-tenure track (Akademischer Rat auf Zeit)

Department of Economics | University of Regensburg

Sep 2020 – Present | Regensburg, Germany

Postdoctoral Fellow

Economic Science Institute | Chapman University

Sep 2018 – Aug 2020 | Orange, California

Journal Articles

Experiments on the Fly

How do exogenous increases in resources to a government affect its expenditure decisions? Economic theory typically predicts that a lump-sum grant will have the same impact on government expenditures as an increase in income. However, empirical studies consistently find that government spending is stimulated far more by grants than by income; that is, grants have a ‘flypaper effect’ because the money ‘sticks where it hits’. We conduct a laboratory experiment that controls for the most important factors that have been suggested in explaining the existence of the flypaper effect. Our experimental design crosses four transfer delivery methods with three voting frameworks. We examine three payoff-equivalent transfer delivery methods, all relative to a fourth baseline treatment with no transfer: an increase in income, a subsidy (repayment) for expenditures on the public good, and a lump-sum grant. Our two alternative voting frameworks are voting over levels of expenditures and voting over changes with information on public good externalities, each relative to a third baseline treatment where voting is over changes from a default (reference) level of expenditures. We find robust evidence of a flypaper effect: both the subsidy and the lump-sum grant increase expenditures more than does an equivalent increase in income. Our results are largely consistent with, and explained by, theoretical models that rely upon behavioral economics.

How to Measure the Average Rate of Change?

This paper contributes to the theory of average rate of change (ARC) measurement. We use an axiomatic approach to generalize the conventional ARC measures (such as the difference quotient and the continuously compounded growth rate) in several directions: to outcome variables with arbitrary connected domains, to not necessarily time-shift invariant dependence on time, to more general (than an interval) time sets, to a path-dependent setting, and to a benchmark-based evaluation. We also revisit and generalize the relationship between the ARC measurement and intertemporal choice models.
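
For reference, the two conventional ARC measures generalized in the paper are, for an outcome variable x observed at times t0 < t1, the following standard textbook definitions (the notation here is mine, not necessarily the paper's):

```latex
% Conventional average-rate-of-change measures (standard definitions):
\[
  \underbrace{\frac{x(t_1) - x(t_0)}{t_1 - t_0}}_{\text{difference quotient}}
  \qquad \text{and} \qquad
  \underbrace{\frac{\ln x(t_1) - \ln x(t_0)}{t_1 - t_0}}_{\text{continuously compounded growth rate}} .
\]
```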

Give Me a Challenge or Give Me a Raise

I study the effect of task difficulty on workers’ effort. I find that task difficulty has an inverse-U effect on effort and that this effect is quantitatively large, especially when compared to the effect of conditional monetary rewards. Difficulty acts as a mediator of monetary rewards: conditional rewards are most effective at the intermediate or high levels of difficulty. The inverse-U pattern of effort response to difficulty is inconsistent with many popular models in the literature, including the Expected Utility models with the additively separable cost of effort. I propose an alternative mechanism for the observed behavior based on non-linear probability weighting. I structurally estimate the proposed model and find that it successfully captures the behavioral patterns observed in the data. I discuss the implications of my findings for the design of optimal incentive schemes for workers and for the models of effort provision.
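
As one concrete illustration of non-linear probability weighting, a standard one-parameter specification is the Prelec weighting function; this particular functional form is my assumption for illustration, not necessarily the one estimated in the paper.

```latex
% Prelec (1998) probability weighting function (illustrative):
% for 0 < alpha < 1 it overweights small probabilities and underweights
% large ones, producing the kind of non-linear response described above.
\[
  w(p) \;=\; \exp\!\bigl( -(-\ln p)^{\alpha} \bigr), \qquad 0 < p \le 1 .
\]
```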

Using Response Times to Measure Ability on a Cognitive Task

I show how using response times as a proxy for effort can address a long-standing issue of how to separate the effect of cognitive ability on performance from the effect of motivation. My method is based on a dynamic stochastic model of optimal effort choice in which ability and motivation are the structural parameters. I show how to estimate these parameters from the data on outcomes and response times in a cognitive task. In a laboratory experiment, I find that performance on a digit-symbol test is a noisy and biased measure of cognitive ability. Ranking subjects by their performance leads to an incorrect ranking by their ability in a substantial number of cases. These results suggest that interpreting performance on a cognitive task as ability may be misleading.

Experimental Methods: When and Why Contextual Instructions Are Important

An important methodological issue in experimental research is the extent to which one should use context-rich or abstract language in the instructions for an experiment. The traditional use of abstract context in experimental economics is commonly viewed as a way to achieve experimental control. However, there are some advantages to using context-framed instructions, such as “employer and worker” instead of “player 1 and player 2.” Meaningful context can enhance understanding of an environment and reduce confusion among participants, particularly when a task requires sophisticated reasoning, and hence may yield responses of better quality. In emotionally-charged research questions, such as pollution or bribes, contextual instructions may affect behavior in the experiment, but this effect may be appropriate as it relates to the research question. Our review of the evidence from the literature indicates that in the great majority of cases meaningful language is either useful or produces no change in behavior. Nevertheless, a few important considerations are worth keeping in mind when using rich context. Finally, we see the choice of context as being an expansion of the experimenter’s toolkit and a factor to consider in experimental design.

Benchmark-Based Evaluation of Portfolio Performance: A Characterization

Benchmarking is a universal practice in portfolio management and is well-studied in the optimal portfolio selection literature. This paper derives axiomatic foundations of the relative return, which underlies a benchmark-based evaluation of portfolio performance. We show that the existence of a benchmark naturally arises from a few basic axioms and is tightly linked to the economic theory. Our method relies on the use of both axiomatic and economic approaches to index number theory. We also analyze the problem of optimal portfolio selection under complete uncertainty about a future price system, where the objective function is the relative return.
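
As a simple sketch of the object being characterized (my notation and simplification, not the paper's formal setup), the relative return compares the gross return of the evaluated portfolio to that of the benchmark.

```latex
% Relative return of portfolio x against benchmark b between dates 0 and 1,
% where V(.) denotes portfolio value (illustrative notation):
\[
  RR \;=\; \frac{V_x(1) / V_x(0)}{V_b(1) / V_b(0)} .
\]
```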

A Theory of Average Growth Rate Indices

This paper develops an axiomatic theory of measuring the average growth rate (average rate of change) of an economic variable. The obtained structures generalize the conventional measures of the average rate of growth (such as the difference quotient and the continuously compounded growth rate) to an arbitrary domain of the underlying variable and comprise various models of growth. These structures can be described with the help of intertemporal choice theory by means of parametric families of time preference relations on the “prize-time” space with a parameter representing the subjective discount rate.
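
As an illustrative member of such a parametric family (a textbook specification of my choosing, not necessarily the paper's), exponential discounting with subjective discount rate rho orders "prize-time" pairs as follows.

```latex
% Exponential time preference on prize-time pairs (x, t) with discount
% rate rho (illustrative example of one family member):
\[
  (x, t) \succsim (y, s)
  \quad \Longleftrightarrow \quad
  e^{-\rho t}\, x \;\ge\; e^{-\rho s}\, y .
\]
```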

Working Papers

A Taxonomy of AI Experiments

We introduce a taxonomy of artificial intelligence (AI) experiments. Our taxonomy produces four types of AI experiments: conceptual AI experiments, stylized AI experiments, quasi-natural AI experiments, and natural AI experiments. At the core of our taxonomy is the sophistication of AI used, which we evaluate using a simple and robust proxy test of whether AI is developed exclusively for a research study. We discuss the advantages, disadvantages, and best use cases for each type and illustrate the use of each type in various examples. We provide a guide on how to choose the type of AI experiment that best fits a given research question.

The Economics of Babysitting a Robot

I theoretically propose and experimentally test a novel behavioral channel, robot-babysitting, that can impose switching and learning costs on workers when automation is complementary. My theoretical model shows that one can identify these costs by varying the automatability of the production process and observing the resulting change in the demand for automation. The experimental results support the hypothesis that the costs due to robot babysitting are empirically relevant. Subjects with high cognitive flexibility and reflectivity are less susceptible to these costs. I quantify the costs by structurally estimating the model and find that the average learning costs are small, while the average switching costs are large in absolute value but negative. The textual analysis of subjects’ choice reasons reveals that many of them found the task-switching environment stimulating and non-monotonous. My results suggest that the net effect of complementary automation on workers is a priori ambiguous and that complementary automation will be most beneficial for workers with high cognitive flexibility and reflectivity.

The (Statistical) Power of Incentives

I study the optimal design of monetary incentives in experiments where incentives are a treatment variable. I introduce the Budget Minimization problem, in which a researcher chooses the level of incentives that allows her to detect a predicted treatment effect while minimizing her expected budget. The Budget Minimization problem builds upon power analysis and structural modeling. It extends the standard optimal design approach by explicitly making the budget a part of the objective function. I show theoretically that the problem has an interior solution under fairly mild conditions. I illustrate the applications of the Budget Minimization problem using existing experiments and offer a practical guide for implementing it. My approach adds to the experimental economists’ toolkit for optimal design; however, it also challenges some conventional design recommendations.
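
As a rough, stylized formulation (my own notation, not the paper's exact statement), the researcher chooses the incentive level m to minimize the expected budget subject to achieving a target statistical power against the treatment effect predicted at that incentive level.

```latex
% Stylized Budget Minimization problem (illustrative formulation):
% B(m) is the experiment's budget at incentive level m, delta(m) the
% predicted treatment effect, and 1 - beta the target power.
\[
  \min_{m \ge 0} \; \mathbb{E}\bigl[ B(m) \bigr]
  \quad \text{subject to} \quad
  \Pr\bigl( \text{reject } H_0 \mid \delta(m) \bigr) \;\ge\; 1 - \beta .
\]
```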

Deciphering the Noise: The Welfare Costs of Noisy Behavior

Theoretical work on stochastic choice mainly focuses on the sources of choice randomness, and less on its economic consequences. We close this gap by developing a method of extracting information about the costs of noise from structural estimates of preferences and choice randomness. Our method is based on interpreting the degree of noise in choices as a way to rationalize them by a given structural model. We consider risky binary choices made by a sample of the general Danish population in an artefactual field experiment. The estimated welfare costs are small in terms of everyday economic activity, but they are considerable in terms of the actual stakes of the choice environment. Higher welfare costs are associated with higher age, lower education, and certain employment statuses.
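
To fix ideas, one common way to model choice randomness of this kind (my illustrative specification, not necessarily the one estimated in the paper) is a Fechner-style error on expected-utility differences; the welfare cost of noise is then the expected utility forgone whenever the lower-valued lottery is chosen.

```latex
% Fechner/probit choice noise (illustrative): Phi is the standard normal
% CDF and mu > 0 indexes the degree of randomness in binary risky choice.
\[
  \Pr(A \text{ chosen over } B)
  \;=\;
  \Phi\!\left( \frac{EU(A) - EU(B)}{\mu} \right) .
\]
```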

Selection in the Lab: A Network Approach

We study the dynamics of the selection problem in economic experiments. We show that adding dynamics significantly complicates the effect of the selection problem on external validity and can explain some contradictory results in the literature. We model the dynamics of the selection problem using a network model of diffusion in which agents’ participation is driven by two channels: the direct channel of recruitment and the indirect channel of agent interaction. Using rich recruitment data from a large public university, we find that the patterns of participation and biases are consistent with the model. We find evidence of both short- and long-run selection biases between student types. Our empirical findings suggest that network effects play an important role in shaping the dynamics of the selection problem. We discuss the implications of our results for experimental methodology, design of experiments, and recruitment procedures.
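
As a stylized illustration of the two channels (my own toy specification, not the paper's model), an agent's participation probability can combine a direct recruitment channel with exposure to network neighbors who participated in the previous period.

```latex
% Toy two-channel participation rule: q is the direct recruitment
% probability, lambda the transmission probability per participating
% neighbor j in i's network N(i), and a_{j,t-1} indicates whether
% neighbor j participated in period t-1.
\[
  p_{i,t} \;=\; 1 - (1 - q) \prod_{j \in N(i)} \bigl( 1 - \lambda\, a_{j,t-1} \bigr) .
\]
```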

Courses

Introductory Econometrics

Undergraduate

Oct 2022 – Feb 2023 | University of Regensburg

This course introduces students to econometric methods with a focus on linear models. Topics include the classical linear regression model, the ordinary least squares estimator and its properties, multiple regression, inference and hypothesis testing, prediction, heteroskedasticity, and model diagnostics. Students will learn the tools needed to conduct their own empirical economic research. The course will guide students through the intuition behind the econometric methods and the formal derivations and proofs, as well as the practical tools to implement these methods in the R programming language.
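
A minimal example of the kind of exercise covered in the course, written in R on a built-in data set (the variables and specification are purely illustrative, not course material):

```r
# Estimate a multiple regression by OLS and compare classical with
# heteroskedasticity-robust standard errors.
library(sandwich)  # heteroskedasticity-consistent variance estimators
library(lmtest)    # coeftest() for inference with a user-supplied vcov

model <- lm(mpg ~ wt + hp, data = mtcars)            # OLS fit
summary(model)                                       # classical inference
coeftest(model, vcov = vcovHC(model, type = "HC1"))  # robust (HC1) inference
```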

📋 Syllabus 🗂️ Materials

Impact Evaluation Methods

Graduate

Apr 2022 – Present | University of Regensburg

The course introduces students to the modern framework for causal inference in economics and other social sciences. Students will learn about the concepts of research design and identification strategy and how to apply them to answer various research questions. The course starts by introducing the two workhorse models for understanding identification: the potential outcomes model and causal diagrams. We will then cover the most commonly used tools for identifying and estimating causal effects: regression, matching, instrumental variables, regression discontinuity, and difference-in-differences, as well as a few recent developments, if time permits. Mastering these tools will allow students to answer their own research questions in academic, public, and private-sector contexts. The course will guide students through the intuition behind the methodology and the formal derivations and proofs, as well as the practical tools to implement each method. The course content relies on a mix of textbooks, article readings, and practical exercises in R.
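
A minimal difference-in-differences sketch in R on simulated data (all numbers, including the "true" interaction effect of 1.5, are made up for illustration and are not course material):

```r
# Simulate a two-group, two-period setting and recover the treatment
# effect from the interaction term in a linear regression.
set.seed(1)
n     <- 1000
treat <- rbinom(n, 1, 0.5)   # treatment-group indicator
post  <- rbinom(n, 1, 0.5)   # post-period indicator
y     <- 2 + 0.5 * treat + 1.0 * post + 1.5 * treat * post + rnorm(n)

did <- lm(y ~ treat * post)  # coefficient on treat:post is the DiD estimate
summary(did)
```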

📋 Syllabus 🗂️ Materials 🎥 Videos

Topics in Behavioral Labor Economics

Undergraduate

Oct 2021 – Feb 2022 | University of Regensburg

The course serves as a reading seminar that provides students with an overview of the latest trends in behavioral labor economics (the intersection of behavioral/experimental economics and labor economics) as well as an opportunity for an in-depth study of a topic of interest. The course offers four topics that occupy a prominent place in the field: goal setting, loss aversion, gender differences, and peer effects. Each topic is represented by a few recent papers published in top-ranked journals. Students select a paper from this menu and critically evaluate it in an oral presentation and a written report.

📋 Syllabus 🗂️ Materials

Introduction to Data Analysis with STATA

Graduate and Undergraduate

Apr 2021 – Jul 2023 | University of Regensburg

The course provides students with basic skills in using the STATA software: installing it, navigating the interface, loading and managing data sets, writing do-files, calculating descriptive statistics, estimating simple and multiple regression models, creating graphs, and programming loops.

📋 Syllabus 🗂️ Materials

Advanced Microeconomics

Graduate

Oct 2020 – Present | University of Regensburg

The course provides students with a rigorous introduction to the fundamental concepts and models of microeconomic theory. The course consists of two parts. The first part, Mathematical Methods of Microeconomics, belongs to a two-week math boot camp at the beginning of the semester and introduces students to the mathematical methods that are essential for the analysis of microeconomic models. The second part introduces students to the central concepts of game theory, incentives and contract theory, and behavioral economics.

📋 Syllabus 🗂️ Materials 🎥 Videos

Mathematics for Economics

Graduate and Undergraduate

Aug 2016 – Dec 2016 | Georgia State University

This course provides an introduction to mathematical techniques that are frequently used in economic analysis. Topics covered include differential and integral calculus and matrix algebra. Emphasis is placed on the applications of mathematics to topics in economic theory.

📋 Syllabus

Principles of Microeconomics

Undergraduate

Jun 2015 – Aug 2015 | Georgia State University

This course provides a systematic study of human and firm behavior within the context of the production, distribution, and consumption of goods. The goal of the course is to provide an introduction to the economic way of thinking and to the economist’s view of the world. The course attempts to develop a student’s ability to think analytically about the economic forces at work in society. Students learn both a specific set of analytical tools and how to apply them to current policy issues.

📋 Syllabus

Apps & Code

AER Citations Downloader

A web app that downloads citations for all the articles in a selected issue of the American Economic Review. Source code.

Big Five

A web app that lets you take the Big Five personality test.

Posts

How to write in LaTeX

This is not a technical guide. It’s just a bunch of tips on how to make the writing experience in LaTeX a little more personal and …

Contact