Dataset Viewer
Auto-converted to Parquet
Columns:
  context: string (lengths 100 to 3.14k)
  A: string (lengths 100 to 2.56k)
  B: string (lengths 100 to 3.26k)
  C: string (lengths 100 to 3.55k)
  D: string (lengths 100 to 3.43k)
  label: string (4 classes)
We complement a literature that incorporates RKHS techniques into the estimation of other causal functions.
For experimental data with joint observations of the randomized actions and final rewards, previous works analyze heterogeneous effects [Nie and Wager, 2021], dose response curves [Singh et al., 2024], and time-varying dose response curves [Singh et al., 2021]. Another strand of this literature incorporates instrumenta...
Subsequent to this paper's circulation [Singh, 2022], [Zeng et al., 2024] provide complementary results for the long term dose response curve.
This generalizes the insight of [Singh et al., 2024], who study dose response curves when experimental actions and final rewards are jointly observed.
The difficulty in estimating long term dose response curves is the complex nonlinearity and heterogeneity in the link between the short term response curve $\mathbb{E}\{S^{(d)}\}$ and the long term response curve $\mathbb{E}\{Y^{(d)}\}$...
A
In general, a wide range of bargaining outcomes can be consistent with subgame perfect Nash equilibrium, but we view some of them as being unrealistic.
Let $x$ denote the equilibrium offer that a worker makes to the firm as the worker's own share, and $y$ denote the equilibrium offer that a firm proposes to the worker as the worker's share.
Denote the total surplus between the firm and the worker by $V^{v}$, namely as defined in (1), and let $\bar{V}^{v}$ denote the worker's continuat...
First, our model contains a continuum of infinitesimally small workers. This implies that firms can “throw their weight around” in potentially unrealistic ways because the surplus obtained through bargaining with any single worker is always zero. Suppose, for instance, a firm’s strategy were to propose a wage of 0 and ...
While we note that Nash presents a cooperative solution to the bargaining game and our model necessarily studies bargaining in the context of a non-cooperative game, it is possible to nest Nash-like bargaining into our model à la Horn and Wolinsky (1988). Specifically, this "Nash-in-Nash" solution takes the following s...
C
$H_{ii}=-\frac{1}{2c_{0}}-\frac{1}{2c_{i}}\,;\qquad H_{ij}=-\frac{1}{2c_{0}}.$
We show the quadratic term of the objective is negative definite, which implies the Hessian is negative definite.
is negative definite, meaning that the objective's critical points are global maxima. (In our case these are attainable by solving a system of linear equations.)
By logic identical to that used in Lemma 10.5, notice that $-H$ is a diagonally dominant Hermitian matrix with real, positive diagonal entries, which must be positive definite. Thus $H$ is negative definite.
Notice the Hessian is symmetric (by definition), real-valued, and its diagonal entries are negative and strictly diagonally dominant. It is established that a Hermitian matrix (in our case, a real symmetric matrix) that is strictly diagonally dominant (meaning each diagonal entry is greater in magnitude than the sum of the magnitudes of the off-diagonal entries in its ...
C
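As an illustration of the diagonal-dominance argument in these excerpts, here is a minimal numpy sketch; the cost values c0 and c are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

# Hypothetical cost parameters; c0 is shared, c[i] are individual.
c0 = 1.0
c = np.array([0.2, 0.3, 0.4])
n = len(c)

# Hessian from the excerpt: H_ii = -1/(2 c0) - 1/(2 c_i), H_ij = -1/(2 c0).
H = -np.full((n, n), 1.0 / (2.0 * c0))
H[np.diag_indices(n)] -= 1.0 / (2.0 * c)

# Strict diagonal dominance of -H with positive diagonal implies -H is
# positive definite, hence H is negative definite.
off_diag = np.abs(H).sum(axis=1) - np.abs(np.diag(H))
print("strictly diagonally dominant:", np.all(np.abs(np.diag(H)) > off_diag))
print("H negative definite:", np.linalg.eigvalsh(H).max() < 0)
```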
This incentive is valuable, considering that a kidney from a living donor typically lasts approximately 15 to 20 years.
program and were open to ideas for improvement. Naturally, members of the transplantation community had less experience organizing a system of exchanges than with the ethical and medical aspects of the resulting transplants.
list exchange by Ross and Woodle (2000), DD chains by Roth et al. (2004), the voucher program by Veale et al. (2017), and the unpaired kidney exchange by Akbarpour et al. (2024).
Consequently, the welfare gains presented in Roth et al. (2004, 2005b, 2007); Ergin et al. (2020) received a favorable response from members of the transplantation community.
The ethical aspects of this policy have been discussed favorably by members of the Canadian transplantation community (Gill et al., 2017).
D
Since the utility expression (5) is separable in $P_{r}$ and $R_{c}$, two independent utility maximization problems can be stated:
Suen, 2000). Specifically, we conduct three comparative statics. First, we analyze the effect of the parameter $w_{p}$, which is the unit power price; second, the parameter $w_{w}$, which ...
We first derived the expression for the service demand by the users under the assumption of utility maximization, and then derived the expression for the service supply by the operator under the assumption of profit maximization.
The solution to each utility maximization problem yields a demand function, which relates price and optimum quantity. We are interested in interior maxima, so we seek solutions to the unconstrained first-order conditions (FOCs).
We assume that the user takes prices $p_{1}$ and $p_{2}$ as given, and therefore the decision on how much of commodities $P_{r}$ and...
C
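Since utility expression (5) is not reproduced in these excerpts, the following sketch assumes an illustrative separable log-utility to show how the two independent FOCs each yield a demand function.

```python
import sympy as sp

# Hypothetical separable utility u = a*log(Pr) + b*log(Rc) - p1*Pr - p2*Rc;
# the paper's expression (5) is not shown here, so this is illustrative only.
Pr, Rc, a, b, p1, p2 = sp.symbols("Pr Rc a b p1 p2", positive=True)
u = a * sp.log(Pr) + b * sp.log(Rc) - p1 * Pr - p2 * Rc

# Separability: each FOC involves only one commodity, giving two
# independent maximization problems and two demand functions.
demand_Pr = sp.solve(sp.diff(u, Pr), Pr)[0]   # a/p1
demand_Rc = sp.solve(sp.diff(u, Rc), Rc)[0]   # b/p2
print(demand_Pr, demand_Rc)
```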
The dependence of the bitcoin exchange rate on energy consumption has been discussed in [Menati et al., 2023], which motivated us to perform this analysis. First, we only have access to historical daily Bitcoin exchange rate data, making it difficult to compare it against hourly cryptocurrency miners’ electricity consu...
As shown in Fig. 1 (which is corroborated by slide 3 of [ERCOT, 2024]), the Texas electric grid is facing rapid load growth driven by cryptocurrency-mining data centers. The Electric Reliability Council of Texas (ERCOT)—the market operator in charge of the largest part of the Texas electricity grid—allows both gene...
To compute the overall accuracy of the model, we need to compare how much of the variability is explained using correlation analysis alone versus the additional use of an autoregressive model. The mean squared error (MSE) and mean absolute percentage error (MAPE) of the correlation analysis-only model are 25.10 and 3.2...
The scatter plots of the RSI of the Bitcoin exchange rate over the 7-, 14-, and 21-day windows and the daily energy consumption of crypto-mining firms, depicted in Fig. 4, show p-values for the correlation coefficients of $\geq 0.05$ in all three cases. This suggests that, given the panel data concerned, cryptocurrency m...
The dependence of the bitcoin exchange rate on energy consumption has been discussed in [Menati et al., 2023], which motivated us to perform this analysis. First, we only have access to historical daily Bitcoin exchange rate data, making it difficult to compare it against hourly cryptocurrency miners’ electricity consu...
C
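A minimal sketch of the analysis described above, computing a rolling RSI and the Pearson correlation p-value against a consumption series; both series below are synthetic stand-ins for the paper's data.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical daily series: btc = Bitcoin closing price, load = miners' MWh.
rng = np.random.default_rng(0)
btc = pd.Series(30000 + rng.normal(0, 300, 365).cumsum())
load = pd.Series(900 + rng.normal(0, 25, 365))

def rsi(price: pd.Series, window: int) -> pd.Series:
    """Relative Strength Index over a rolling window."""
    delta = price.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100 - 100 / (1 + gain / loss)

for w in (7, 14, 21):
    r = rsi(btc, w)
    mask = r.notna()
    corr, p = stats.pearsonr(r[mask], load[mask])
    print(f"window={w}: r={corr:.3f}, p={p:.3f}")  # p >= 0.05 => no evidence
```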
Leveraging response times for preference learning presents notable challenges. Psychological research has extensively studied the relationship between human choices and response times [17, 19] using complex models like Drift-Diffusion Models [51] and Race Models [66, 12]. While these models align with both behavioral a...
Figure 3(c) shows that the choice-decision-time estimator consistently outperforms the choice-only estimators under both the transductive and weak-preference designs, particularly for strong preferences. This suggests that for queries with strong preferences, decision times complement choices and improve estimation, co...
To address these challenges, we propose a computationally efficient method for estimating linear human utility functions from both choices and response times, grounded in the difference-based EZ diffusion model [67, 8]. Our method leverages response times to transform binary choices into richer continuous signals, fram...
Our linear-regression-based estimator integrates seamlessly into algorithms for preference-based bandits with linear human utility functions [3, 31], enabling interactive learning systems to leverage response times for faster learning. We specifically integrated our estimator into the Generalized Successive Elimination...
This work is the first to leverage human response times to improve fixed-budget best-arm identification in preference-based linear bandits. We proposed a utility estimator that combines choices and response times. Both theoretical and empirical analyses show that response times provide complementary information about ...
B
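A rough sketch of the idea of turning choices and response times into a continuous drift signal. It assumes a simplified drift-diffusion identity (E[X_T] = v·E[T] with symmetric barriers) rather than the excerpt's exact difference-based EZ estimator; all names and values are illustrative.

```python
import numpy as np

# With barriers +/- a, a DDM satisfies E[X_T] = v * E[T], so the drift
# (utility difference) can be estimated as a*(2*p_hat - 1)/mean RT.
rng = np.random.default_rng(1)
d, n_queries, n_rep = 3, 200, 20
theta = np.array([1.0, -0.5, 0.3])            # true linear utility weights
X = rng.normal(size=(n_queries, d))           # feature differences per query
drift = X @ theta
a = 1.0
p = 1 / (1 + np.exp(-2 * a * drift))          # DDM choice probabilities
ET = np.where(np.abs(drift) > 1e-8, (a / drift) * np.tanh(a * drift), a**2)

choices = rng.binomial(n_rep, p) / n_rep      # empirical choice rates
rt = ET * rng.lognormal(0, 0.05, n_queries)   # noisy mean response times
v_hat = a * (2 * choices - 1) / rt            # continuous signal per query

# Linear regression of the drift signal on features recovers theta.
theta_hat, *_ = np.linalg.lstsq(X, v_hat, rcond=None)
print(theta, theta_hat.round(2))
```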
Nonetheless, as hinted at in the introduction, it is important to note that one does not need to accept Dworkin’s definition of paternalism to recognize the profound role that Chooser knowledge plays in policy.
Here, a CA is neutral and reacts only to informational disadvantages: if the pedestrian knew what he was doing, it would not be justifiable to intervene. The pedestrian lacks knowledge and warning him is not possible; ipso facto, intervention is justified. In the language of our formal model, the pedestrian makes a mist...
Second, if it is certain that the Chooser obtains his choice after information is provided, the CA may help to effectuate the Chooser’s preferences by providing information. In this sense, the provision may activate a utilitarian system in CAs, where they may disagree with what the Chooser does, but where they keep him...
CAs can elect to let the Chooser make his own choice, or to impose one option on him.[2] We follow the convention that the Choice Architect is female and the Chooser male.
If either a public officer or any one else saw a person attempting to cross a bridge which had been ascertained to be unsafe, and there were no time to warn him of his danger, they might seize him and turn him back without any real infringement of his liberty; for liberty consists in doing what one desires, and he does...
D
We simplify the original non-convex constrained optimization problem into one that can be solved using numerical optimization algorithms.
In the presence of private information, optimal contract design with models as the rewards poses unique challenges that distinguish it from its economic counterparts. For one, unlike money, models are a non-rivalrous and non-exclusive good, and can be replicated and offered to the participants at a nominal cost if not ...
When data costs are private information of the parties, this could have serious implications for the CML scheme without a contract. As we demonstrate in Appendix B, a complete collaboration failure could occur where all parties contribute nothing to the scheme. In other cases, an equilibrium does not exist, making the learning...
We conduct a theoretical analysis of the constraints, delineating the properties optimally designed contracts should obey in both scenarios: when the coordinator can and cannot observe parties' contribution costs.
Prior to our work, there has been a line of research that resorts to contract theory to address the incentive issue in collaborative machine learning (Kang et al. 2019; Ding, Fang, and Huang 2020; Karimireddy, Guo, and Jordan 2022; Liu et al. 2023), but most of them focus on using money as the reward for the collaborat...
C
Isen (2014), for instance, estimates the effect of $Y_{j}$
We measure outcome $Y_{i}$ for a set of units $i=1,\dots,N$, which
multiple expressions for $Y_{i}$ of the same unit. One may also find
Isen (2014), for instance, estimates the effect of $Y_{j}$
effect of the average outcome of $i$'s neighbors on $Y_{i}$. See
D
One might want to include a graduate student on an expert panel but nonetheless treat that student differently from a Nobel laureate.
Community standards do not necessarily exist to serve a consequentialist goal.[25] While the reasonable person standard may exist to reduce the cost of accidents, this is not a universally accepted goal. In the context of obscenity, offense to community standards is often the justification (and not merely the test) fo...
These standards are used in cases where absolute standards of behavior are undesirable because the standard is hard to define or expected to change over time.[4] Community standards are also found outside of the common law. For example, the Internal Revenue Code exempts from the prepaid interest rule points paid on the ...
In the law of accidents, negligence is determined according to the standard of the reasonable person.[1] "We apply the standards which guide the great mass of mankind in determining what is proper conduct of an individual under all the circumstances and say that he was or was not justified in doing the act in question."...
This may also apply in select cases of community standards, such as a “reasonable doctor” standard that relies on the doctors’ expertise.
D
This assumption allows individual preferences to be correlated with budget sets, with stationarity over time identifying the individual-specific parameters.
We utilize individual-specific ridge regressions to allow for weak identification and debias the average ridge elasticity estimator.
This assumption allows individual preferences to be correlated with budget sets, with stationarity over time identifying the individual-specific parameters.
In this setting, individual-specific regressions are used to estimate preference parameters for each individual.
This assumption enables identification of averages of individual-specific parameters and counterfactual effects.
C
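A minimal sketch of individual-specific ridge regressions averaged into an elasticity estimate; the penalty lam and the data-generating process are illustrative assumptions, and the excerpt's debiasing step is not reproduced.

```python
import numpy as np

# Per-individual ridge regressions of log quantity on log price over T
# periods, then a simple average of the ridge elasticities.
rng = np.random.default_rng(2)
N, T, lam = 500, 12, 0.5
beta_i = -1.0 + 0.3 * rng.normal(size=N)          # heterogeneous elasticities

est = np.empty(N)
for i in range(N):
    logp = rng.normal(size=T)
    logq = beta_i[i] * logp + rng.normal(0, 0.5, T)
    X = np.column_stack([np.ones(T), logp])
    # Ridge: (X'X + lam*I)^{-1} X'y, individual by individual.
    b = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ logq)
    est[i] = b[1]

# The plain ridge average is shrunk toward zero, hence the debiasing step.
print(beta_i.mean().round(3), est.mean().round(3))
```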
Under deliberation, this hypothesis implies an increase in beliefs along with the norm level. It is easy to see why: If CAs generally believe that Choosers come closer to CAs’ norms, Choosers should be believed to open more boxes under deliberation if the CA has a high norm level, but fewer boxes if the CA has a low no...
Story 1 implies that norm levels moderate the effect of deliberation on beliefs, and that there is no such moderation of the effect on errors.
On the other hand, CAs could believe that deliberation causes fewer boxes to be opened independent of the norm level. The reduction in error may be driven by CA’s moderate norm levels and belief that Choosers open too many boxes.
Models 3 and 4 provide further evidence for story 2. While CAs generally forecast a reduction in error that is caused by deliberation, that reduction is moderated by the norm ($p=0.038$). Ceteris paribus, the higher a CA's injunctive norm, the lower the reduction in error through deliberatio...
Story 2 implies that norm levels do not moderate the effect of deliberation on beliefs, and that there is such moderation of the effect on errors.
A
Consider many profit-maximizing, single-product firms simultaneously setting prices, with arbitrary complementarities and substitutabilities across products. For instance, one firm’s product—e.g., a Samsung smartphone—may be a substitute to some products—e.g., Apple smartphones—and a complement to others—compatible acc...
We are interested in the nature of inefficiencies in such an environment and policies to respond to them. We consider these issues from the perspective of an authority that recognizes the possibility of inefficiency due to market power and can intervene through taxes and subsidies on firms’ sales. For concreteness, we ...
However, our main result can be extended once we allow the authority to use nonlinear interventions, i.e., to commit to a vector of functions specifying a payment to each producer $i$ as a function of all prices and quantities realized after the intervention. With this broader set of instruments, the authority ca...
What makes the problem challenging is that, once we broaden our perspective beyond one traditionally defined market (e.g., smartphones) and consider spillovers to a variety of other complements and substitutes, there is a kind of curse of dimensionality. In a marketplace with numerous and changing goods, the demand sys...
Teng, 1996). Once again, the spectral statistics used in these techniques tend to pick up interpretable “broad” features of the products, rather than idiosyncrasies specific to individual products. As a result, the policies our interventions recommend—which project all variation onto these eigenvectors—will often be cl...
A
We analyze trial data from waves 1-5, that is, data collected from one to five years after random assignment. Sample sizes decrease over time since participants who enrolled in the trial later contribute fewer observations ahead of the last follow-up date in December 2018. Roughly 4300 participants contribute to the an...
The left-hand side of this expression is the average effect of revascularization exposure, $Y_{1}(1)-Y_{1}(0)$, on wave-1 compliers. This is obtained by dividing the wave-1...
2SLS (IV) estimates of any-revascularization effects are larger and more stable over time than ITT effects. For both SAQ outcomes, the latter fall from around four in the first wave to under two in wave 5. This is consistent with the fact that the ITT estimand is diluted by a declining first stage (reported in column 3...
In practice, many ISCHEMIA participants received a treatment different from that assigned. Revascularization rates in the control group, reported in the first column of Table 1, increase from 12% in wave 1 to 29% in wave 5. At the same time, only 80% of those assigned invasive were revascularized initially. Revasculari...
These findings need not be the last word from ISCHEMIA, however, as analyses of ISCHEMIA trial data face important methodological challenges. Problems arise from the fact that the trial saw high rates of treatment group non-adherence (i.e., some assigned to invasive treatment were not revascularized) as well as control...
C
$\ldots\ \mathbf{w}_{i}:\mathbb{R}_{+}\to\mathbf{U}_{i}$ bounded $\forall\, i=1,\dots,N\big\}$, where $\mathbf{L}_{i}:\mathbb{R}_{+}\to\mathcal{L}(\mathbf{X},\mathbf{U}_{i})$ and $\mathbf{w}_{i}$ ...
We illustrate the ITM in the context of a dynamic analytical integrated assessment model. Following the general approach in Nordhaus and Boyer (2003) and Nordhaus (2018), the model accounts for climate damages created by economic activity.[4] See, for example, Weyant (2017) and the references therein for a review ...
We illustrated the ITM in the context of climate economics, by developing a two-country extension of the integrated assessment model in Golosov et al. (2014). We characterized emissions, consumption, transfers, and welfare by computing the Nash equilibria of the associated dynamic game and comparing them to efficient...
The analytical advantages of the ITM over standard optimization methods can be exploited in a variety of dynamic models in economics and beyond. In this section, we will illustrate the method in the context of an application to climate economics. Provided that the conditions for the applicability of the ITM hold, a var...
In what follows, we will illustrate the ITM method in the context of the analytical integrated assessment model in section 3. We first consider two normative benchmarks by characterizing the solutions to two social planner problems. We then study the Nash equilibria of a suitable non-cooperative dynamic game. We will d...
C
With 36-month-long sequences, the 1D CNN yields an overall best relative RMSE of $0.926$ against the DFM in the nowcasting scenario.
Compared with the DFM, the 1D CNN generates almost identically accurate nowcasts for all of the training configurations.
Table 2: Accuracy of nowcasts generated by the naive constant growth model and the benchmark DFM: RMSE and MAE evaluation.
In general, the 1D CNN yields strong nowcasting performance across all the training configurations, beating the benchmark model at the 5% significance level in both RMSE and MAE evaluation.
While the Elman network slightly lags behind the best MLP and 1D CNN configurations in terms of nowcasting accuracy, it still manages to beat both benchmark models in some training configurations.
C
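A minimal Keras sketch of a 1D CNN nowcaster over 36-month input sequences, as referenced above; the layer sizes and kernel width are illustrative assumptions, not the paper's architecture.

```python
import tensorflow as tf  # assumption: TensorFlow/Keras is available

model = tf.keras.Sequential([
    tf.keras.Input(shape=(36, 1)),                    # 36 monthly observations
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),                         # nowcast of target growth
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```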
$\mu\in\mathbb{H}^{2}_{\rm BMO}$ is a very strong assumption.
the $|\theta_{t}|^{2}$-term is contained in the driver.
In fact, typical Gaussian processes such as the Ornstein-Uhlenbeck process are not contained in this class.
the family of processes $\{R^{p},\,p\in\mathcal{A}^{1}\}$ defined by (3.3) with the process $Y$ as the unique bounded so...
is not guaranteed in general, we cannot apply the extensions of the qg-BSDE theory such as [3, 4, 28], either.
B
[Plot: number of slots vs. last FIFA World Cup considered (1994–2022); update frequency: Round, seeded set S1.]
[Plot: number of slots vs. last FIFA World Cup considered (1994–2022); update frequency: Stage, seeded set S0.]
[Plot: number of slots vs. last FIFA World Cup considered (1994–2022); update frequency: Stage, seeded set S1.]
[Plot: number of slots vs. last FIFA World Cup considered (1994–2022); update frequency: Round, seeded set S1.]
[Plot: number of slots vs. last FIFA World Cup considered (1994–2022); update frequency: Round, seeded set S0.]
B
Step 3: Set $R^{1\#}_{n}=(R^{1}\setminus\{k_{n}\})\cup\{j_{n}\}$...
Note that I would only have to evaluate the objective function fewer than $K\times J$ times under the MIA instead of for every possible $|\mathcal{R}|$ ROLs.
The second column uses the interaction with the applicant's priority score, $\hat{\beta}_{j}$, as the dependent variable. Here, the estimates generally have opposite signs, with significant results for distance and the total n...
Denoting the vector of moments as $h_{i}(\theta)$, I minimize the following objective function:
Note that Step 2 requires me to evaluate the objective function fewer than $K\times J$ times.
D
More detailed information on saving rates depending on income in Germany can be found in e.g. [20], confirming our linear interpolation as an appropriate approximation. Of course, there is much more refined research on the distribution of income like [15], most of which include capital returns in their data and are the...
In official macroeconomic accounting, the labor share is defined as the part of the national income allocated to wages, which has fluctuated in Germany between 64% and 72% since reunification in 1991 [11]. In our model, however, the parameter $r$ rather represents the part of the wealth increase that is due to saving...
In our model, the distribution of wages is an exogenous parameter, which is invariant in time and is represented by the normalized vector $\gamma$. Wages have a significant impact on the wealth of poor agents, whereas the wealth of the rich is mainly determined by capital returns. As this work focuses on model...
First, we pick up the main idea of Pólya's urn model and distribute the wealth created in an economy step by step among a fixed number $A\in\{2,3,\ldots\}$ of agents. But in this paper, we distinguish two different mechanisms of assigning an abstract unit of additional wealth to an ag...
Figure 11 (a) presents the evolution of the wealth share of the top 1% in our simulations and in reality, all of which reveal a moderate increase. The small differences in the total level of that share have already been discussed above. It should also be noted that the development is much smoother in our simulations th...
A
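A small simulation sketch of the two-mechanism Pólya-urn idea described above; the values of A, r, and gamma are illustrative assumptions, not calibrated to the paper.

```python
import numpy as np

# A unit of new wealth goes to an agent either in proportion to current
# wealth (capital returns) or according to fixed weights gamma (wages);
# r stands in for the share of wealth growth assigned by the capital channel.
rng = np.random.default_rng(3)
A, steps, r = 100, 50_000, 0.7
gamma = np.full(A, 1.0 / A)             # normalized wage distribution
wealth = np.ones(A)

for _ in range(steps):
    if rng.random() < r:                # capital mechanism: preferential
        i = rng.choice(A, p=wealth / wealth.sum())
    else:                               # wage mechanism: exogenous weights
        i = rng.choice(A, p=gamma)
    wealth[i] += 1.0

share = np.sort(wealth)[::-1]
print("top 1% wealth share:", share[: max(1, A // 100)].sum() / wealth.sum())
```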
We prove that the designs in a general family of nonlinear rerandomization methods are all asymptotically equivalent to standard rerandomization based on a difference of covariate means, with an implicit choice of covariates and balance criterion, which we characterize.
We propose a novel minimax approach that accepts or rejects based on the value of a convex penalty function, tailored to prior information provided by the researcher or estimated from a pilot.
Next, we discuss a convenient rerandomization scheme that allows the researcher to select the approximate number of draws until acceptance.
For a belief set $B\subseteq\mathbb{R}^{d_{h}}$ specified by the researcher, consider rerandomizing until the worst-case imbalance consistent with $B$ is small eno...
We propose a novel minimax scheme that allows the researcher to specify prior information about the relationship between covariates and outcomes, then rerandomizes until the worst case covariate imbalance consistent with this prior is small.
D
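A minimal sketch of rerandomization until acceptance, using the standard Mahalanobis imbalance in covariate means; the threshold and covariates are assumptions, and the excerpt's minimax, belief-set criterion is not implemented here.

```python
import numpy as np

# Redraw a balanced assignment until the Mahalanobis imbalance in
# covariate means falls below a threshold.
rng = np.random.default_rng(4)
n, d, threshold = 200, 5, 2.0
X = rng.normal(size=(n, d))
Sinv = np.linalg.inv(np.cov(X, rowvar=False))

draws = 0
while True:
    draws += 1
    w = rng.permutation(np.repeat([0, 1], n // 2))    # n/2 treated units
    diff = X[w == 1].mean(axis=0) - X[w == 0].mean(axis=0)
    M = (n / 4) * diff @ Sinv @ diff                  # Mahalanobis imbalance
    if M <= threshold:
        break

print(f"accepted after {draws} draws, M = {M:.3f}")
```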
RNNs introduce recurrent connections, allowing information to persist and be shared across different time steps in sequential data, where temporal dependencies are crucial. These models are able to handle variable-length sequences, allowing them to adapt dynamically to different lengths of input data. Their architectur...
Memory cells with input, forget, and output gates regulate the flow of information, allowing the model to selectively remember or forget information. They are trained using backpropagation through time (BPTT), adjusting the weights to minimize the error in the predictions.
LSTMs are a type of recurrent neural network. They are designed to address the vanishing gradient problem in traditional RNNs, allowing them to better capture time dependencies in sequential data, and they achieve this through the use of gated units called memory cells tha...
RNNs introduce recurrent connections, allowing information to persist and be shared across different time steps in sequential data, where temporal dependencies are crucial. These models are able to handle variable-length sequences, allowing them to adapt dynamically to different lengths of input data. Their architectur...
and we can see a significant improvement in all 3 metrics by adding the means as features to the dataset. This improvement indicates that the LSTM model effectively leveraged the additional information provided by the means, allowing it to better capture the underlying time dependencies present in the data. Visually, w...
B
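For concreteness, a minimal Keras LSTM matching the description above of gated memory cells trained by backpropagation through time; the input shape and layer sizes are illustrative.

```python
import tensorflow as tf  # assumption: TensorFlow/Keras is available

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 8)),   # variable-length sequences, 8 features
    tf.keras.layers.LSTM(32),          # input/forget/output gates inside
    tf.keras.layers.Dense(1),
])
# Standard fit() training performs backpropagation through time (BPTT).
model.compile(optimizer="adam", loss="mse")
model.summary()
```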
Income inequality, on the other hand, rises between 1997 and 2003, followed by a gradual decline until 2016.
In the group mainly made up of advanced economies, with low inequality and capital shares, the transmission coefficient is found to be stable over the last four decades, while both capital and labor income inequality rise steadily until 2008.
According to Panel (b), labor income inequality increased significantly, around 5 pp, from the beginning of the period until 2008, followed by a gradual decrease toward the end of the sample (around 1.8 pp).
Overall, the effect of the capital share on income inequality remains stable in Group 1, rises in Group 2, and declines in Group 3 over the observed period.
Group 3 shows a stable picture of income inequality until the late 1980s while the capital share fluctuates around 0.3.
D
A market is defined by a store, with the consumers’ choice set comprising all products available in that store. The market share of a product is calculated as the quantity sold divided by the population size within the local area surrounding the store, as provided by the IRI dataset. In total, there are 5,927 unique ma...
Table 3 presents summary statistics for several randomly selected products. The "No. of Market" column indicates the number of markets in which each product is available, highlighting substantial variations in consumers' choice sets. The "Market Share (%)" and "Price" columns, as well as the marketing mix columns, report the averag...
A market is defined by a store, with the consumers’ choice set comprising all products available in that store. The market share of a product is calculated as the quantity sold divided by the population size within the local area surrounding the store, as provided by the IRI dataset. In total, there are 5,927 unique ma...
Note: This table presents summary statistics for a selection of randomly chosen products in the yogurt category. The first number in each cell represents the mean, while the second number in parentheses indicates the standard deviation across markets. If the standard deviation is zero, it is omitted.
We now turn to discuss the latent sparsity structure of the market-product shocks $\eta_{jt}$ uncovered by our procedure, as summarized in Figure 1. The solid lines in Figure 1(a) represent the posterior means of $\phi_{t}$...
A
In our initial analysis, we have focused on a static one-period model to evaluate the impact of transition investment on enterprise profitability. This framework provided basic but useful insights into the immediate effects of various variables on a company’s profit. However, real-world scenarios are dynamic in nature ...
The design of this production function originates from a Cobb-Douglas production function, a widely used model in microeconomics that represents the relationship between inputs (typically capital and labor) and output in a production process. The general form of the Cobb-Douglas function is $Q=A\cdot K^{\beta}\cdot L^{1-\beta}$...
By analyzing these different scenarios, we can observe how different levels of investment and urgency in transitioning to low-carbon technologies affect key variables of the framework over time. This analysis helps us understand the trade-offs between immediate costs and long-term benefits, guiding enterprises in makin...
This formulation captures the dynamic nature of transition investments, where $p$ is the selling price per unit (assume $p$ is a constant), $c_{t}$ is the production cost per unit at time $t$, $\alpha$ is the tran...
To achieve this, we revise the profit function to account for time dependency, reflecting the changes in key variables in the framework over multiple different periods. The time-dependent profit function is defined as follows:
D
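A small numerical sketch of a time-dependent profit stream built on the Cobb-Douglas output above. The cost dynamics c_t = c0 * exp(-alpha * investment * t) and all parameter values are assumptions for display, not the paper's specification.

```python
import numpy as np

# Revenue p*Q per period minus unit production costs that decline with
# cumulative transition investment, net of the per-period investment outlay.
p, A, beta = 10.0, 1.5, 0.4
K, L = 100.0, 50.0
Q = A * K**beta * L ** (1 - beta)          # Cobb-Douglas output per period

alpha, c0, invest = 0.05, 6.0, 2.0         # illustrative transition parameters
t = np.arange(20)
c_t = c0 * np.exp(-alpha * invest * t)     # unit cost falls as transition proceeds
profit_t = p * Q - c_t * Q - invest        # per-period profit net of investment
print(profit_t.round(1))
```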
$X^{1}_{t-\tau}\;\overset{?}{\longrightarrow}\;X^{2}_{t}\;\big|\;X^{2}_{t-1:t-\tau},\,X^{3}_{t:t-\tau}$
Bivariate Granger Causality (BVGC) tests the association between two variables, say $X^{1}$ and $X^{2}$, by comparing the variance between the predictions and the target resulting from usin...
Proposition 3: Combining[2] (the combination operation uses the logical AND, symbolized as $\land$) propositions 1 and 2 enforces all RCCPs and reveals only true causal links.
Following these illustrations, we can see variations in the results obtained in both GC cases. BVGC and MVGC did brilliantly well, complementing each other's shortcomings. In the case of the true causal links, both BVGC and MVGC fulfil both propositions 1 and 2, hence the discovery of true causal links (in colour...
From the above analogies, a combination of dependencies from both propositions 1 and 2 must hold in order to infer a causal link between two variables. The propositions are as follows:
D
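A minimal bivariate Granger-causality sketch with statsmodels; the simulated series are stand-ins for the excerpt's data, and note the library convention that the second column is tested as a cause of the first.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Does x1 help predict x2? Simulate x2 driven by lagged x1.
rng = np.random.default_rng(5)
T = 500
x1 = rng.normal(size=T)
x2 = np.zeros(T)
for t in range(1, T):
    x2[t] = 0.5 * x2[t - 1] + 0.4 * x1[t - 1] + rng.normal(scale=0.5)

data = pd.DataFrame({"x2": x2, "x1": x1})
res = grangercausalitytests(data[["x2", "x1"]], maxlag=2, verbose=False)
print(res[1][0]["ssr_ftest"])  # (F statistic, p-value, df_denom, df_num)
```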
If the bad events defined in ${\sf Hyb}_{2}$ do not happen, then
${\sf AoK}.{\sf crs}\overset{\$}{\leftarrow}{\sf AoK}.{\sf Gen}(1^{\lambda})$
the extracted witness — here we assume that $v_{0}=\bot$ if the seller
separate ${\sf AoK}$ instance with the platform using ${\sf AoK}.{\sf crs}$,
1) the witness extracted by the ${\sf AoK}$ extractor is valid, i.e., the conditions checked
D
The application of mathematics by Hong and Page in their theorem presents its findings in a manner that may hide the straightforward nature of its conclusions. Through the use of mathematical formalism, aspects that might be considered basic are portrayed with a level of complexity that suggests deeper insights, potent...
To the extent that cognitive diversity is a key ingredient of collective intelligence, and specifically one that matters more than average individual ability, the more inclusive the deliberation process is, the smarter the solutions resulting from it should be, overall.
This is mathematically incorrect: the effect is undetermined, it is not of the same magnitude, and it is not necessarily a reduction, as explained in Section C.2 (see Error 1) and Section C.3. It is a significant mathematical error to assume that the terms in Theorem C.1 can be changed while the other ...
Perhaps because The Difference takes time to digest, eventually, accurate readings won out. Reviewers recognized that The Difference explores the pragmatic, bottom-line contributions of diversity. It does so using models and logic, not metaphor. The book’s claims that “collective ability equals individual ability plus
Moreover, it leads her to compare the Hong-Page theorem, which is trivial (Section 4.1), with a genuinely profound and counterintuitive theorem, such as Arrow’s impossibility theorem. However, she believes the difference in treatment between the theorems is based on the difference in their conclusions:
C
In this study, 24 weeks spanning three scenario years were analyzed: 1989 (weeks 04, 10, 11, 17, 20, 31, 40, 52), 1995 (weeks 02, 12, 16, 21, 27, 36, 38, 49) and 2009 (weeks 04, 08, 11, 15, 16, 21, 31, 48).
Next, we identify the prices corresponding to the topological nodes. We do this based on the IDs of these nodes and the IDs of the 6,090 nodes included in the file containing LMPs.
For each day in these weeks, nodal prices for every 2-hour interval are available. The file containing the prices for the German-Luxembourgish bidding zone includes 6,090 unique node IDs and 14,170,464 nodal prices. Moreover, for 1,893,024 ($\approx 13\%$) nodal prices, which correspond to 939 di...
We observe that these prices are similar to the average yearly prices shown in Table 13, in which clusterings were computed without clipping prices.
The density plots from Figure 5 illustrate the distribution of average nodal prices in the two zones of the DE2 (k-means) configuration, with the average zonal prices highlighted for comparison. The average nodal prices were computed considering all prices from the LMP Study (see Section 4). We observe that the distrib...
B
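A minimal sketch of grouping nodes into price zones by clustering their price series with k-means; the synthetic LMP matrix below stands in for the 6,090-node file described above.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one node's time series of locational marginal prices (LMPs);
# two latent price zones are planted for illustration.
rng = np.random.default_rng(6)
n_nodes, n_hours = 300, 24 * 7
base = rng.choice([40.0, 55.0], size=n_nodes)
prices = base[:, None] + rng.normal(0, 3, (n_nodes, n_hours))

zones = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(prices)
for z in range(2):
    print(f"zone {z}: {np.sum(zones == z)} nodes, "
          f"avg price {prices[zones == z].mean():.2f}")
```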
$E[D_{i,1}-D_{i,0}\mid E_{i}=1]-E[D_{i,1}-D_{i,0}\mid E_{i}=0]>0.$
The theorem below shows that if Assumptions 1-7 hold, the Wald-DID estimand captures the LATET in period 1.
Assumption 6 is required for the DID-IV identification strategy because, in the numerator of the Wald-DID estimand, we should isolate the effect of the instrument on the outcome through treatment from that of time in an exposed group. The underlying idea for capturing the former effect parallels that in DID designs: we pu...
Online Appendix Section A.1 shows that if we have a non-binary, ordered treatment, the Wald-DID estimand captures the ACRT under the same assumptions in Theorem 1.
If Assumptions 1-7 hold, the Wald-DID estimand $w_{DID}$ is equal to the LATET in period $t=1$; that is,
A
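A minimal sketch of the Wald-DID estimand as the reduced-form DID in the outcome divided by the first-stage DID in treatment; the column names and toy data-generating process are illustrative.

```python
import numpy as np
import pandas as pd

# Binary exposure group E, periods t in {0, 1}, treatment D, outcome Y.
def wald_did(df: pd.DataFrame) -> float:
    def did(col):
        g = df.groupby(["E", "t"])[col].mean()
        return (g.loc[(1, 1)] - g.loc[(1, 0)]) - (g.loc[(0, 1)] - g.loc[(0, 0)])
    return did("Y") / did("D")

# Toy panel: exposure raises take-up in period 1; the true effect is 2.
rng = np.random.default_rng(7)
n = 4000
E = rng.binomial(1, 0.5, n)
t = rng.binomial(1, 0.5, n)
D = rng.binomial(1, 0.1 + 0.4 * E * t)
Y = 1.0 + 0.3 * t + 2.0 * D + rng.normal(size=n)
print(wald_did(pd.DataFrame({"E": E, "t": t, "D": D, "Y": Y})))  # approx. 2
```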
$k_{1}\in\{0.005,0.01,0.02\},\quad k_{2}\in\{0.5,1,2\},\quad k_{3}\in\{0.02,0.05,0.1\},\quad k_{4}\in\{0.01,0.02,0.05\}.$
We define the level of traffic congestion, denoted as $C(t)$, and the level of AI adoption, denoted as $A(t)$, both as functions of time. The system of differential equations that governs the dynamics of $C(t)$ and $A(t)$...
The system of differential equations governing the dynamics of traffic congestion $C(t)$ and AI adoption $A(t)$ was solved for each combination of parameter values:
The system of differential equations is solved numerically using Python's SciPy library. The time evolution of $C(t)$ and $A(t)$ under each scenario is simulated over a 100-time-unit period. The initial conditions for traffic congestion and AI adoption are set to ...
The solutions for $C(t)$ and $A(t)$ were computed numerically using a time range $t\in[0,100]$ with 1000 evenly spaced points. The results were analyzed to observe how changes in parameter values influenced the dynamics of $C(t)$...
B
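A sketch of the numerical setup described above using SciPy's solve_ivp over t in [0, 100] with 1000 points. The right-hand side below is an illustrative guess, since the excerpts give only the parameter grids k1..k4, not the equations themselves.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3, k4 = 0.01, 1.0, 0.05, 0.02   # one grid point from the excerpt

def rhs(t, y):
    C, A = y
    dC = k1 * C * (1 - C) - k2 * A * C   # congestion eased by AI adoption
    dA = k3 * C - k4 * A                 # adoption spurred by congestion
    return [dC, dA]

t_eval = np.linspace(0, 100, 1000)       # 1000 evenly spaced points
sol = solve_ivp(rhs, (0, 100), [0.5, 0.1], t_eval=t_eval)
print(sol.y[:, -1])                      # final congestion and adoption levels
```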
The growing concern about the economic and ecological impacts of anthropogenic climate change has rapidly increased the need for policies aimed at reducing CO2 emissions. Currently, national emissions of CO2 and other GHGs are extensively regulated by the United Nations Framework Convention on Climate Change (UNFCCC). ...
The combination of frequencies in Equation 5 offers the advantage of considering all the information in a single regression. Our strategy is based on the Monte Carlo results of Foroni et al. (2015), which indicate that while distributed lag functions, such as the Almon lag functions, are effective for high-frequency ind...
In this paper, we introduced a panel nowcasting methodology to obtain high-frequency nowcasts of state-level per-capita energy consumption and CO2 emissions growth in the U.S. Our methodology extends the approach of Fosten and Nandi (2023b) by incorporating the state-level weekly economic conditions index of Baumeister...
Our nowcasting approach is implemented in two stages. In the first stage, we propose a panel MIDAS model to predict energy consumption growth, utilizing quarterly, monthly, and weekly economic predictors. A restricted Almon lag polynomial approximation of the weekly high-frequency component is employed, as outlined in ...
In this paper, we introduce a panel nowcasting methodology to simultaneously obtain high-frequency state-level nowcasts of annual energy consumption and CO2 emissions growth rate in the United States (U.S.). The set of economic predictors includes the quarterly real personal income from the Bureau of Economic Analysis ...
D
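A minimal sketch of the restricted (exponential) Almon lag polynomial used in MIDAS-type regressions to collapse many weekly lags into two parameters; the theta values and lag count are illustrative.

```python
import numpy as np

def almon_weights(theta1: float, theta2: float, n_lags: int) -> np.ndarray:
    """Exponential Almon lag weights, normalized to sum to one."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

w = almon_weights(0.05, -0.01, 13)           # 13 weekly lags in a quarter
x_weekly = np.random.default_rng(8).normal(size=13)
x_aggregated = w @ x_weekly                  # single regressor for the panel MIDAS
print(w.round(3), x_aggregated)
```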
In this paper, we addressed the limitations of traditional regression models in handling spatial data, particularly when the relationships between variables exhibit heterogeneity across different clusters. Our motivation is based on the need to account for spatial autocorrelation in regression models with spatial clust...
In this paper, we propose a solution to this still open issue in the clusterwise regression literature, that is, an extension of the spatially-clustered linear regression models called the spatially-clustered spatial autoregression models (SCSAR). We extend a classical spatial econometrics model, the spatial autoregres...
However, we notice that the use of spatial clustering in the clusterwise regression model enhances the presence of spatial autocorrelation in the within-cluster residuals. This violates the strong assumption of i.i.d. error terms in traditional clusterwise regression models and can lead to biased estimates of regression ...
Spatially clustered regression models are valuable tools in spatial analysis as they allow us to identify and account for similarities among neighboring units, which can improve the accuracy of the regression results. However, it is important to recognize that by grouping spatially contiguous units into the same cluste...
To address this gap, we proposed the Spatially-Clustered Spatial Autoregression (SCSAR) model. This model extends the classical spatial autoregressive model by allowing the regression coefficients to vary spatially according to a cluster-wise structure. We also discussed extensions to other spatial models, such as the ...
D
In our setup, in addition to the buyer and the seller, there is a broker who governs the entire trade, through whom the buyer and seller exclusively interact.
Formally, a posted-pricing mechanism in our setting consists of two fixed prices $(p,q)$. The broker simultaneously offers to buy the item from the seller at price $q$ and sell the item to the buyer at price $p$. Since the broker strategizes to maximize her profit, ...
The buyer and the seller exclusively trade through a broker, who tries to maximize her expected profit by constituting an arbitrage on the prices to the buyer and seller. We consider Bayesian mechanisms in which the broker knows $F$ and $G$.[3] In comparison, the literature often considers prior-indepe...
Consider bilateral trade without a broker[6] (alternatively, a broker who tries to maximize the gains-from-trade instead of her expected profit) and a posted-pricing mechanism with fixed price $p$.
The broker is not interested in maximizing the GFT but instead tries to maximize her own profit by buying from the seller at a lower price and selling to the buyer at a higher price.
D
It is noteworthy that during the preparation of this paper, NYC adopted the revealing policy for the 2022-2023 season. Initially, NYC refused to reveal lottery numbers to parents. However, following a parent-led campaign under the New York State’s Freedom of Information Law, NYC first agreed to reveal lotteries upon re...
Our models are built on the work of Abdulkadiroglu, Che and Yasuda (2011), who compare the Boston mechanism (BM) and DA under unconstrained preference lists and the covering policy. Differently, our study focuses on comparing the two lottery policies under constrained preference lists, given DA is employed. Our insight...
Our first model deviates from Abdulkadiroglu, Che and Yasuda (2011) by imposing a restriction on the number of schools that students may report. Therefore, students need to decide which schools to rank. The revealing policy effectively resolves uncertainties for students by essentially informing them of their attainabl...
Our second model is related to Akbarpour et al. (2022) who add unequal outside options to the model of Abdulkadiroglu, Che and Yasuda (2011) and similarly show that students with outside options gain an undue advantage in manipulable mechanisms. They consider complete preference lists and the covering policy, which all...
Our third model modifies the first one by assuming that every school prioritizes a subset of students based on neighborhood proximity, ranking them above others, while still resolving ties within the same group through a single lottery. This model addresses the complexities associated with nontrivial priorities in prac...
A
We evaluate the accuracy of the MF-VAR's point and density estimates and nowcasts against the subsequent outturns from the BEA using the root mean square forecast error (RMSFE) and the continuous ranked probability score (CRPS), respectively. Lower values of each of these metrics indicate improved accuracy.[18] In onl...
Our MF-VAR is fundamentally a very large one, involving over 50 equations (for each state) and a three-way frequency mismatch that changes over time (that is, state level GDP is initially annual, then quarterly, and then other variables are either quarterly or monthly). Accordingly, we use the horseshoe prior of ...
We also compare the accuracy of the density nowcasts and estimates from our MF-VAR model, (1), against those from a benchmark model. The benchmark model is a restricted case of our MF-VAR model that neither models cross-state linkages nor imposes the cross-sectional aggregation constraint, (6). Specifically, the benchm...
This paper develops a “big data” MF-VAR model that is used to produce both historical monthly estimates and more timely nowcasts of state GDP. The estimates and nowcasts are both cross-sectionally and temporally consistent with official lower-frequency, and less timely, state and U.S. GDP data from the BEA. Our model e...
Tables 3 and 4 confirm that from the third month of the current quarter our (cross-state) MF-VAR model, (1), consistently produces more accurate point and density nowcasts and estimates than the benchmark state-specific model. The RMSFE and CRPS ratios in Tables 3 and 4 are consistently below unity, both at the state le...
B
We now have a Borda count election without any partial ballots, and since $A$ gained as many or more points than any other candidate, $A$ remains the winner of the election. Although new last-place votes were introduced, the number of last-place votes for $A$ remained the same, and hence this...
We have shown that $A$ wins in this Borda count election with each ballot being complete. Additionally, we see that $A$ still loses each pairwise comparison, making them the Condorcet loser candidate. Hence we have a Condorcet loser failure for the Borda count, which is a contradiction. Thus, it is im...
It was shown (Fishburn and Gehrlein 1976) that the Borda count, assuming all ballots are complete, cannot have a Condorcet loser failure. This implies a majority loser failure is also impossible. We now show two variations of Borda, particularly MBC and ABC, are also immune to majority loser failures, and in the case o...
The averaged Borda count (ABC) can never exhibit a Condorcet loser failure, and thus also cannot have a majority loser failure.
A primary criticism of the Borda count is its susceptibility to majority winner failures, but note how rare these failures actually are. BCU and MBC's 1.3% correspond to just 3 elections with majority winner failures, with ABC exhibiting this failure only once. Meanwhile, EBC and QBC never have this failure, ...
C
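A minimal Borda count sketch with complete ballots, including the pairwise comparisons used to identify a Condorcet loser; the ballots are illustrative, not taken from the excerpt's elections.

```python
from itertools import combinations

# With m candidates, a complete ballot awards m-1 points to its top
# choice down to 0 points for its last-place choice.
ballots = [
    ["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"],
    ["C", "B", "A"], ["A", "B", "C"],
]
candidates = sorted({c for b in ballots for c in b})
m = len(candidates)

scores = {c: 0 for c in candidates}
for ballot in ballots:
    for rank, cand in enumerate(ballot):
        scores[cand] += (m - 1) - rank

print(scores, "winner:", max(scores, key=scores.get))

# A Condorcet loser loses every head-to-head matchup.
for x, y in combinations(candidates, 2):
    x_over_y = sum(b.index(x) < b.index(y) for b in ballots)
    print(f"{x} vs {y}: {x_over_y}-{len(ballots) - x_over_y}")
```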
While some studies have explored SDG interconnections in India, there remains a need for detailed analyses of state-level contributions toward SDG achievements. As each Indian state contributes uniquely to SDG progress, establishing a bipartite network that links states to their SDG achievements could facilitate system...
In conclusion, the study acknowledges the complexity of achieving the Agenda 2030 goals and emphasizes the need for a comprehensive analysis using advanced complexity mapping models. For developing nations like India, accelerating efforts towards sustainability is critical. The study introduces an SDGs-GENEPY framework...
For a long time, economic discussions have centered around growth as a key development measure. However, relying solely on growth as an indicator has limitations and certainly does not capture the full picture of development. Many studies have emphasized the importance of a more holistic approach to measuring developme...
Our approach draws on the pioneering work of Hidalgo and Hausmann (2009), who developed the economic complexity index to assess a country’s knowledge base based on its export data. Their method, called the “method of reflection,” uses an iterative approach to link product ubiquity with country diversification, resultin...
This study considers SDGs and Indian states as a bipartite network with weighted links, where the weights correspond to the scores obtained. It further employs the framework by Sciarra et al.[25] called Generalized Economic Complexity (SDGs-GENEPY). The goal of this framework (as well as its predecessors) is to obtain ...
C
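A minimal sketch of the "method of reflections" mentioned above, iterating diversification and ubiquity on a binary bipartite matrix; the matrix here is synthetic, standing in for the states-by-SDGs network.

```python
import numpy as np

# Rows: states/countries; columns: products/SDGs. Ensure nonzero degrees.
rng = np.random.default_rng(9)
M = (rng.random((8, 12)) < 0.6).astype(float)
M[np.arange(8), np.arange(8)] = 1.0          # every row keeps at least one link
M = M[:, M.sum(axis=0) > 0]                  # drop any unlinked columns

k_c0 = M.sum(axis=1)                         # diversification (row degree)
k_p0 = M.sum(axis=0)                         # ubiquity (column degree)
k_c, k_p = k_c0.copy(), k_p0.copy()
for _ in range(20):                          # alternating reflections
    k_c, k_p = (M @ k_p) / k_c0, (M.T @ k_c) / k_p0

print(np.argsort(-k_c))                      # complexity ranking of the rows
```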
Even with a continuous covariate, we can nonparametrically estimate the ordinal distribution conditional on two distinct covariate values, then use our results to interpret the differences in terms of the latent distributions.
Or more simply, the continuous covariate can be discretized, which would also generally increase statistical precision.
This approach is valid but may be conservative in some cases; it would be valuable to refine its precision in future work.
Even with a continuous covariate, we can nonparametrically estimate the ordinal distribution conditional on two distinct covariate values, then use our results to interpret the differences in terms of the latent distributions.
Our results can also extend to comparison of conditional distributions, which can be estimated by semiparametric or non-parametric “distribution regression” even with continuous regressors (e.g., Frölich, 2006).
A
$\mathbf{X}_{i}\big]:i=1,2,\dots,N\big\}$ is independent and identically distributed.
In the current paper, we allow for $G\geq 2$ treatment levels and study the problem of joint estimation of the vector of PO means. We assume that the assignment to treatment is random – independent of both potential outcomes and observed predictors of the POs. In terms of the assignment mechanism, we as...
The i.i.d. assumption is not the only one we can make. For example, we could allow for a sampling-without-replacement scheme given a fixed sample size $N$. This would complicate the analysis because it generates a slight
This section studies the finite sample properties of the different estimators of the PO means, namely, SM, PRA, SRA, and their nonlinear counterparts. For the simulations, we generate a population of one million observations and mimic the asymptotic setting of random sampling from an “infinite” population. The empirica...
et al. (2020). Such a framework will affect inference only in cases where the sample is a substantial fraction of the population. If the sample size is small relative to the population size, the finite-population adjustments of the kind derived in Abadie
B
When a commercial algorithm has a disparate impact, it is important to understand whether it is possible to reduce that disparate impact without compromising on other business-relevant criteria, or if this level of disparate impact is necessitated by those other goals. We have designed a statistical approach to assess ...
In words, improvement convergence says that whenever the algorithm $a_{0}$ is $(\Delta_{r},\Delta_{b},\Delta_{f})$-...
We expect that, for many datasets, it will be possible to reject the null using a naive selection rule that does not satisfy improvement convergence, as is the case in our empirical illustration in Section 6. However, Theorem 4.2 guarantees that the null will be asymptotically rejected (in cases where it should be) whe...
This result assumes that the selection rule $\rho$ does not select a candidate algorithm that is arbitrarily fair, even asymptotically. This is related to our discussion at the end of Section 3.2: if the algorithms chosen by the selection rule converge to perfect fairness, then our bootstrap procedure is no lo...
The main drawback of our approach is its dependence on a selection rule for choosing a candidate algorithm. When we are able to reject the null, as in our application, then the optimality properties of the selection rule (in particular, whether it satisfies improvement convergence as defined in Section 4) do not matter...
D
$U(\sigma^{p},\,\delta_{0}/(1-p))=U(\sigma,\delta_{0}).$
The subsequent Lemma 3, which is essentially a locally robust PPE folk theorem, follows immediately from Lemma 2, the continuity of payoffs in δ𝛿\deltaitalic_δ, and familiar self-generation arguments à la APS 1990 and FLM 1994.
A major difficulty in the analysis under imperfect monitoring is that such games are usually studied via recursive techniques involving the set of equilibrium payoffs (see Abreu, Pearce, and Stacchetti, 1990). Because payoffs of a given strategy profile, and the “self-generation” operator itself, depend on the discount...
Our proof differs in three ways from this. First, we use the notion of $\eta$-strong self-generation, to leave "wiggle room" for varying discounting and achieving robustness. Second, we show this property for closed balls, rather than directly for all smooth sets; this is analytically more tractable. Points in...
Given the Reboot Lemma, existence of the desired Blackwell equilibrium follows from the construction of a robust equilibrium. Existence of a robust equilibrium will be demonstrated by adapting arguments from Abreu, Pearce, and Stacchetti (1990) (henceforth APS 1990). Robustness requires incentives to hold for a range o...
D
How will the auctioneer’s benefit in the optimal Istanbul Flower Auction vary with the changes in the time cost parameter, and with market competitiveness (i.e., the number of bidders)?
If the starting price is set to enhance other aspects of the Istanbul Flower Auction, rather than prioritize auctioneer benefits, will the auction still serve the auctioneer’s interests?
Istanbul Flower Auction that strictly outperform both the Dutch auction and the English auction when the focus of optimization shifts to the bidder utility, the social welfare, or the speed of the auction, respectively. To emphasize the difference from the auctioneer’s optimal starting price $s^{*}$...
Will the auctioneer-optimal Istanbul Flower Auction enhance the benefits for others in terms of bidder utility, social welfare, and auction speed?
We further perform numerical analyses that illustrate that, for the starting price that maximizes the auctioneer’s expected utility, the Istanbul Flower Auction performs better than the Dutch auction in terms of social welfare, bidder welfare, and expected auction duration. The auction’s performance improves significantly w...
C
However, in practice people normally violate this subgame perfect equilibrium solution, because the first player offers the second between $50\%$ and ${\sim}75\%$ of the resource $X$ [34, 35, 36]. Several other experimental features of the ultimatum game are review...
Several approaches were proposed for explaining this seemingly irrational behavior; see [36] for a recent review of various approaches devoted to the ultimatum game. The best known of them is based on fairness, which (for equal utilities of the players) states that the proposer chooses somewhere between the equal split...
The drawback of axiomatic bargaining is that it does not provide a constructive framework for realistic negotiations. Zeuthen, and later Harsanyi, studied an iterative scheme of mutual compromises that led to the Nash solution [1, 6, 7]. Rubinstein developed an iterative bargaining scheme with penalties for longer nego...
We applied Weber’s law to the ultimatum game [34, 35, 36]. This asymmetric game (i.e., the players have different roles) emerged as a basic challenge to game-theoretic rationality, because people here do not behave according to the Nash equilibrium from non-cooperative game theory. The ultimatum game does not in...
We show that our results apply to the ultimatum game, which does not involve real bargaining, but still closely relates to offer formation. The challenge of the ultimatum game is that people do not follow its sub-game perfect Nash equilibrium, thereby defying the basic tenet of classical rationality. Our approach to so...
A
$\left\lfloor T^{1/3}\right\rfloor$
$\left\lfloor T^{1/2}\right\rfloor$
$\left\lfloor T^{1/2}\right\rfloor$
$\left\lfloor T^{1/2}\right\rfloor$
$\left\lfloor T^{1/2}\right\rfloor$
A
Benchmarking the model against state-of-the-art approaches on more complex graph datasets to further validate its performance.
Both MAE and MSE are indispensable tools, with their applicability depending on the specific requirements of the task. MAE is robust to outliers, offering a balanced perspective on overall accuracy, while MSE emphasizes substantial errors, favoring models with fewer extreme deviations.
In this study, MAE and MSE are employed to evaluate the predictive performance of MLP, GNN, and GCN models, providing complementary insights into their strengths and weaknesses.
Table 2: Comparison of Task 1 Part 2 results: test MSE and MAE for different models on various products.
Table 1: Comparison of Task 1 Part 1 results: test MSE and MAE for different models on various products.
D
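For reference, the two metrics used in this row are one-liners; the toy numbers below (our own) show how a single large miss dominates MSE while moving MAE only linearly.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: every error counts linearly, robust to outliers."""
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    """Mean squared error: penalizes large deviations quadratically."""
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([10.0, 12.0, 9.0, 30.0])
y_pred = np.array([11.0, 12.5, 8.0, 20.0])
print(mae(y_true, y_pred))  # 3.125
print(mse(y_true, y_pred))  # 25.5625, dominated by the one 10-unit miss
```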
$H=\nabla_{w}^{2}Q(w_{1})\,\nabla_{w}^{2}Q(w_{2})^{+}.$
Assumption A1) is standard, indicating that the observable variables $\mathbf{X}$ can be written as a linear combination of the structural errors $\mathbf{S}$ via the matrix inverse $\Lambda^{-1}=A$. The matrix $A$...
A recurring theme in these works is the need to assume uncorrelated structural errors. This is unsurprising in the context of VAR models, where uncorrelated shocks are both familiar and, to some extent, desirable. Yet in many other simultaneous-equation settings, the uncorrelatedness assumption may be overly restrictiv...
In some econometrics applications, researchers may wish to impose the additional assumption that the structural errors are uncorrelated. This assumption provides extra information about the second cumulant, namely that the covariance matrix of $X$ is
Such concerns have appeared frequently in econometric applications. One of the earliest “negative results” originates in the vector autoregression (VAR) literature: if no further restrictions are imposed and one assumes a simple setting in which structural errors are uncorrelated and normally distributed, then the dist...
C
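The second-moment restriction referred to in this row can be written out. With $\mathbf{X}=A\mathbf{S}$ (so $A=\Lambda^{-1}$) and uncorrelated structural errors with variances $\sigma_{1}^{2},\dots,\sigma_{K}^{2}$ (notation ours), the covariance matrix of $X$ is

$$\Sigma_{X}=\operatorname{Cov}(X)=A\,\operatorname{Cov}(S)\,A^{\top}=A\,\operatorname{diag}(\sigma_{1}^{2},\dots,\sigma_{K}^{2})\,A^{\top}.$$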
Finally, our use of ChatGPT as a WEIRD benchmark revealed consistently high prosocial behavior. This somewhat diverges from typical behavior observed in WEIRD populations but aligns with previous findings on language models’ tendency towards prosociality [mei2024turing, atariwhichhumans]. This latter observation highli...
Our results reveal substantial cross-cultural variability which suggests that our SCAs qualitatively capture the cultural nuances influencing economic behavior in these societies. For example, the behavior of SCAs representing horticultural societies like the Machiguenga and Tsimané exhibited more self-interested tende...
Our results for the Yanomami, a previously unstudied population in these experimental contexts, provide novel insights. The Yanomami SCA demonstrates behavior closer to homo economicus in the Dictator Game and as a proposer in the Ultimatum Game compared to other synthetic agents. However, as a responder in the Ultimat...
Our results reveal substantial cross-cultural variability in experimental behavior, an absence of purely self-interested (homo economicus) behavior across all SCAs, and qualitative resemblances to real human populations where data are available. These findings not only generally align with previous research on small-sc...
Overall, the results demonstrate the ability of SCAs to capture cross-cultural variation in economic decision-making by inferring cultural nuances from the generated profiles. The observed patterns align well with existing anthropological data on factors influencing game behavior, such as market integration, community ...
D
Interestingly, the relationship with top 21–100 publications does not display the same consistent positive pattern. For that tier, causal narrative complexity measures have positive but small coefficients that are often statistically insignificant, particularly when year fixed effects are included. Non-causal measures ...
Note: This figure shows coefficient estimates relating narrative complexity measures (number of edges, number of unique paths, longest path length) to publication outcomes (Top 5, Top 6–20, Top 21–100 journals) and citation counts. Estimates are displayed for both “Non-Causal” and “Causal” versions of the measures, and...
Turning to citations, a stark and consistent difference emerges between causal and non-causal novelty measures. Causal novelty measures are all positive and statistically significant, with large effect sizes, demonstrating the importance of introducing new causal relationships in generating long-term scholarly attentio...
Turning to citations, all causal narrative complexity measures are positively associated with citation counts, with large and statistically significant coefficients. This indicates that complexity grounded in credible causal relationships enhances longer-run academic influence. In contrast, non-causal measures are nega...
Figure 12 presents the coefficient estimates for narrative complexity measures, which include the number of edges, the number of unique paths, and the log of the longest path length, analyzed separately for the non-causal and causal subgraphs. The results highlight that causal narrative complexity is positively associa...
C
As shown in Figure 1, panel (a), total quantities across asymmetry specifications center around $Q\approx 40$. This falls between the static Nash benchmark (48) and the collusive benchmark of alternating monopoly. The output produced by algorithms is restricted relative to the c...
The comparison of total profits, shown in panel (c), is less straightforward. In the presence of cost asymmetries, it matters not only which quantity is produced, but also which of the two firms produces which output. We find that total profits under simulated algorithms are smaller than under joint profit maximization ...
As asymmetry increases, bargaining solutions align increasingly well with simulated output, particularly equal relative gains (for both disagreement profits). Interestingly, the simulation results in terms of quantity are closest, on average, to those predicted by a monopoly (joint profit maximization). However, this a...
Our main findings are as follows. Somewhat surprisingly, both consumers and firms can benefit from asymmetry. As firms become more asymmetric, the more efficient firm produces additional output. Some of these efficiency gains are indirectly shared with consumers. However, the algorithms in our setting do not implement ...
We next investigate the effect of increasing asymmetry between firms. According to our simulation results, total quantity increases in asymmetry. However, for low levels of asymmetry, the simulated quantities are above the joint profit-maximizing solution (monopoly quantities), that is, the algorithms are less collusiv...
D
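The benchmarks quoted in this row (a static Nash total output of 48 and the joint-profit-maximizing output) are easy to reproduce in a linear Cournot model. A minimal sketch; the demand intercept and costs are illustrative choices that deliver the symmetric Nash output of 48, not the paper's calibration.

```python
# Linear Cournot duopoly: inverse demand P(Q) = a - Q, constant marginal costs.
# Illustrative parameters (a = 100, c = 28) reproduce the symmetric static
# Nash total output of 48 quoted above; they are not the paper's calibration.

def nash_quantities(a, c1, c2):
    """Interior Cournot-Nash outputs q_i = (a - 2*c_i + c_j) / 3."""
    return (a - 2 * c1 + c2) / 3, (a - 2 * c2 + c1) / 3

def monopoly_quantity(a, c1, c2):
    """Joint profit maximization: produce everything with the cheaper technology."""
    return (a - min(c1, c2)) / 2

a, c1 = 100.0, 28.0
for c2 in (28.0, 32.0, 36.0):             # increasing cost asymmetry
    q1, q2 = nash_quantities(a, c1, c2)
    print(f"c2={c2:.0f}: Nash Q={q1 + q2:.2f}, "
          f"monopoly Q={monopoly_quantity(a, c1, c2):.2f}")
```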
However, when the competitors are strategic complements, the sign of the SEA and of the SED are the same, so that, when the investment makes the incumbent tough (again, SED<0), then the SEA is now negative and the incumbent has an incentive to underinvest, that is, a different “puppy dog strategy”, which consists of be...
However, when the competitors are strategic complements, the sign of the SEA and of the SED are the same, so that, when the investment makes the incumbent tough (again, SED<0), then the SEA is now negative and the incumbent has an incentive to underinvest, that is, a different “puppy dog strategy”, which consists of be...
For all numerical results discussed in the next section, the slopes of the best responses are negative, so that the competitors are strategic substitutes. And the SEA is positive. Therefore, the SED is also positive. Thus, both deterrence and accommodation call for overinvestment and, therefore, a “top do...
Thus, when the competitors are strategic substitutes, the signs of the SEA and of the SED are different, so that, when the investment makes the incumbent tough (i.e., SED<0), then the SEA is positive and the incumbent has an incentive to overinvest, that is, it chooses the same “top dog strategy” as in the deterrence c...
The structure of the report is the following. The next section presents the related work. Section 3 describes the model of the different agents in the market. Section 4 describes the strategic game that frames the decisions of the two aggregators. Section 5 discusses the numerical results obtained in a comparative statics anal...
B
Finally, the policymakers’ optimal forecasts are determined by their preferences and their aversion to different types of forecasting errors. This implies that the central bank’s loss function has a crucial role in the forecast evaluation process. Most of the literature has focussed on squared error loss, partly because...
The last panel of Figure 5 reports the test statistic for the null of equal predictive accuracy of the Bank of England and the survey forecasts. A negative value of the test statistic indicates a lower loss for the Bank of England, i.e. better performance with respect to the surveys. As in Figure 4, different loss func...
Figure 5 reports the survey forecasts for the 1-, 2- and 4-quarter-ahead horizons, along with the Bank of England forecasts and the realised inflation for 2004.Q4 to 2023.Q4. Notice that, in this case, due to the forecast horizon alignment, we have only one survey forecast per year, namely the one for Q4, therefore the sa...
This section contains a formal evaluation of the performance of the Bank of England’s inflation forecasts over recent years. Figure 2 reports the Bank of England modal inflation forecasts for horizons one to twelve quarters ahead for the period 2014.Q1 to 2023.Q4 along with the realised CPI inflation, and the forecasts...
Note: Bank of England (BoE) inflation forecasts at forecasting horizons of 1, 4, 8 and 12 quarters, along with realized CPI inflation and the forecasts from the random walk (RW) and the autoregressive (AR) benchmarks. Quarterly observations from 2014.Q1 to 2023.Q4. The vertical line indicates the mid-sample point 2018....
C
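A test of equal predictive accuracy of the kind reported in this row can be sketched à la Diebold–Mariano. The squared-error loss and the Bartlett truncation lag below are illustrative choices, and the forecast-error series are simulated placeholders.

```python
import numpy as np

def dm_statistic(e_boe, e_survey, loss=lambda e: e**2, lag=4):
    """Diebold-Mariano-type statistic; negative => lower loss for the BoE."""
    d = loss(np.asarray(e_boe)) - loss(np.asarray(e_survey))
    T = len(d)
    dbar = d.mean()
    u = d - dbar
    # HAC (Bartlett-kernel) long-run variance of the loss differential.
    lrv = u @ u / T
    for k in range(1, lag + 1):
        w = 1.0 - k / (lag + 1)
        lrv += 2.0 * w * (u[k:] @ u[:-k]) / T
    return dbar / np.sqrt(lrv / T)

# Usage with hypothetical forecast-error series:
rng = np.random.default_rng(2)
e_boe, e_survey = rng.normal(0, 1, 80), rng.normal(0, 1.2, 80)
print(dm_statistic(e_boe, e_survey))   # compare to N(0,1) critical values
```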
While the equilibrium model considers wind power producers as the sole users of ML, the private and social benefits are evident even within this narrow setting. As ML applications in electricity markets continue to grow, it is reasonable to include more ML users, such as conventional generators, demand, and system oper...
This paper studies how stochastic producers can cohesively extract more revenues from their forecast models in competitive electricity markets while implicitly enhancing the market efficiency across the day-ahead and real-time markets. In the setting of interest, the ML model selection by each producer affects the reve...
The objective here is to minimize the average social cost of electricity, regularized by the total regression loss up to a chosen parameter $\boldsymbol{\Gamma}$. The feasible region includes the day-ahead and real-time market-clearing constraints, enforced on each sample $i$ of the training dataset...
Finally, this work complements studies of competitive electricity market equilibrium and the ability of marginal pricing to ensure socially optimal outcomes. Marginal pricing has been proven to ensure sufficient regulation capacity in real-time markets [28]. The work in [1, 2, 3, 29, 5] established the existence of mar...
The barriers for implementing the stochastic market-clearing models in [1, 2, 3, 4, 5, 6, 7] motivated several strategies to coordinate the day-ahead and real-time markets within existing, deterministic market-clearing models. The strategy in [11] computes the optimal day-ahead wind power offering by anticipating the c...
D
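The regularized objective described in this row can be written schematically, with $\mathrm{SC}_{i}(\theta)$ the social cost of clearing training sample $i$ and $\ell_{i}(\theta)$ the producers' regression loss on that sample (symbols ours, not the paper's):

$$\min_{\theta}\;\frac{1}{N}\sum_{i=1}^{N}\mathrm{SC}_{i}(\theta)\;+\;\boldsymbol{\Gamma}\cdot\frac{1}{N}\sum_{i=1}^{N}\ell_{i}(\theta)\quad\text{s.t. day-ahead and real-time clearing constraints for each }i.$$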
Furthermore, Theorem 1(b) elaborates on these gains in a positive semi-definite sense for a vector of regression-adjusted estimators. This result encompasses any linear combination of the unconditional distribution estimators and implies potential efficiency gains for both DTE and PTE estimators. While a similar positi...
Figure 3 presents the DTE and PTE of the intervention in comparison to the control group. We compute the DTE for $y\in\{0,1,2,\dots,200\}$. The top left panel of Figure 3 displays the simple estimate of the DTE, whereas the top right panel illustrates the regressi...
Additionally, we assume that the model specification for distributional regression in (1) is correct.
We theoretically demonstrate that regression adjustment can reduce the variance of distribution estimators and enhance the precision of the DTE estimator.
We also show the point-wise semiparametric efficiency of the DTE estimator under correct specification in Section A.6 for the sake of completeness.
D
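A minimal sketch of a regression-adjusted distributional treatment effect (DTE) estimator in the spirit of this row: fit a distribution regression $P(Y\le y\mid X)$ within each arm, average the fitted probabilities over the full covariate sample, and add a within-arm residual correction. The logistic link and all implementation details are our assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adjusted_cdf(y_thr, Y, D, X, arm):
    """Regression-adjusted estimate of F_arm(y_thr) = P(Y(arm) <= y_thr)."""
    sel = D == arm
    z = (Y[sel] <= y_thr).astype(int)
    if z.min() == z.max():                   # degenerate threshold: no fit needed
        return float(z.mean())
    m = LogisticRegression().fit(X[sel], z)
    m_all = m.predict_proba(X)[:, 1]         # fitted P(Y <= y | X) for all units
    # Average prediction plus the within-arm residual correction.
    return m_all.mean() + (z - m_all[sel]).mean()

def dte(y_thr, Y, D, X):
    return adjusted_cdf(y_thr, Y, D, X, 1) - adjusted_cdf(y_thr, Y, D, X, 0)

# Usage on simulated data:
rng = np.random.default_rng(3)
n = 2_000
X = rng.normal(size=(n, 2))
D = rng.binomial(1, 0.5, size=n)
Y = X @ np.array([1.0, -0.5]) + 0.5 * D + rng.normal(size=n)
print(dte(0.0, Y, D, X))
```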
The framework’s performance was benchmarked against established methods, including Dynamic Programming [6], Monte Carlo simulations [29], Hybrid Deep Monte Carlo [60], Multi-Objective Optimization [21], and Hybrid RL with Generative Models. Table 3 summarizes the results.
Multi-Objective Optimization: Balanced performance with a surplus of $12,467.12 and strong efficiency (1,462.96). However, its static nature limits adaptability in dynamic scenarios [21].
Reinsurance optimization is a complex and dynamic field that requires robust, scalable, and adaptive models to manage financial risks effectively. The hybrid framework proposed in this study integrates generative modeling and reinforcement learning to address these challenges. This section evaluates the framework’s app...
Dynamic Programming: Achieved a final surplus of $12,487.71 with zero ruin probability and high efficiency (1,568.63). However, its scalability is limited by the “curse of dimensionality” [6].
Ruin Probability Constraint: The probability of financial ruin must not exceed a predefined threshold [4]:
C
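The ruin-probability constraint stated in this row can be checked by simulation. A minimal sketch under an illustrative compound-Poisson surplus process; all parameters, including the threshold, are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

def ruin_probability(u0, premium, lam, claim_mean, horizon=100, n_paths=5_000):
    """Monte Carlo estimate of P(surplus drops below 0 within `horizon` periods)."""
    ruined = 0
    for _ in range(n_paths):
        surplus = u0
        for _ in range(horizon):
            n_claims = rng.poisson(lam)
            surplus += premium - rng.exponential(claim_mean, n_claims).sum()
            if surplus < 0:
                ruined += 1
                break
    return ruined / n_paths

eps = 0.01                                    # illustrative threshold
p_ruin = ruin_probability(u0=10_000.0, premium=1_200.0, lam=1.0, claim_mean=1_000.0)
print(p_ruin, "constraint satisfied:", p_ruin <= eps)
```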
Climate change is a pressing global risk that affects all of humanity, both directly and indirectly, often through complex and unforeseen mechanisms (IPCC, 2018; Stern, 2007). As climate change intensifies, natural disasters become more frequent and severe, posing significant threats to societies worldwide (Field et al...
Accurate and comprehensive media coverage is essential for raising public awareness of climate change and encouraging global cooperation (Boykoff & Boykoff, 2007; Painter, 2013; Schäfer, 2015). Media reporting helps shape perceptions of climate-related risks and can influence public support for mitigation and adaptatio...
Media-induced empathy—or its absence—may have wide-reaching implications for global climate efforts. Natural disasters are tangible manifestations of climate change, and accurate, balanced reporting on these events can foster empathy and solidarity (de Waal, 2009). If media outlets focus narrowly on certain types of d...
Our findings also show that increases in reporting are strongly skewed toward disasters that cause significant fatalities, implying potential threshold effects in drawing media attention. This relationship, however, varies considerably across countries. Research on the link between media coverage and public awareness, ...
Climate change presents risks not just through direct impacts, but also via complex, cascading effects (IPCC, 2018; Beck, 2009). The increasing prevalence of climate-related disasters underscores the urgency of global action (Field et al., 2012; Coumou & Rahmstorf, 2012). However, traditional media often appears “blind...
A
(ii) assemble evidence pertaining to each ideal, such as recent affective (emotional) states, significant events, and objective outcomes;
(iii) appropriately weight and aggregate this evidence according to its importance to overall life quality, and
according to where the average LS (Life Satisfaction) level lies on the scale, and according
(i) conceive of the domains, expectations, aspirations or other criteria salient to her sense of experienced life quality or satisfaction;
respondents according to an overall math score. This math score is a “plausible value” of the individual’s underlying
A
The logic of 1 holds more generally. Without intentional failing, “pass” and “fail” would be arbitrary, interchangeable labels. On any test, if type $\theta_{1}$ is more likely to pass than type $\theta_{2}$...
The proof has two parts. First, a standard argument (see Myerson, 1982) shows that every implementable social choice function can be implemented by a truthful and obedient mechanism. The second part is specific to testing. Consider a truthful, obedient mechanism. Whenever the principal recommends the agent to not try o...
The principal can give the agent one test from the set $T$. If the principal can give multiple tests, then the resulting compound test can be included in $T$. A compound test may have more scores than “pass” and “fail.” Section 7 extends the model to allow for nonbinary tests. The agent observes the...
Finally, we discuss two alternative assumptions about the agent’s control over the test result: (i) observable skipping and (ii) exogenous scores. Under (i), the agent cannot intentionally fail a test, but he can “skip” the test; skipping is observed by the principal. Under (ii), the agent can neither intentionally fai...
The principal chooses how to utilize the testing technology within a mechanism. Formally, we consider the following protocol. The principal elicits a type report from the agent. Based on the report, the principal selects one test to give the agent. The agent sees the test and privately chooses whether to try on the tes...
C
Our results show a stronger spatial concentration and skewness of industrial complexity values than export complexity values in Brazil. Only a few urban hubs have managed to achieve comparative advantages in the most complex services and specialized manufacturing activities, while low to intermediately complex agro-ind...
These results invite a rethinking of complexity as a policy tool in the context of regional development policies. Indeed, complexity-based indicators are multidimensional and highly dependent on the choice of underlying proxy data. In effect, they capture different dimensions of productive systems and knowledge bases, ...
These results imply that an economic diversification strategy that considers only exports may not be sufficient to exploit local knowledge, smart diversification, and growth potential in a large developing/emerging economy such as Brazil. When identifying opportunities with complete industrial data instead, for instance, significant opp...
Economic complexity literature has traditionally used international trade data at the country level to estimate country and product-level complexity indicators [5, 2, 1]. However, it seems that export-based complexity measures are inadequate in comprehensively capturing knowledge and productive capabilities in all econ...
In this article, we show that industrial complexity is more adequate at capturing the knowledge base and predicting economic growth at the micro-regional level in a large emerging economy, such as Brazil, than export data. Export data does not capture significant strengths of regions in services as well as manufacturin...
A
Under this model of voter updating and behavior, do heterogeneous treatment effects provide evidence that the relevant mechanism is voter learning? Ex-ante, the empiricists do not know that learning is the active (or operative) mechanism. To learn about mechanisms, many researchers will estimate HTEs, typically pointin...
For the outcome measuring voter preferences, $y_{1}$,
Under this model of voter updating and behavior, do heterogeneous treatment effects provide evidence that the relevant mechanism is voter learning? Ex-ante, the empiricists do not know that learning is the active (or operative) mechanism. To learn about mechanisms, many researchers will estimate HTEs, typically pointin...
The second outcome, $y_{2}$, measures each voter’s vote choice for the incumbent. Vote choice is obviously a more standard outcome in the literature on voter behavior. This outcome is given by:
For the outcome measuring voter choice, $y_{2}$,
A
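A minimal sketch of the HTE estimation step alluded to in this row: regress the outcome on treatment, a moderator, and their interaction, and read heterogeneity off the interaction coefficient. The moderator `prior` is a hypothetical stand-in for voters' prior beliefs, and the data-generating process is invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5_000
prior = rng.normal(size=n)                 # hypothetical prior-belief moderator
treat = rng.binomial(1, 0.5, size=n)
y1 = 0.2 * treat - 0.3 * treat * prior + rng.normal(size=n)  # voter preference

# OLS of y1 on (1, treat, prior, treat*prior); the last coefficient is the HTE.
Z = np.column_stack([np.ones(n), treat, prior, treat * prior])
beta = np.linalg.lstsq(Z, y1, rcond=None)[0]
print("interaction (HTE) coefficient:", beta[3])   # should be near -0.3
```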
$\mathbb{E}[D(U+V)]$
$\underbrace{\mathbb{P}(A=u)}_{\text{Prob. of prioritizing dim. }u}\times\underbrace{\mathbb{P}(U>0)}_{\text{Prob. of adopting inno. in dim. }u}\times\underbrace{\mathbb{E}[U+V\mid U>0]}_{\text{Combined perf. if adopting inno. in dim. }u}$
$\underbrace{\mathbb{P}(A=u)}_{\text{Prob. of prioritizing dim. }u}\times\underbrace{\mathbb{P}(U>0)}_{\text{Prob. of adopting inno. in dim. }u}\times\underbrace{\mathbb{E}[U+V\mid U>0]}_{\text{Combined perf. if adopting inno. in dim. }u}$
$\underbrace{\mathbb{P}(A=v)}_{\text{Prob. of prioritizing dim. }v}\times\underbrace{\mathbb{P}(V>0)}_{\text{Prob. of adopting inno. in dim. }v}\times\underbrace{\mathbb{E}[U+V\mid V>0]}_{\text{Combined perf. if adopting inno. in dim. }v}.$
$\underbrace{\mathbb{P}(A=v)}_{\text{Prob. of prioritizing dim. }v}\times\underbrace{\mathbb{P}(V>0)}_{\text{Prob. of adopting inno. in dim. }v}\times\underbrace{\mathbb{E}[U+V\mid V>0]}_{\text{Combined perf. if adopting inno. in dim. }v}$
B
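Cleaned up, the decomposition in this row reads $\mathbb{E}[D(U+V)]=\mathbb{P}(A{=}u)\,\mathbb{P}(U{>}0)\,\mathbb{E}[U+V\mid U{>}0]+\mathbb{P}(A{=}v)\,\mathbb{P}(V{>}0)\,\mathbb{E}[U+V\mid V{>}0]$. A quick numerical sanity check, assuming (our choices) independent normal innovations, a prioritization draw $A$ independent of both, and adoption $D$ exactly when the prioritized innovation is positive:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
U = rng.normal(0.2, 1.0, n)
V = rng.normal(0.1, 1.0, n)
A_is_u = rng.random(n) < 0.6              # prioritize dimension u w.p. 0.6
D = np.where(A_is_u, U > 0, V > 0)        # adopt iff the prioritized inno. is positive

lhs = np.mean(D * (U + V))
rhs = (0.6 * np.mean(U > 0) * np.mean((U + V)[U > 0])
       + 0.4 * np.mean(V > 0) * np.mean((U + V)[V > 0]))
print(lhs, rhs)                           # the two sides agree up to MC error
```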
In this section, we describe how we quantify the costs of the rise in volatility in terms of the rise in delays for inputs. Further, we measure the cost of delays in terms of the change in output, prices, and sourcing choices. To do so, first, we calibrate the rise in delivery delays to match the observed rise in input...
Next, we obtain the calibrated change in delays, $\Delta\text{delays}_{t}$, and the price of the input from China, $\tau p^{f}_{t}$...
We use the model to obtain a measure of delays that is able to rationalize both trends. We start by calibrating the model to match moments of the U.S. manufacturing sector in 2018. Then, we use data on the rise in tariffs applied by the Trump and Biden administrations to inputs from China, the decline in the imports fr...
To quantify the rise in delays, we internally calibrate the change in the relative price of inputs from China and the standard deviation of the delivery times for foreign inputs from China and ROW. We do so by matching the decline in the manufacturing imports from China over sales and the rise in input inventories over...
To measure and study the costs of the rise in delays in the aggregate economy, in this section we describe how we choose the parameters for our quantitative analysis. We start by calibrating the model to match moments of the U.S. manufacturing sector in 2018. Then, we use data on the actual tariff change, the decline i...
C
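The calibration described in this row is a moment-matching exercise: pick delay parameters so that model-implied moments (imports over sales, inventories over sales) hit their data counterparts. A minimal sketch in which `model_moments` is a hypothetical closed-form stand-in for actually solving the model, and the targets are made up:

```python
import numpy as np
from scipy.optimize import minimize

data_moments = np.array([0.045, 0.30])    # illustrative targets: imports/sales,
                                          # inventories/sales (not actual data)

def model_moments(params):
    """Hypothetical stand-in for solving the model at given delay parameters."""
    delay_mean, delay_sd = params
    imports_over_sales = 0.06 * np.exp(-0.5 * delay_mean)
    inventories_over_sales = 0.20 + 0.15 * delay_sd
    return np.array([imports_over_sales, inventories_over_sales])

def distance(params):
    g = model_moments(params) - data_moments
    return g @ g                           # equally weighted moment distance

res = minimize(distance, x0=np.array([0.5, 0.5]), method="Nelder-Mead")
print(res.x)                               # calibrated (delay mean, delay sd)
```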
$\int \ln\left( \int p(s_{1}\mid u,\omega_{2},\theta^{t})\,\mu(\omega_{2})\,d\omega_{2} \,\Big/\, \cdots \right)\cdots$
$\omega_{1}$. Because the researcher’s belief about each parameter
$\omega_{2}$ equals $\omega_{2}^{*}=0$. That is, the researcher
Note that only the researcher’s belief about $\omega_{2}$ enters
and that the researcher’s belief over $\omega_{1}$ assigns probability
C
$\succ_{1}$ is more ambiguity loving than $\succ_{2}$ if, for all $f\in\mathcal{F}$ and $x\in X$, $x\succ_{1}f$...
An agent is more ambiguity averse than another one if she is less inclined to choose an uncertain act $f$ over a constant act $x$. On the other hand, an agent is more uncertainty loving than another one if she is more inclined to stick to an uncertain act $f$ than to switch to a constant ac...
According to Axiom 5, if the outcome of an act is considered more desirable than the outcome of another act in each state of the world, then the first act is preferred to the second one. In other words, according to a preference relation satisfying Axiom 5, the state-wise dominance of an act $f$ over an act $g$...
While the DM cannot compare $f$ to the constant $x$, she declares $x$ more desirable than $g$. On the other hand, while she cannot compare $g$ to the constant act $y$, she declares $f$ more desirable than $y$.
Our axiomatization maintains the assumption that preferences are complete over constant acts, deemed the simplest ones. Axiom 6 underscores the role of constant acts as benchmarks for decision making: if the DM is unable to compare the act $g$ to the constant act $x$ whenever she is unable to compa...
A
Trajtenberg et al., (2009) propose a heuristics-based algorithm as an alternative. This method does not use an ML algorithm to determine the relative weight of different matching criteria, such as two patent documents sharing the same technology class or the same co-inventor team. Instead, the weights are set by the re...
Restricting attention to patent records from East Germany during 1989 and 1990 provides us with more detailed information about assignees and addresses, which helps us with disambiguation. However, a disadvantage of this focus is that we cannot accurately measure the productivity of inventors before 1989, or the length...
Trajtenberg et al., (2009) propose a heuristics-based algorithm as an alternative. This method does not use an ML algorithm to determine the relative weight of different matching criteria, such as two patent documents sharing the same technology class or the same co-inventor team. Instead, the weights are set by the re...
While we adopt this basic principle, we modify the approach to suit our specific setting. In Germany, middle names and initials are much less common than in the Anglo-Saxon context. Furthermore, they are not used consistently across different regions. To ensure clean and consistent name fields, we remove titles, change...
For example, if two patent records share a common name like PETER MUELLER, they will be classified as belonging to the same career if they indicate the same technology class and the inventor resides in the same municipality. However, for a rare name like ULRIKE JAINTA, the same municipality or assignee would be enough ...
C
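The frequency-dependent matching rule illustrated in this row can be sketched directly: common names must agree on more fields than rare names before two records are linked to one career. All thresholds, field choices, and example records below are illustrative.

```python
# Frequency-dependent disambiguation: the rarer the name, the less extra
# evidence is needed to link two patent records to one inventor career.
# Thresholds and fields below are illustrative, not the paper's calibration.

def same_career(rec_a, rec_b, name_counts, rare_cutoff=5):
    if rec_a["name"] != rec_b["name"]:
        return False
    evidence = sum([
        rec_a["municipality"] == rec_b["municipality"],
        rec_a["technology_class"] == rec_b["technology_class"],
        rec_a["assignee"] == rec_b["assignee"],
    ])
    # Rare names need one piece of corroborating evidence; common names
    # like PETER MUELLER need at least two.
    required = 1 if name_counts[rec_a["name"]] <= rare_cutoff else 2
    return evidence >= required

name_counts = {"PETER MUELLER": 120, "ULRIKE JAINTA": 1}
a = {"name": "ULRIKE JAINTA", "municipality": "Dresden",
     "technology_class": "C07", "assignee": "VEB X"}   # hypothetical records
b = {"name": "ULRIKE JAINTA", "municipality": "Dresden",
     "technology_class": "B23", "assignee": "VEB Y"}
print(same_career(a, b, name_counts))  # True: one match suffices for a rare name
```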