Dataset Viewer (auto-converted to Parquet)
context: string (100 – 2.3k characters)
A: string (100 – 1.9k characters)
B: string (100 – 2.18k characters)
C: string (100 – 2.18k characters)
D: string (100 – 2.69k characters)
label: string (4 classes)
With FaSTM∀N we address these motivations by making as few assumptions as possible. We argue that the designs of FlowScope and DBJ do not take all of these motivations into account.
In Section 2, we mentioned solutions for detecting communities of entities and for detecting flows of money. Detecting communities of flows is, to the best of our knowledge, a novel approach that we are proposing in this paper. FaSTM∀N is able to detect closely related and interdependent sequences of tr...
Fig. 11 shows the (topological) parameter-free nature of FaSTM∀N. This validates Motivations 2 and 3 described in Section 2, i.e., complex money laundering networks involve transactions among several parties, covering longer distances (more than 2 hops). In the left-most graph, we are using diameter on t...
We showed that with FaSTM∀N it is possible to follow all suspicious trails of money, for all nodes. Following Occam's razor, we make as few AML-related assumptions as possible. We are putting every conceivable money trail, within a reasonable time window Δw, under co...
Detecting (dense) flows of money is different from detecting communities in a transaction graph. A flow, by definition, has a temporal order. In the context of AML modeling, a dependent flow of transactions is, in most situations, more interesting than the immediate interactions a bank account has. For motifs in tempora...
A
BPS adopted the DA mechanism in 2005 after two years of community engagement and an evaluation of the Boston mechanism in collaboration with our team of design economists (see Section 6.10).
had an analogous error occurred. However, it remains unclear whether such a flawed implementation of the neighborhood priority policy existed between 2000 and 2005 under the Boston mechanism.
In these meetings, we established that BPS’s walk zone priority had been rendered ineffective due to the inadvertent use of the minimum guarantee choice rule instead of the over-and-above choice rule,
achieving this compromise required walk zone priorities to be implemented through the “over-and-above” choice rule.
It remains unclear whether BPS properly implemented its walk zone priority with the “over-and-above” choice rule between 2000 and 2005, prior to this reform.
D
Recruitment was done using ORSEE (Greiner, 2015). English-language instructions are available in Appendix E.2. IRB approval was granted by the Gesellschaft für experimentelle Wirtschaftsforschung e.V. on May 13, 2024 (Approval ID RNnxiot5), a German nonprofit association providing services to experimental econ...
Experiment 2 is able to replicate this pattern. Models 1–2 in Table 3 regress CA interventions on knowledge. Notably, the baseline rate of intervention is much lower than in experiment 1. Recall from Section 3.5.1 that the order of Choosers 1 and 2 was randomized between subjects. Thus, model 3 restricts the analy...
Experiment 2 exploits cases (i) and (iii) of Section 3.1 to investigate further aspects of the relationship between paternalism and knowledge. Experiment 2 is a follow-up to experiment 1, designed after learning the results of experiment 1. As in experiment 1, CAs are allowed but not required to intervene in the decisi...
Subjects were invited to participate in the online experiment at a date of their choosing. On that date, they were free to start the experiment at any point between 10am and 6pm. Students from Maastricht were allowed to participate at any time; they were recruited by email through a lecture. All in all, 610 CAs st...
Subjects were invited to participate in the online experiment at a date of their choosing. On that date, they were free to start the experiment at any point between 2pm and 6pm. Subjects were only allowed to intervene after passing a comprehensive set of comprehension checks. While we did not restrict the number of att...
C
1.0000e+00
1.0000e+00
1.0000e+00
1.0000e+00
1.0000e+00
A
The feature reduction analysis followed a structured approach, starting with correlation analysis to identify redundant features. We applied principal component analysis (PCA) to understand feature importance and dimensionality reduction potential. The resulting feature sets comprised the full set (26 features), a redu...
For all experiments, we use the ROC-AUC metric as the primary evaluation criterion, supplemented by precision-recall analysis to better assess performance on the minority (default) class. We maintain consistent random seeds, data normalization, and sampling procedures to ensure replicability and fair comparisons across...
To ensure robust evaluation, we implemented 5-fold cross-validation across all experimental configurations, maintaining consistent fold divisions across different models to enable fair comparisons. Performance metrics focused primarily on ROC-AUC scores, supplemented by precision-recall analysis to account for class im...
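A minimal sketch of the evaluation protocol described above, assuming a generic scikit-learn classifier and placeholder arrays X, y standing in for the credit-default features; the fold splitter is seeded once so that every model sees identical fold divisions, and scores are reported as ROC-AUC plus average precision for the minority class.

```python
# Hedged sketch: fixed 5-fold splits shared across models, scored with
# ROC-AUC and average precision (precision-recall) for the minority class.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate_model(model, X, y, seed=42):
    # Same fold divisions for every model: the splitter is seeded identically.
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    aucs, aps = [], []
    for train_idx, test_idx in cv.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        scores = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
        aps.append(average_precision_score(y[test_idx], scores))
    return np.mean(aucs), np.mean(aps)

# Example usage with synthetic data standing in for the 26 tabular features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 26))
y = (rng.random(1000) < 0.1).astype(int)   # imbalanced "default" class
print(evaluate_model(RandomForestClassifier(random_state=42), X, y))
```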
Our experimental analysis systematically evaluated model performance across temporal windows and feature sets, implementing rigorous controls to ensure reliable comparisons. For temporal window analysis, we compared model performance using four distinct time spans: 1-year (2022), 3-year (2019-2022), 5-year (2017-2022),...
We also controlled for external factors that might influence model performance. We implemented standardized preprocessing pipelines, including consistent scaling and encoding procedures across all experiments. Additionally, we maintained identical hyperparameter settings within each model architecture across different ...
B
First, we establish that most CAs believe that average Choosers open too many boxes relative to CAs' injunctive norm. Moreover, CAs tend to set the cap above their injunctive norm, replicating Grossmann and Ockenfels' (2024) finding that CAs leave room for Choosers to express themselves.
Story 2 implies that norm levels do not moderate the effect of deliberation on beliefs, but that there is such moderation of the effect on errors.
Second, our highly powered survey experiment reveals that exogenous Chooser deliberation has no effect on the cap.
We only report on CAs for whom data on the cap is available or, in EndoDelay, data on both the cap and the Chooser's day to decide. From an original sample of 2,714 CAs who took part in the experiment, this wholly preregistered filtering procedure leaves data on 2,702 CAs. Attrition and selection are thus not significant con...
Story 1 implies that norm levels moderate the effect of deliberation on beliefs, but that there is no such moderation of the effect on errors.
B
Alternatively, they might wait for the price to drop to lower levels that better match their current resource availability and risk profile.
In the simulation, coprocessors decide on auction and task bidding and execution based on their current load and reputation levels. Tasks that are less resource-intensive are executed by operators, while more complex tasks are auctioned to coprocessors. This dynamic task allocation mechanism ensures optimal resource ut...
Our model implements a dynamic and adaptive mechanism to adjust task allocations based on real-time performance data. This approach not only aims to balance the computational load across the network but also to refine the reward system based on evolving network conditions and coprocessor performance. Additionally, the ...
For tasks that exceed the complexity threshold of operators, an auction mechanism is employed to determine which coprocessor will undertake the task. This auction is a modified Dutch auction, where the initial high bid price is progressively lowered until a coprocessor accepts the task. However, unlike traditional Dutc...
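A toy sketch of a descending-price (Dutch-style) allocation loop consistent with the description above: the ask price starts high and is lowered until some coprocessor accepts. The acceptance rule based on load, reputation, and a willingness threshold is a hypothetical placeholder, since the paper's exact criteria and "modified" features are not shown in this excerpt.

```python
# Hedged sketch of a descending-price auction: the price is lowered step by
# step until some coprocessor's acceptance rule (a stand-in here) is satisfied.
from dataclasses import dataclass

@dataclass
class Coprocessor:
    name: str
    load: float        # current utilisation in [0, 1]
    reputation: float  # higher is better
    max_price: float   # highest price this coprocessor is willing to accept

    def accepts(self, price: float) -> bool:
        # Placeholder acceptance rule: enough capacity headroom and an agreeable price.
        return self.load < 0.8 and price <= self.max_price

def dutch_auction(start_price, coprocessors, decrement=1.0, floor=0.0):
    price = start_price
    while price >= floor:
        # Offer the current price to candidates, best reputation first.
        for cp in sorted(coprocessors, key=lambda c: -c.reputation):
            if cp.accepts(price):
                return cp, price
        price -= decrement        # nobody accepted: lower the ask and retry
    return None, None             # task remains unallocated

winner, clearing_price = dutch_auction(
    20.0,
    [Coprocessor("cp1", 0.5, 0.9, 12.0), Coprocessor("cp2", 0.3, 0.7, 9.0)],
)
print(winner.name if winner else "unallocated", clearing_price)
```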
This modified Dutch auction system ensures that tasks are allocated not only based on price competitiveness but also in a manner that respects the operational capabilities and current conditions of each coprocessor. This strategic allocation aids in maintaining system efficiency and task reliability, even under varying...
D
This section contains our empirical analysis. Section 3.1 describes the data and the steps we follow to prepare it for the empirical analysis. Section 3.2 discusses the model selection where we conduct a comparison study on the validation sample to select the model parameters for out-of-sample analysis, namely the samp...
We also conduct statistical tests to determine whether the mean and variance of the standardized residuals are 0 and 1, respectively. To this end, for each portfolio, we calculate the empirical mean and variance of the residuals over the 360 months. Subsequently, we perform t-tests to assess whether the aver...
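A small sketch of the residual diagnostics described above, assuming a vector of 360 monthly standardized residuals per portfolio; the mean is checked with a one-sample t-test, and the unit-variance check is done here with a chi-square test as one possible choice, since the exact variance test is not specified in this excerpt.

```python
# Hedged sketch: test whether standardized residuals have mean 0 (t-test)
# and variance 1 (chi-square test on the scaled sample variance).
import numpy as np
from scipy import stats

def residual_diagnostics(resid):
    """resid: 1-D array of standardized residuals for one portfolio."""
    n = len(resid)
    # H0: mean = 0
    t_stat, p_mean = stats.ttest_1samp(resid, popmean=0.0)
    # H0: variance = 1, using (n-1) * s^2 ~ chi2(n-1) under normality
    chi2_stat = (n - 1) * np.var(resid, ddof=1)
    p_var = 2 * min(stats.chi2.cdf(chi2_stat, df=n - 1),
                    stats.chi2.sf(chi2_stat, df=n - 1))
    return p_mean, p_var

# Example with simulated residuals for one portfolio over 360 months.
rng = np.random.default_rng(1)
print(residual_diagnostics(rng.standard_normal(360)))
```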
We consider the monthly returns of approximately 30,000 individual stocks from the three major stock exchanges in the US, namely, NYSE, AMEX and NASDAQ. The data is collected from CRSP over a period spanning 55 years from February 1962 to December 2016. We use the monthly Treasury bill rate as a proxy for the risk-free...
Next, we investigate the factors contributing to the improved performance of our portfolios, aiming to identify whether this enhancement stems from the non-linearity of the Gaussian process regression or from the ensemble learning approach. For that, we compare the performance of the portfolios from our ensemble GPR mo...
We evaluate the performance of decile portfolios, D1 to D10, and the long-short portfolio LS from our five portfolio strategies, namely EW, VW, PW, UW and PUW, in terms of their predicted monthly returns, the average realized monthly returns, their standard dev...
B
$\ln V_{t}=\alpha+\beta\ln V_{t-1}+W_{t};\ \ \ldots\ \ Q_{t}=kR_{t-1}-m\,\triangle R_{t}+hV_{t}+l+V_{t}U_{t}.$
$(Z_{t},W_{t})$ are IID random vectors with mean zero.
We already proved existence of a stationary distribution for the first two components $(\ln V,R)$. Next, $\xi_{t}:=(\ln V_{t},R_{t},U_{t})$ ...
Innovations $\eta:=(U,Z,W)$ are i.i.d. with mean zero. However, these components can be correlated.
The three-dimensional innovations vector $\eta$ has a continuous distribution on $\mathbb{R}^{3}$ with strictly positive density $f_{\eta}$ everywhere.
C
Similarly to what we pointed out in Remark 2.5, it is possible to prove an analogous result assuming a power utility $u(x)=\frac{x^{\gamma}}{\gamma}$, with γ...
Motivated by the previous discussion, we are now ready to illustrate the main contribution of this paper.
This type of consistency naturally leads to strategies that are not affected by a fixed time horizon; this is the basis of the well-known theory of forward performances, which we now recall, as it will play a crucial role in the discussion.
We conclude by mentioning an alternative class of stochastic utilities, where a deterministic utility is perturbed by a multiplicative martingale noise. While the expected utility of the agent on deterministic quantities is unaffected (and so are her preferences), the randomness of the market interacts with the marting...
The results described so far naturally lead to the question of whether an optimal random time exists to exit the investment plan. We now present a heuristic argument to illustrate why consistency offers the advantage of ensuring no regret, regardless of the exit time chosen by the agent.
D
We emphasize that, in contrast to the work [26], our risk-premium process $\theta\in\mathbb{H}^{2}_{\rm BMO}$ is unbounded in general.
Since this inevitably makes the market incomplete, our strategy in the following sections does not work in a complete market.
As we will see in later sections, this generalization is necessary to handle the mean-field market clearing equilibrium.
After we obtain the optimal strategy of each agent, we move on to characterize the mean-field equilibrium
Let us first derive, heuristically, the relevant mean-field BSDE, which then will be shown to characterize the market-clearing
B
$\ldots\mu_{g}-\frac{\theta}{2}w^{2}\sigma_{g}^{2}\bigr)$ ; $\,w\,\mu_{M}+(1-w)\,\mu_{f}-\frac{\gamma}{2}\,w^{2}\sigma\ldots$
After constructing the layers, we proceed to train the process by using FinRL for each calibration iteration. This training involves monitoring price fluctuations, making decisions (such as selling, holding, or buying a specific quantity of stock), and determining rewards based on various investor profiles. The outcome...
Each period involves establishing the RL environment, determining states and actions, and modifying reward functions as previously described. Specifically, the reward function is updated through a linear combination of the portfolio return and the weighted ESG score from each ESG rater for the corresponding period. We ...
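A minimal sketch of the reward update described above: the scalar reward each period is a linear combination of the portfolio return and a weighted ESG score. The trade-off weight `esg_weight` and the per-rater weights are hypothetical placeholders; the calibrated values used in the study are not shown in this excerpt.

```python
# Hedged sketch: reward = (1 - w) * portfolio return + w * weighted ESG score.
import numpy as np

def esg_reward(portfolio_return, esg_scores, rater_weights, esg_weight=0.3):
    """portfolio_return: scalar period return;
    esg_scores: scores from the ESG raters for the current holdings;
    rater_weights: weights over raters (summing to 1);
    esg_weight: hypothetical trade-off parameter between return and ESG."""
    weighted_esg = float(np.dot(rater_weights, esg_scores))
    return (1.0 - esg_weight) * portfolio_return + esg_weight * weighted_esg

# Example: one period, four raters weighted equally.
print(esg_reward(0.012, np.array([0.6, 0.7, 0.55, 0.65]),
                 np.array([0.25, 0.25, 0.25, 0.25])))
```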
Data Source: The study utilizes a dataset covering the period from 06/30/2018 to 06/30/2021 for the training set and 07/01/2021 to 06/30/2022 for the testing set. In each calibration, we choose to use ESG data from the four major ESG raters separately. In addition, the four kinds of proposed ensemble ESG scores in Sect...
By substituting these weights into the variance formula, we obtain the portfolio variances for the three types of investors, reflecting their different attitudes toward ESG factors and uncertainty. Next, we derive the differences in the portfolio variances for each pair of the three investor types:
A
Ideally, the allocation rule should be determined by the performance of teams that have an “average” probability of qualifying for the FIFA World Cup. The weakest teams do not appear in our database as they are eliminated in the qualification tournaments. The top teams can be separated into a “seeded” set, a kind of “e...
Inspired by the recent expansion of the FIFA World Cup to 48 teams, the current work has proposed allocation rules to distribute tournament slots among different sets of teams. Our approach adapts the methodology of the official FIFA World Ranking, a rating of national teams, to compare the performance of continents in...
Inspired by the recent expansion to 48 teams, Krumer and Moreno-Ternero (2023) explore the allocation of additional slots among continental confederations by using the standard tools of the fair allocation literature. The “claims” of the continents are based on the FIFA World Ranking and the World Football Elo Ratings...
Table 5 focuses on the slot allocation for the 2026 FIFA World Cup based on all available information. The rule of the maximal number of slots (Section 4.6) applies to the CONMEBOL, resulting in a quota of eight. Both separating the nations with the highest number of appearances and choosing the frequency of Elo update...
Inspired by this idea, two sets of seeded countries have been chosen based on the number of appearances in the FIFA World Cup from 1954 to 2022.
D
“Third Party Review” is an indicator variable equal to 1 if the center has undergone an external evaluation.
“Education and Care Preschool” is an indicator variable equal to 1 if the center is certified as combining daycare and educational functions.
“Third Party Review” is an indicator variable equal to 1 if the center has undergone an external evaluation.
Although not directly utilized in the structural estimation, I incorporate data on daycare center characteristics from various sources. First, I collect information from the municipality, including exact addresses, total capacity, whether the center is publicly operated, and whether it qualifies as a certified center t...
“Education and Care Preschool” is a binary variable equal to 1 if the center is certified as a facility that integrates daycare and educational functions.
A
During the period from May 8, 2022 to May 18, 2022, i.e., the first ten days after the Terra crash (Figure 17), we observed 13 users actively deleveraging their direct leverage staking positions and 5 users deleveraging their indirect leverage staking positions. This activity resulted in a total debt ...
However, these financial opportunities come with inherent risks. To address this, we conduct stress tests under extreme market scenarios to assess the vulnerabilities of leverage staking. Our findings reveal that leverage staking significantly amplifies the risk of cascading liquidations, leading to intensified selling...
Leverage staking amplifies the risks of cascading liquidations. The liquidation of leverage staking positions introduces additional selling pressure to the market, thereby exacerbating the decline in stETH prices and triggering further liquidations.
To summarize, users with leverage staking positions can take various actions to respond to potential liquidations. Regardless of their choices, these actions may contribute to additional selling pressure on stETH, further exacerbating price declines and liquidation cascades. This dynamic underscores the interconnection...
We perform stress tests on the Lido–Aave–Curve LSD ecosystem to evaluate the impact of leverage staking under extreme conditions of significant stETH devaluation. Our simulation reveals that leverage staking escalates the risk of cascading liquidations. Systemic risk is exacerbated not only through liquidations but ...
C
Now that we have listed the assumptions and set-up for the model, we proceed to state a number of key results expressed conveniently in the form of propositions. These propositions provide useful insights into the optimal individual–ESE scores, optimal group size, and loan ceiling constraints.
Finding an optimal ESE score for both the lenders and borrowers in an individual ESE–joint liability model is critical because it can help address the challenge of promoting sustainability in agricultural production while ensuring access to credit for small-scale farmers. By incorporating environmental and social facto...
decision-making has gained major traction. At the same time, while credit metrics for corporations have been extensively studied, there remains a noticeable gap in the assessment of individual borrowers, particularly farmers seeking credit to finance their agricultural operations. To address this gap, this paper intr...
In particular, in this paper, we propose a novel link between an individual–ESE score and a joint liability model and provide lenders and borrowers (where lenders are insurance companies and borrowers are farmers) with optimal scores and corresponding optimal group sizes by maximizing total utility of a group of indivi...
In this paper we have incorporated a novel concept coined as an individual–ESE score into a joint liability model for the purpose of evaluating the sustainability and creditworthiness of individual farmers acting as borrowers in the model. To achieve this objective we initially considered the expected profit of an indi...
A
1.13·10^10
2.24·10^10
1.13·10^10
1.11·10^10
2.24·10^10
A
Accepted (5.9×10^-1)
Table 3: For S = 0.5%, H = 0.7, and 10,000 simulated trajectories of one trading day using model 2: bias, standard deviation, and quadratic risk of the estimators. The last column is the output of the bootstrap statistical test of relevance of the spread, with a co...
Table 2: For S = 0.5%, H = 0.3, and 10,000 simulated trajectories of one trading day using model 2: bias, standard deviation, and quadratic risk of the estimators. The last column is the output of the bootstrap statistical test of relevance of the spread, with a co...
Table 4: For S = 0.5% and 10,000 simulated trajectories of one trading day using model 3: bias, standard deviation, and quadratic risk of the estimators. The last column is the output of the bootstrap statistical test of relevance of the spread, with a confidence of 95%...
Table 1: For S = 0.5% and 10,000 simulated trajectories of one trading day using model 1: bias, standard deviation, and quadratic risk of the estimators. The last column is the output of the bootstrap statistical test of relevance of the spread, with a confidence of 95%...
D
$=\boldsymbol{M}-(1-\theta)\,\Delta\tau\bigl(\tfrac{\sigma^{2}}{2}\boldsymbol{K}+(r_{b}-\tfrac{\sigma^{2}}{2})\,\ldots\ \boldsymbol{M}+\boldsymbol{P}_{long}\bigr).$
In the previous paragraphs, we described notable obstacles encountered in the HJB PDE and, more generally, in Black-Scholes PDEs. To quantify and resolve the above-mentioned issues, we suggest a hybrid framework of variable mesh creation by FEM and Crank-Nicolson-Rannacher (CNR) techniques. We, in particular, resort to higher-order FEM disc...
In the variable time-stepping case, CNR is implemented to reduce the possible spurious oscillation that might come from the pay-off function [5]. As depicted along the smooth solution line, all methods are in good visual agreement with each other, with or without the Rannacher adjustment. Henceforth we kept the numerica...
In the spirit of Newton-type algorithms, we solve the DAE (16) using the penalty-like algorithm which was first applied in [7] and then improved in [6]. As in any iterative method, there has to be at least one stopping criterion, usually a tolerance (tol) on two consecutive ite...
Optimal control problems, reformulated as HJB PDEs, are fully nonlinear PDEs with the nonlinearity entering through the first-order derivative. Usually, the treatment of degenerate or convection-dominated Black-Scholes PDEs is tedious to implement and generalize for the different tasks seen in Black-Scholes PDEs [1]. There are several w...
C
In this case $D^{0}_{q}=1$ and changes in the functional distribution mirror changes in the personal income distribution.
Although capital remains concentrated in the hands of high income earners, in modern capitalist economies different income groups earn both capital and labor incomes (Milanovic, 2017).
We therefore see that the marginal effect of the capital share depends not only on a more or less concentrated distribution of capital income across individuals ranked at the top of the income distribution, but also on the difference between the concentration parameters of the capital and the labor income distributions...
Although capital income remains relatively concentrated in favor of the overall rich, in modern capitalist economies both labor and capital incomes are distributed across different income groups.
Historically, a high concentration of capital incomes in favor of the rich (high $S^{0}_{q,K}$) and a high concentration of labor incomes in favor of people with low overall income ...
C
Conversely, the carbon price per unit (B) is the least sensitive variable in the framework. While changes in B do affect both the optimal alpha and the profitability, the overall impact for the firm is moderate compared to the other factors. Firms can manage the effects of carbon pricing thr...
The carbon price signifies the cost of emissions imposed by regulatory frameworks and is essential to a company’s profitability. This price is sensitive to regulatory or policy changes, with high carbon prices creating significant financial pressure on firms to reduce emissions. When the market carbon price is elevated...
In conclusion, firms should focus heavily on productivity improvements (β) to maximize their profitability while carefully managing their production costs (c) and their low-carbon production efficiency (k) to optimize investments. Carbon pricing (B), whi...
Table 4 presents various variables under different scenarios of low-carbon transition for enterprises, focusing on production efficiency, costs, productivity, and carbon pricing.
Companies must carefully balance the costs and benefits of low-carbon investments. This involves determining the optimal investment ratio, which refers to the proportion of financial resources allocated to low-carbon initiatives relative to total investments. Companies that can effectively manage financial risks and se...
B
Among models of the price evolution of basic securities, the so-called stochastic volatility models have again become very popular. One of them, sometimes referred to as the exponential Ornstein–Uhlenbeck model (see Mejia Vega (6)), is considered in the present note.
The general recommendation for providing an arbitrage-free price of an option is to compute its mean with respect to an equivalent martingale measure (EMM). The delicate issue here is with respect to which one.
The model has two sources of randomness, the equivalent martingale measure is not unique, and one cannot represent every contingent claim by a stochastic integral with respect to the price process S. The Black–Scholes paradigm of “pricing by replication” fails. There is no consensus among experts on how to pric...
of corresponding pay-offs with respect to an equivalent martingale measure satisfying some variational property.
The entropy-minimal equivalent martingale measure $Q^{o}$ (EMM) is defined as an equivalent martingale measure such that
A
(statistical) arbitrage strategy (Marshall et al., 2013). The trading behavior of this strategy resembles a “contrarian” approach, as it takes positions that counteract market overreactions. In this paper, we rigorously examine whether the predictive power of the “lead-lag spread” primarily arises from the cross-effect be...
The remainder of the paper is structured as follows. Section 2 introduces the high-frequency data structure in CFFEX and how we process the raw data, as well as the basic liquidity measures of different futures contracts. Section 3 presents the statistical estimators and models used to analyze tick-by-tick lead-lag net...
In this section, we present the results from our statistical analysis and backtesting. In Section 4.1, we examine the lead-lag relationships between stock index futures with different maturities. Section 4.2 analyzes the tick-by-tick joint movements across these futures contracts. In Section 4.3, we show how the strong...
Next, we fill in any missing futures contracts in an update event by using the data from the previous tick. While we recognize that filling missing values via previous-tick interpolation can introduce spurious lead-lag effects, the correlation level may decrease when asset processes are sampled synchronously at high fr...
As the pairwise lead-lag analysis showed, the tick-by-tick shifts in the futures contracts are highly correlated across all maturities, with the lead futures contract $F_{1}$ moving first. In order to analyze the joint dynamics of all future...
A
Our analysis suggests that nodes with similar prices are assigned to different zones under the alternative configurations being discussed (see Appendix B). This discrepancy can create economic inefficiencies and fairness issues if consumers and producers in these zones experience different electricity prices (Morales &...
uniform price in a whole BZ promotes transparency due to a single price signal in a large region, and it limits the market power that individual parties can exercise. Moreover, this approach enhances liquidity and makes cross-border trading easier in a multi-country system (NEMO Committee, 2024).
lead to regional inequities and distort the zonal price signals, meaning the prices do not accurately reflect the supply and demand conditions in specific subregions. This results in poor local incentives to adapt to energy prices, poor investment incentives, and an inefficient implementation of scarcity pricing (Papav...
In this process, the TSOs are evaluating 22 indicators related to market efficiency, network security, and the stability and robustness of the proposed configurations over time (ACER, 2020).
Moreover, temporally unstable zonal configurations (Table 1) can lead to long-term market inefficiencies by distorting price signals, reducing market liquidity, and creating investment uncertainty (Gugler et al., 2020). Inconsistent price patterns over time can diminish market liquidity by introducing uncertainty and ...
D
The remainder of the paper is organized as follows. In Section 2, we discuss the problem formulation. In Section 3, we present the theory of q-learning for jump-diffusions, followed by the discussion of q-learning algorithms in Section 4. In Section 5, we apply the general theory and algorithms to...
are standard Brownian motion and compensated Poisson random measure, respectively. Using them, we can rewrite the SDE for $\hat{S}$ as
For the reader's convenience, we present Algorithms 1 and 2, which summarize the offline and online q-learning algorithms, respectively. Such algorithms are based on the so-called martingale orthogonality condition in Jia and
For readers’ convenience, we first recall some basic concepts for one-dimensional (1D) Lévy processes, which can be found in standard references such as Sato (1999) and Applebaum (2009).
The study so far has been predominantly on pure diffusion processes, namely the state processes are governed by controlled stochastic differential equations (SDEs) with a drift part and a diffusion one. While it is reasonable to model the underlying data generating processes as diffusions within a short period of time,...
C
We also assume that the credit rating methodology remains the same between the train and test sets. This is an assumption made by the rest of the literature, and our training set is spread over an 18-year period; however, this does not rule out the possibility that the results are only valid over the time period t...
The paper shows that while LLMs are good at encoding textual data and inferring signals that traditional methods cannot pick up on, performance deteriorates when numerical data are combined with text in the prompt. The other advantage is that traditional methods offer increased interpretability and a better understandi...
The advantage is that it requires a lot less memory to fine-tune a model and in contrast to some parameter-efficient fine-tuning methods, adaptation can take place through the entire model stack.
Linked to the recent progress of LLMs, there has been growing interest in applying these models in a variety of downstream applications Kaddour et al. (2023). It has been shown that as generative LLMs scale, they acquire abilities that were not present in smaller LLM variants Zoph et al. (2022), such as modular arithme...
The LoRA fine-tuning methodology outlined in this paper is a parameter-efficient technique and has been shown to be competitive in a variety of settings Hu et al. (2021), but can be outperformed in some tasks by full fine-tuning and other adapter-based methods Xu et al. (2023). We compared the performance of the LoRA i...
D
The gradual but nonlinear adoption of AI technologies in transportation, supported by historical data on technology adoption curves.
This system of equations models the feedback loop between AI adoption and traffic congestion. As AI adoption increases, congestion is expected to decrease, while high levels of congestion may slow down AI adoption due to resistance from the public or technical challenges.
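The exact system of equations is not reproduced in this excerpt, so the following is only an illustrative coupled-ODE sketch of the qualitative feedback loop described: adoption lowers congestion, while high congestion slows adoption. All coefficients are hypothetical.

```python
# Hedged sketch: toy coupled dynamics for AI adoption A(t) and congestion C(t).
# dA/dt = r * A * (1 - A) * (1 - k * C)   logistic adoption, damped by congestion
# dC/dt = -a * A + b * (C0 - C)           adoption pushes congestion below its baseline
import numpy as np

def simulate(T=50.0, dt=0.1, r=0.4, k=0.8, a=0.3, b=0.2, C0=0.7):
    steps = int(T / dt)
    A, C = 0.05, C0                      # small initial adoption, baseline congestion
    traj = []
    for _ in range(steps):
        dA = r * A * (1 - A) * (1 - k * C)
        dC = -a * A + b * (C0 - C)
        A = min(max(A + dt * dA, 0.0), 1.0)
        C = max(C + dt * dC, 0.0)
        traj.append((A, C))
    return np.array(traj)

print(simulate()[-1])   # final adoption share and congestion level
```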
Adaptive traffic signals powered by AI algorithms are revolutionizing traffic management by improving flow and reducing congestion. [19] discuss how these systems utilize data from sensors and cameras to monitor vehicle patterns and dynamically adjust signal timings in response to real-time traffic conditions. By optim...
by numerical data from these case studies, the qualitative alignment strengthens its credibility and relevance to real-world contexts.
The feedback loop wherein improved traffic conditions from AI adoption further incentivize greater adoption over time, reflecting findings from real-world case studies like SCATS and autonomous vehicle trials.
D
You are a professional cryptocurrency analyst, specializing in predicting next week’s {”price trend of a cryptocurrency”/”market trend”} based on the provided candlestick chart. Your output should be in the form of:
Analyze the following information of crypto to determine its target in a week. Please respond with Rise or Fall and provide your reasoning for the prediction.
Analyze the following {”cryptocurrency”/”market”} information to determine the strength of the {”price trend of a cryptocurrency”/”market trend”} in a week. Please respond with Rise or Fall and provide your reasoning for the prediction.
You are a professional cryptocurrency analyst, specializing in predicting next week’s {”price trend of a cryptocurrency”/”market trend”} based on the provided information. Your output should be in the form of: Target: (predicted target)
You are a professional cryptocurrency analyst, specializing in predicting next week’s {”price trend of a cryptocurrency”/”market trend”} based on the provided candlestick chart. Your output should be in the form of:
B
$|\tilde{\varphi}(t)-\tilde{\varphi}(s)|\leq\tilde{\varphi}^{\prime}_{\textrm{left}}(\varepsilon)\,|t-s|$.
Any discrete-time scheme can be extended as a step-wise constant càdlàg scheme by setting, here in the case of $X$, $\overline{X}^{\dagger}_{t}:=\overline{X}_{\underline{t}}$ ...
As a consequence of this result, we also get some error bounds for the step-wise constant Euler scheme (Corollary 4.2).
We conclude this subsection by providing the error bounds relative to the step-wise constant càdlàg scheme.
Finally, some error bounds for the step-wise constant Euler scheme for $X$ are presented (Corollary 4.5).
C
$\mu(t,Z_{t})=\mathbb{E}\bigl[\tilde{\mu}(t,s_{t})\,\big|\,Z_{t}\bigr],\quad\sigma(t,Z_{t})=\mathbb{E}\bigl[\,\cdots\,\bigr].$
As $T\to\infty$, the limit of the time-average of the conditional expectation (36) is given by
Here, $\mathbb{E}[\cdot]$ denotes the conditional expectation with respect to the filtration up to time $t$.
By taking into account the boundary and terminal conditions and taking the conditional expectation $\mathbb{E}_{t}[\cdot]$ on both sides of the last equation, we obtain
$\mathbb{E}_{t}[\tilde{\mu}(\tau,s_{\tau})u_{z}]=\mathbb{E}_{t}\bigl[\mathbb{E}[\tilde{\mu}(\tau,s_{\tau})\,|\,\mathcal{F}_{\tau}]\,u_{z}\bigr]=\mathbb{E}_{t}\bigl[\mathbb{E}[\tilde{\mu}(\tau,s_{\tau})\,|\,Z_{\tau}]\,u_{z}\bigr]=\mathbb{E}_{t}[\mu(\tau,Z_{\tau})u_{z}].$
B
If $\rho$ is $C^{1}_{b}$ in $t$, $m$ is Lipschitz in $R$, and $\beta$ and $\sigma$ are both $C^{1}_{b}$ ...
(3) We employ BSVIEs to establish dynamically optimal controls for mean-variance portfolio selection problems under various stochastic volatility models. Our well-posedness results guarantee that these equilibrium solutions are both well-defined and solvable. Additionally, we observe that the myopic and intertemporal h...
By Theorem 2.4, the stochastic Lipschitz BSVIE (18) is well-posed over any arbitrarily large time interval. Furthermore, Theorem 2.4 allows us to explore mean-variance problems in more stochastic-volatility models within random investment markets; see Section 4.
After briefly reviewing the game-theoretic approach to TIC control problems, we now focus on dynamic MV portfolio selection in an incomplete market with stochastic investment opportunities. We first present a probabilistic (BSVIE) representation of dynamically optimal MV portfolios. Then, we prove the existence and uni...
Lemma 4.3 follows directly from our well-posedness results in Theorem 2.2 and Theorem 3.2. Later, we will illustrate with some common stochastic volatility models that satisfy the assumptions made. Moreover, the extension results from Theorem 2.4 allow us to relax those assumptions, making the framework more adaptable ...
D
$\Pi(\mathrm{KC})=L\sqrt{P(\mathbb{I}_{\mathrm{PHI}})/P(\mathbb{I}_{\mathrm{KC}})}$ and $\Pi(\mathrm{PHI})=L\sqrt{P(\mathbb{I}_{\mathrm{KC}})/P(\mathbb{I}_{\mathrm{PHI}})}$
The liquidity remaining, as a percentage of the initial cash reserves, is shown as a time series in Figure 1(d).
Taking inspiration from the form of $U^{H}$ and following the structure of the utility functions in [12], for the remainder of this work we will focus on a special structure for AMMs based on the remaining cash reserves for all outcomes, i.e., ...
Notably, for both of these LBAMMs, the liquidity available Π scales linearly with the initial cash reserves L; for this reason we quote all profits and losses as a percentage of the initial cash reserves rather than as an absolute value.
In this final case study, we explore the use of an LBAMM for pricing European options with a fixed expiry. Notably, for this construction, we want to consider a general probability space rather than the finite probability space considered in the prior case study. In doing so, we consider the LBAMM based on the scaled a...
C
$\hat{\rho}_{g}(Y)=\int_{0}^{1}\hat{q}_{Y}(1-u)\,dg(u).$
The estimation of DRMs according to eq. (5) requires the estimation of quantiles at the levels $1-u$ for $u=\alpha_{0},\dots,\alpha_{m}$. We discuss...
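A minimal sketch of the plug-in estimator in the display above: empirical quantiles at the levels 1-u are combined with increments of the distortion function g over a grid of levels. The grid and the Expected Shortfall-type choice of g are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: plug-in estimate of a distortion risk measure
# rho_g(Y) = \int_0^1 q_Y(1 - u) dg(u), discretised on a grid of levels.
import numpy as np

def drm_estimate(sample, g, levels):
    """sample: i.i.d. draws of the loss Y; g: distortion function on [0, 1];
    levels: increasing grid alpha_0 < ... < alpha_m in [0, 1]."""
    levels = np.asarray(levels)
    # Empirical quantiles at the levels 1 - u (upper tail of the loss).
    q = np.quantile(sample, 1.0 - levels[:-1])
    # Riemann-Stieltjes sum: sum_i q_Y(1 - alpha_i) * (g(alpha_{i+1}) - g(alpha_i)).
    dg = np.diff(g(levels))
    return float(np.dot(q, dg))

# Example: Expected Shortfall at level 97.5% corresponds to g(u) = min(u / 0.025, 1).
g_es = lambda u: np.minimum(u / 0.025, 1.0)
rng = np.random.default_rng(2)
losses = rng.standard_normal(100_000)
print(drm_estimate(losses, g_es, np.linspace(0.0, 1.0, 2001)))
```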
Two questions have to be considered: First, how should the available N𝑁Nitalic_N samples be allocated to each quantile at the different levels? Second, should individual importance sampling be used for each quantile, or should a single common measure change for pooled samples be preferred?
For the importance sampling quantile estimator (2) we need to evaluate the likelihood ratio, which requires knowledge of the normalizing constants. To be more precise, we have
The discussed algorithm consists of two simulation steps. First, pivot samples are drawn that are used for both the choice of the ML approximation and the determination of an IS measure change. Second, samples are generated under the IS distribution and used for the estimation of the DRMs. However, if DRMs are consider...
B
A primary criticism of the Borda count is its susceptibility to majority winner failures, but note how rare these failures actually are. BCU and MBC's 1.3% correspond to just 3 elections with majority winner failures, with ABC exhibiting this failure only once. Meanwhile, EBC and QBC never have this failure, ...
When comparing the manipulation failure rates, the BCU percentage for truncation failures illustrates the obvious incentive voters have with this method to submit ballots that only include their top choice, known as bullet voting. Nearly half of the elections using this variation are vulnerable to truncation failures, ...
Truncation Failure: A truncation failure occurs if there is a set of voters who could rank fewer candidates on their ballots and result in their favored candidate becoming the winner. Note this is usually called a truncation paradox in the literature, but we choose to distinguish it from the antidemocratic paradoxes si...
The two primary approaches for handling partial or truncated ballots are the Borda count untreated (BCU) and the modified Borda count (MBC). If a ballot only ranks k of the n candidates, MBC awards (k, k-1, …, 2, 1, 0, …, 0)...
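A small sketch of how a truncated ballot can be scored, with hypothetical candidate names: MBC awards (k, k-1, ..., 1) to a ballot ranking k of n candidates, as described above, while the BCU rule is not spelled out in this excerpt, so the sketch assumes one common reading (full-ballot weights n-1, n-2, ... for the ranked candidates, 0 for the rest).

```python
# Hedged sketch: Borda scores for truncated ballots under MBC (as described)
# and under one assumed reading of BCU (full weights n-1, n-2, ... for the
# ranked candidates only).
from collections import defaultdict

def mbc_points(ballot, n):
    """ballot: list of ranked candidates (length k <= n); MBC awards k, k-1, ..., 1."""
    k = len(ballot)
    return {cand: k - i for i, cand in enumerate(ballot)}

def bcu_points(ballot, n):
    """Assumed BCU rule: n-1, n-2, ... for the ranked candidates only."""
    return {cand: n - 1 - i for i, cand in enumerate(ballot)}

def tally(ballots, n, points_fn):
    totals = defaultdict(int)
    for ballot in ballots:
        for cand, pts in points_fn(ballot, n).items():
            totals[cand] += pts
    return dict(totals)

# Example with 4 candidates; the second voter bullet-votes for "c".
ballots = [["a", "b", "c"], ["c"], ["b", "a", "c", "d"]]
print(tally(ballots, 4, mbc_points))   # bullet vote earns only 1 point for "c"
print(tally(ballots, 4, bcu_points))   # bullet vote earns the full n-1 points
```

The example illustrates the bullet-voting incentive mentioned for BCU: a truncated ballot gives its top choice the same weight as a complete ballot would, while MBC scales the weight down with the number of candidates ranked.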
Observe that this method assigns the same total of points for every ballot, whether complete or partial. This also reduces, albeit not completely eliminating, the bullet voting incentive of BCU. A voter in an ABC election who truncates their ballot is making a net neutral move, as their highest ranked candidate that is...
A
Assess the feasibility of identifying the best-performing policy among three options based on the outcomes of two prior tests.
Off-policy learning goes beyond mere evaluation by leveraging OPE results to iteratively refine and enhance policies. The primary objective of the new policy learning process is to identify a counterfactually optimal policy that maximizes expected rewards within the given environment for the past logged data. A key adv...
Off-Policy Learning: Off-policy learning is the approach where an agent learns about an optimal policy based on the off-policy evaluator. Due to the dynamic nature of the policies and the complex underlying probability associated with them, we learn proxy payment models for simulation. We elaborate more about the proce...
In this study, we have demonstrated the effectiveness of off-policy evaluation (OPE) techniques in refining payment strategies by leveraging counterfactual scenarios. Our results underscore the advantages of continuous OPE methods over traditional discretized approaches, providing more precise and nuanced insights into...
Learning optimal policies through optimization of policy estimators and evaluating their effectiveness via simulation.
D
Concrete plans have been outlined globally to achieve the Sustainable Development Goals (SDGs) set under Agenda 2030, and India is following suit. Although there are various composite indices to measure SDG attainment worldwide, there is a lack of state-level analysis in India, necessitating a complex approach to evalu...
The data used for this study were obtained from the SDG Dashboard provided by NITI Aayog. NITI Aayog, as the nodal agency for SDGs in India, has released SDG reports for all Indian states and Union Territories for the years 2018, 2019, 2020-21, and 2023-24. Each of the reports comes with a data frame that consists o...
In response to this need, NITI Aayog created the SDG India Index, marking the first government-led effort to track SDG progress at the subnational level. Designed to map the progress of each state and union territory, this index not only measures advancement but also fosters cooperative and competitive federalism to en...
The ranks for the state/UTs as well as the weights of the SDGs are calculated from the centralities of the weighted bipartite network. Using a geographical plot, the ranks of the states are visualized in Figure 2a. The greener colors represent a higher rank for a state. A visual representation of the bi-partite network...
The study finds that SDG 9—focused on Industry, Innovation, and Infrastructure—ranks the highest across Indian states, signaling India’s emphasis on enhancing industry partnerships, improving business infrastructure, and fostering a culture of innovation nationwide. Achieving this goal significantly benefits the nation...
D
From the graphs, it can be seen that the “gap” between model price and market price has narrowed: the average RMSE decreases from 3.14 to 2.96 when “Multiple Regression” is applied, which indicates that it is useful for practical pricing.
This chapter presents the performance of the pricing algorithm by comparing the actual prices with the model prices of 10 CCBs with 3A credit rating in the first half of 2023. The experimental results confirm the effectiveness of “Multiple Regression” in Chapter 2. Then we use the predicted price as a determining factor fo...
We select the commonly used “Double Low Strategy” as the baseline to observe the backtesting performance. It is evident that after implementing “Multiple Regression”, the backtesting returns of the Least Squares factor significantly improve.
In recent years, numerous significant articles have tackled the pricing of American-style options using a combination of Monte Carlo simulation and dynamic programming, such as Longstaff and Schwartz (2001). Their central premise is the proposal of a Least Squares framework to regress the continuation value of America...
In this part, we use historical data to backtest and validate the predictive ability of our model price for CCBs. We use the model prices as a determining factor for backtesting and observe their returns: assume
D
$f_{X}(x)=\frac{96}{35}\,\frac{10^{3}}{(x+10)^{4}}\,\mathbf{1}_{\{x\in(0,10)\}}.$
and thus its value at time 1, S, is random. In that regard, one can also call S the seller's random reserve (excluding premium), and the difference S - r captures the seller's additive background risk. But for simplicity, we call S the seller's backgro...
We also assume that there is no background risk; then S equals the seller's initial reserve r > 0.
In the special case without the seller's background risk, we have S ≡ r > 0, the initial reserve; recall that S ≤ 0 is already analyzed in Proposition 2.1. In this case, we can further remove the conditions on S in Assumption 1.
D depends not only on the buyer's loss X and contract I, but also on the seller's background risk S.
B
(4.6·10^-6, 9.712, 20.249)
The explicit formulas (3.13) and (3.16) are readily derived respectively from (3.7) and (1.22), while using the definition (1.17) and the separability property of the L-S-C factors (3.11).
where, for $p,k\in\{1,\ldots,N+N_{c}\}$, the weights in the squares are respectively identified from (3.13) and (3.16) such that
Figure 12: Plots of $(\sigma_{i})_{i}$ against $(\tau_{i})_{i}$ ...
(0.648, -0.516, 0.16, -0.148, 0.541)
D
A breakout signal is triggered if the price evolution of a threshold breaches a support or resistance line.
The Delta Engine is unique in that it exclusively trades on contrarian breakout signals. When an initial signal occurs, the algorithm executes a long or short trade of size u. Now, only a breakout signal identifying a reversal of the initial trend will generate an offsetting trade of size 2u....
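A toy sketch of the position logic described above: an initial contrarian breakout opens a trade of size u, and only a breakout signalling a reversal of the current trend triggers an offsetting trade of size 2u, closing the old exposure and opening the opposite one. Signal generation itself is not reproduced here.

```python
# Hedged sketch of the Delta Engine position logic: open size u on the first
# contrarian breakout, then flip with an offsetting trade of size 2u whenever
# a breakout signals a reversal of the current trend.
def run_engine(signals, u=1.0):
    """signals: sequence of +1 (long breakout) / -1 (short breakout) events."""
    position = 0.0
    trades = []
    for s in signals:
        if position == 0.0:
            trades.append(s * u)              # initial trade of size u
        elif s * position < 0:                # reversal of the current trend
            trades.append(s * 2 * u)          # offsetting trade of size 2u
        else:
            continue                          # same-direction signals are ignored
        position += trades[-1]
    return position, trades

print(run_engine([+1, +1, -1, -1, +1]))       # final position and trade list
```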
A feedback loop is realized by accounting for the market activity. Specifically, the number of directional changes scaling law acts as a volatility measure reflecting the current market state. Calibrating to this rhythm, agents can fall out of sync and be temporarily silenced, further enforcing self-organized decision ...
On these transformed data landscapes, the trading model builds the framework for adaptive decision-making: For each threshold and various look-back sizes, resistance and support lines are fitted to the upward and downward overshoot events, respectively.
These multi-scale signals are aggregated and trigger trades only if one of the thresholds detects a contrarian pattern, enforcing emergent collective decision-making.
D
Moreover, we also employ the Diebold-Mariano (DM) test (Diebold and Mariano, 2002) to compare the forecasting accuracy between models at the 5% significance level.
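A compact sketch of a Diebold-Mariano comparison of two forecast error series under squared-error loss, using a simple normal approximation with horizon h = 1 and no autocorrelation correction; the paper's exact implementation may differ.

```python
# Hedged sketch: Diebold-Mariano test for equal predictive accuracy of two
# forecasts, using squared-error loss and a simple h = 1 variance estimate.
import numpy as np
from scipy import stats

def dm_test(errors_a, errors_b):
    """errors_a, errors_b: forecast errors of the two competing models."""
    d = np.asarray(errors_a) ** 2 - np.asarray(errors_b) ** 2   # loss differential
    n = len(d)
    dm_stat = d.mean() / np.sqrt(d.var(ddof=1) / n)
    p_value = 2 * stats.norm.sf(abs(dm_stat))
    return dm_stat, p_value

# Example: model B is (artificially) more accurate than model A.
rng = np.random.default_rng(3)
e_b = rng.standard_normal(250)
e_a = e_b + 0.3 * rng.standard_normal(250)
print(dm_test(e_a, e_b))
```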
The DM test statistics are provided in Tabs. 3 and 4 for the validation and the test set, respectively. The results of the DM test suggest that the gain in forecasting accuracy provided by SpotV2Net is statistically significant.
Tab. 5 summarizes the multi-step forecasting performance of the models employed in our study, in the validation and test phases. In the validation period, the ranking is the same as for single-step forecasts, both in terms of aggregate values of average MSE and QLIKE. Instead, in the test period, the HAR-Spot model has...
The results of the implementation of GNNExplainer are summarized in Fig. 4 using two heatmaps, one for the validation set and one for the test set, which illustrate the frequency (in percentage terms) of the inclusion of nodes into the subgraphs of the most influential nodes for predicting the spot volatility of a give...
Moreover, the DM test statistics are reported in Tabs. 6 and 7. The results suggest that the gain in forecasting accuracy provided by SpotV2Net is statistically significant at the 5% level also for multi-step forecasts.
A
The result of Theorem 2.3 is a generalization of the result presented in [22] for the rough Heston model to general kernels $\mathcal{K}$, in the case of a constant interest rate.
There are four types of geometric Asian option contracts: fixed-strike Asian call options, fixed-strike Asian put options, floating-strike Asian call options, and floating-strike Asian put options.
For such options, no explicit closed-form solution is known in the Heston model (cf. [26]), which is already the case for the Black-Scholes model. There is a vast literature on the numerical valuation of such arithmetic options in the Heston model (e.g., [29], [14], [8]), and therefore we focus on finding explicit form...
Asian options are a type of option where the payoff depends on the mean of the underlying asset over a certain period of time. We distinguish between two different types of such options, namely arithmetic and geometric Asian options. Arithmetic Asian options are options where the payoff depends on the arithmetic mean
Concerning option pricing in general Volterra models only a few explicit results exist. The seminal paper [6] shows how the rough fractional stochastic volatility (RFSV) model by [16] can be used to price claims on both the underlying and integrated variance, however, no explicit call price formula is given yet. This i...
C
In this section we verify that the candidate liquidation strategy (2.18) is a best response against a given aggregate trading rate μ, taking into account an individual player's impact on aggregate trading. In the MFG the impact is zero and hence $\xi^{*,\delta,x,\mu}$ ...
Both the N-player and the MFG admit a Nash equilibrium such that the aggregate equilibrium trading rate does not change its sign.
Furthermore, in the N-player game, the strategy is a best response against an equilibrium aggregate trading rate.
In this section we verify that the candidate liquidation strategy (2.18) is a best response against a given aggregate trading rate μ, taking into account an individual player's impact on aggregate trading. In the MFG the impact is zero and hence $\xi^{*,\delta,x,\mu}$ ...
The case $\delta=\frac{1}{N}$ corresponds to the forward-backward system associated with an individual player's optimization problem in the N-player game where the average trading rate $\bar{\xi}^{N}$ ...
B
Furthermore, we introduce a novel approach by applying previously unexplored measures within the Parrondian framework. This investigation aims at elucidating the influence of structural attributes inherent to switching protocols on the manifestation of the Parrondo’s effect. Specifically, we will examine the impact of ...
The remainder of the paper is organised as follows: in Sec. II, we describe the milestones in the study of Parrondo's effect, the importance of aperiodic series in science, and to what extent our work compares with previous research. In Sec. III, we introduce the methodology we employ to generate our Parrondo's games. I...
As a matter of fact, we expect that the more anticorrelated the outcome of the switching protocol is with that of each isolated game, the better the former performs. This observation is best understood when we combine the cross-correlation in Fig. 4 with the capital gain results presented in Fig. 5. All three aperiodic swi...
Still with the goal of understanding the connection between the performance of a switching protocol and the properties of its sequencing scheme, we have introduced a structural plane composed of measurements of lacunarity and persistence, analogous to the complexity-entropy plane used to classify data series (Rosso et al...
In order to shed further light on the relation between the features of the Parrondian game and its effectiveness, we have carried out an analysis of the cross-correlation between the capital of a switching protocol and that of its underlying games. For our aperiodic protocols, we have found the cross-correlations betw...
A
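A minimal simulation sketch of the canonical Parrondo setup referenced in the row above (the game probabilities, the bias EPS, and the periodic "AABB" switching pattern are the textbook choices, not the aperiodic protocols studied in the excerpt):

import random

EPS = 0.005  # small losing bias; textbook value, assumed here for illustration

def play_A(capital):
    # Game A: biased coin flip, losing on its own
    return capital + (1 if random.random() < 0.5 - EPS else -1)

def play_B(capital):
    # Game B: branch on capital modulo 3, also losing on its own
    p = 0.1 - EPS if capital % 3 == 0 else 0.75 - EPS
    return capital + (1 if random.random() < p else -1)

def average_gain(sequence, steps=200_000, seed=1):
    # Average capital gain per step under a periodic switching sequence such as "AABB"
    random.seed(seed)
    capital = 0
    for t in range(steps):
        capital = play_A(capital) if sequence[t % len(sequence)] == "A" else play_B(capital)
    return capital / steps

for seq in ("A", "B", "AABB"):
    print(seq, round(average_gain(seq), 4))

Played alone, both games drift downward; suitable alternation can reverse the drift, which is the Parrondo effect that the excerpt quantifies through cross-correlations.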
Financial markets are one of the most crucial components of the global economy, with billions of dollars traded daily. Accurate predictions of stock market behavior can yield substantial financial gains for investors and institutions. However, the behavior of markets is inherently complex and difficult to forecast [1, ...
Recent advancements in deep learning have led to the application of various models for stock market prediction. Multi-layer perceptron (MLP)-based models capture complex, non-linear relationships in financial data [3, 4]. Convolutional neural networks (CNNs), using 2D inputs of daily features or 3D inputs that include ...
The introduction of the self-attention mechanism [8] has further advanced stock market prediction by allowing models to capture global dependencies. Building on this, transformers have shown remarkable performance in sequence prediction tasks due to their ability to capture complex dependencies across different time st...
Building on the strengths of Mamba and GNNs, we introduce SAMBA, a novel model for stock market prediction that effectively handles complex sequential data. SAMBA consists of two main components: a Bidirectional Mamba (BI-Mamba) block, which captures long-term dependencies in historical price data, and an adaptive graph ...
The complex nature of stock markets makes predicting stock returns challenging. While transformer-based models have shown promising results, they require high computational resources. We introduce SAMBA, a computationally more efficient but still accurate solution for stock return prediction. SAMBA leverages the select...
A
The rich class of GTS distributions (1) has a myriad of applications, ranging from finance to mathematical physics and economic systems. However, few studies [10, 11, 12] have covered the methods and techniques to estimate the parameters of the GTS distribution. This study aims to contribute to the literature by provid...
The likelihood ratio test in Table 6 shows that, even with non-statistically significant parameters, the GTS distribution fits significantly better than the Bilateral Gamma distribution for both the S&P 500 and SPY ETF indexes. Contrary to the AIC statistics, the BIC statistics do not provide the same information. A compre...
The study provides a methodology for fitting the rich class of the seven-parameter GTS distribution to financial data. Four historical price series were considered in applying the methodology: two heavy-tailed data sets (Bitcoin and Ethereum returns) and two peaked data sets (S&P 500 and SPY ETF returns). In the study, each histo...
The rest of the paper is organized as follows: Section 2 provides some theoretical framework of the GTS distribution. Section 3 presents the multivariate maximum likelihood (ML) method and the analytic version of the two-parameter normal distribution. Section 4 presents the results of the GTS parameter estimations alon...
The rich class of GTS distributions (1) has a myriad of applications, ranging from finance to mathematical physics and economic systems. However, few studies [10, 11, 12] have covered the methods and techniques to estimate the parameters of the GTS distribution. This study aims to contribute to the literature by provid...
C
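A small sketch of the likelihood-ratio comparison mentioned in the row above (the log-likelihood values and the degrees-of-freedom difference below are placeholders, not the fitted values reported in the study):

from scipy.stats import chi2

def likelihood_ratio_test(ll_full, ll_nested, df_diff):
    # LR statistic 2*(ll_full - ll_nested), compared against a chi-square law
    lr = 2.0 * (ll_full - ll_nested)
    return lr, chi2.sf(lr, df_diff)

# Hypothetical log-likelihoods: GTS (full model) vs Bilateral Gamma (nested special case)
lr, p_value = likelihood_ratio_test(ll_full=-1520.3, ll_nested=-1534.8, df_diff=3)
print(f"LR = {lr:.2f}, p-value = {p_value:.4f}")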
(Han & E, 2016; Han et al., 2018; Beck et al., 2019; Buehler et al., 2019; Becker et al., 2019; Zhang & Zhou, 2019; Reppen et al., 2023; Reppen & Soner, 2023).
the investment and consumption controls as neural networks, π_θ(t, X_t)
However, to the best of our knowledge, no existing deep-BSDE or physics-informed neural networks (PINNs)
Two neural networks C_ϕ and π_θ approximate the
However, the neural-network-based policies π_θ and C_ϕ
B
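A minimal PyTorch-style sketch of parametrizing the two controls as feed-forward networks in (t, X_t), as described in the row above (layer widths, activations, and the softplus positivity constraint on consumption are illustrative assumptions, not the authors' architecture):

import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    # Maps (t, X_t) to a scalar control; reused for both pi_theta and C_phi
    def __init__(self, hidden=64, positive=False):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.positive = positive  # consumption rates are typically constrained to be nonnegative

    def forward(self, t, x):
        out = self.net(torch.stack([t, x], dim=-1))
        return nn.functional.softplus(out) if self.positive else out

pi_theta = PolicyNet()             # investment control, unconstrained
c_phi = PolicyNet(positive=True)   # consumption control, kept positive

t = torch.zeros(8)                 # batch of times
x = torch.ones(8)                  # batch of wealth values
print(pi_theta(t, x).shape, c_phi(t, x).shape)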
With a test statistic of 1.05, well below the critical value of z = 1.96, we fail to reject the null hypothesis, suggesting that a Lévy-driven CAR(1) model could be a good fit for this spread dynamic. This is further supported by the autocorrelation function, which remains well below the dashed...
We follow the methodology detailed in Section 3. In Step 1 (see Subsection 3.1), we select our estimator for a. Since the price spread can take on negative values, we use the LSB estimator to estimate a. In Step 2 (see Subsection 3.2), we determine the values for N and M. The...
Under the null hypothesis H_0, we expect the estimated p-value for the test to be around the nominal level (α = 0.05). In Table 8 below, we use Procedure 1 to test whether the driving process is Brownian motion, with varying v...
Following Step 5 (see Subsection 3.5), we applied our test to assess whether the driving process of these spreads was Brownian motion. Using Procedure 1, we found that for the spread dynamic of ABT and DHR we failed to reject the normality assumption on the recovered increments, suggesting that the increments are consistent w...
We observe varying and sometimes inconclusive results from our test. For example, when measuring the price spread between Abbott Laboratories (ABT) and Danaher Corporation (DHR), we fail to reject the null hypothesis of uncorrelated increments, with a test statistic of 1.05. Similarly, the spread between Apple (AAPL) a...
C
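A sketch of the two-sided decision rule applied in the row above (the statistic 1.05 and the 5% level come from the excerpt; the critical value is computed from the standard normal quantile rather than hard-coded):

from scipy.stats import norm

def two_sided_z_decision(z_stat, alpha=0.05):
    # Fail to reject H0 when |z| is below the two-sided standard normal critical value
    z_crit = norm.ppf(1.0 - alpha / 2.0)   # approximately 1.96 for alpha = 0.05
    return abs(z_stat) < z_crit, z_crit

fail_to_reject, z_crit = two_sided_z_decision(1.05)
print(f"critical value = {z_crit:.2f}, fail to reject H0: {fail_to_reject}")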
Table 1: Test MSE and MAE comparison for different models on various products (Task 1, part 1).
In this study, MAE and MSE are employed to evaluate the predictive performance of MLP, GNN, and GCN models, providing complementary insights into their strengths and weaknesses.
Both MAE and MSE are indispensable tools, with their applicability depending on the specific requirements of the task. MAE is robust to outliers, offering a balanced perspective on overall accuracy, while MSE emphasizes substantial errors, favoring models with fewer extreme deviations.
Table 2: Test MSE and MAE comparison for different models on various products (Task 1, part 2).
Table 1: Test MSE and MAE comparison for different models on various products (Task 1, part 1).
C
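For reference, the two error measures compared in the tables of the row above, in their usual element-wise definitions (sketch; the toy numbers are hypothetical):

import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: robust to outliers
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mse(y_true, y_pred):
    # Mean squared error: penalizes large deviations more heavily
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

y_true = [1.0, 2.0, 3.0]
y_pred = [1.1, 1.8, 3.5]
print(mae(y_true, y_pred), mse(y_true, y_pred))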
Our model achieves randomized exploration by perturbing a deterministic control ū_t, which will be determined by optimization, with a random perturbation v_t ∈ ℝ^{d_u}...
A motivation for letting the covariance matrix Ξ depend on time is to have the possibility of reducing the breadth of exploration over time, consistent with the reinforcement learning literature. Exploration is more valuable in early times when little information is available. As more information becomes a...
We have seen that a LEQG problem with a randomized control emulating exploration can, via the energy-entropy duality, be reduced to a risk-neutral LQG problem with an additive penalization. This LQG problem takes the form of a stochastic game where one player is the original controller of the LEQG problem. The other tw...
In this paper, we consider a discrete-time risk-sensitive control problem of the LEQG type, drawing inspiration from the results of Wang et al. (2020). Specifically, we introduce randomized controls to emulate exploration. We generate these randomized controls by perturbing a deterministic control with an additive Gaus...
i.e., the more we penalize the control and the larger we take the risk-sensitivity parameter θ, the less dispersed we have to choose the randomization that provides exploration. The bound also depends on the computed value of P_{t+1}...
A
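A compact restatement of the exploration mechanism described in the row above (the Gaussian form of the perturbation is stated in the excerpt; the monotone shrinkage of Ξ_t over time is the typical choice suggested there, written here in generic form):

\[
u_t = \bar{u}_t + v_t, \qquad v_t \sim \mathcal{N}(0,\,\Xi_t), \quad v_t \in \mathbb{R}^{d_u},
\]

with Ξ_t typically chosen nonincreasing in t, so that exploration narrows as information accumulates.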
In this section, we study explosion of the volatility process v_t in (12).
The natural scale function p(x) (left) and the volatility in the natural scale σ̄(y) (right) for the SABR model with parameters as discussed in the text. The scale function p(x) approaches p_∞ = 1.5614...
A sufficient condition for the absence of explosions is expressed in terms of the scale function as p_∞ = ∞ [26].
By point (i) in Proposition 2.1, the x → ∞ limit of the scale function is finite, so explosions cannot be excluded. However, the finiteness of p_∞ is only a necessary, not a sufficient, criterion for the existence of explosions.
The large-x limit of the scale function is p_∞ = 1.5614.
B
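For completeness, the standard scale-function definition behind the explosion criterion cited in the row above (written for a generic one-dimensional diffusion dv_t = μ(v_t) dt + σ(v_t) dW_t; the specific coefficients of model (12) are not reproduced here):

\[
p(x) = \int_{x_0}^{x} \exp\!\Big(-\int_{x_0}^{y} \frac{2\,\mu(z)}{\sigma^{2}(z)}\,dz\Big)\,dy,
\qquad p_{\infty} = \lim_{x\to\infty} p(x),
\]

so that p_∞ = ∞ rules out explosion to +∞, while a finite p_∞ is necessary but not sufficient for explosion, as noted in the excerpt.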