context | A | B | C | D | label
---|---|---|---|---|---
In our numerical experiments, the test of another sampling (Algorithm 3) is actually unnecessary. Is it possible to show this analytically as well? | In the case that the condition in Eq. (16) is indeed satisfied, we can find ν by a brute-force search. Let 𝒱 be a trial set of the grid shift parameters. For each ν ∈ 𝒱 (throughout this paper, we require that \|ν\| ≤ 1/2... | which we call the optimal grid decomposition of y^0. | It is possible to find the optimal grid shift parameter by optimization instead of doing a brute-force search in a trial set, which should significantly improve our current result. | An overview of our algorithm is described as follows. For signal vectors of size N, when the frequencies are all nearly on-grid (f ≈ n/N, n ∈ ℤ) and the noise for each sample is bounded by a constant, th... | C
`expensive` on the channel c_1. | Hopefully, the process P will choose `expensive` as well so that the cost | However, the process P may fail to choose `expensive` on the channel | `expensive` on the channel c_1. | If the process P chooses `expensive`, 3 units of potential are sent as | A
The GFlops calculations encompass tensor contractions, QR factorization, and low-rank approximations, as outlined in the model detailed in Section 2.2. | In (a)(b), the input networks are MPSs with different ranks. In (c)(d), the inputs are balanced binary tree (BBT) tensor networks with different ranks. | It is worth noting that in our reported results, the execution time excludes the graph analysis part, which involves graph embedding and computing the contraction sequence of given tensor networks. This part remains independent of the tensor network ranks and is negligible when the ranks are high. | It is worth noting that Algorithm 5 may produce an embedding in which there exists a vertex in the embedding tree whose corresponding tensor network partition is empty. In such cases, we can address this problem by introducing identity matrices into the input graph. This adjustment ensures that the resulting tensor net... | The selection of the embedding tree is guided by an analysis of the structure of the input tensor network graph G, its partitioning, and the contraction path. This analysis aims to identify a tree structure that optimizes the efficiency of both the current contraction and any subsequent contractions involvin... | B
Additionally, the best-approximated isotropic displacement using (3.1) or (3.2) is unknown as it naturally depends on the very parameters μ^iso, κ^iso... | These values are computed by minimizing the norm of the displacement ‖u(r)‖ (norm) and the full displacement field u(x_1, x_2)... | 3.4 Fitting only against the norm ‖u(r)‖ of the displacement | We present two different procedures for finding the best approximated isotropic elasticity tensor with quadratic error minimization using Mathematica. In the first one, we only consider the radially averaged norm of the displacement ‖u(r)‖. In the second proced... | 4.2 The best approximating ℂ_iso for the norm of the displacement ‖u(r)‖ | B
Compared to the U-Net and its variants, our method enhances the DSC by 1.74% over the second-best SwinUNETR. When compared with other multimodal medical image segmentation models, our method enhances DSC by 1.29% over the second-best SegResNet. Moreover, Diff4MMLiTS significantly reduces false positives and false nega... | In our framework, each stage is trained independently, with the output of one stage serving as the input for the subsequent one. This sequential training method ensures that each component is optimized for its specific role before integration into the overall framework. To comprehensively validate the importance of eac... | As illustrated in Table IV, we further evaluate the performance of the synthesis strategy on multimodal and unimodal segmentation methods. In all quantity settings, we employ nnUNet as the segmentation model architecture. The performance of unimodal segmentation models typically relies heavily on the quantity and diversi... | To further evaluate the adaptability of Diff4MMLiTS, we use three backbone models in the MS module, namely U-Net, AttentionUNet, and nnUNet, with results presented in Table III. The findings indicate that our framework adapts seamlessly to all backbones, achieving notable performance improvements. Compared to segmentat... | We evaluate the results of our proposed method on publicly available external datasets to verify that the model trained with Diff4MMLiTS can effectively generalize to out-of-distribution data without the need for retraining on the new dataset. All methods are trained on mmLiTs and tested on lesion samples selected from... | D
Meanwhile, our fully explicit and unified representation supports highly efficient rendering, achieving superior efficiency over all competitors except 3DGS. | As shown in Figure 9, 4DGS works well under diverse lighting and weather conditions. It faithfully reconstructs high-frequency texture details and correctly models the geometry for both dynamic and static regions. | We propose a generic scene representation, 4D Gaussian splatting (4DGS), for modeling dynamic scenes, as shown in Figure 2. | The quality of synthesis in dynamic regions notably excels when compared to other methods. Several intricate details, including the black bars on the flame gun, the fine features of the right-hand fingers, and the texture of the salmon, are faithfully reconstructed, demonstrating the strength of our approach. | These representations hold the topological invariance and low-frequency motion prior, thus well-suited for reconstructing dynamic scenes from monocular videos. | A |
In this section, we propose a counterfactual contrastive learning method based on counterfactual passage extraction to improve the robustness and relevance sensitivity of dense retrieval models. | Ideally, a perfect retrieval model should be able to not only estimate the relevance between documents and queries, but also capture the key passages of a document that determine its relevance to each query. | Having high relevance sensitivity means that a dense retrieval model could easily distinguish not only positive documents from negative ones, but also counterfactual documents, which modify the key passages of positive documents, from other documents. | The assumptions on the relative preferences between positive documents, negative documents, and counterfactual documents in terms of relevance are depicted in Figure 1. | Different from traditional hard negative mining techniques, the introduction of our counterfactual documents focuses on modifications of the positive documents, and is thus more effective in improving the relevance sensitivity of dense retrieval models rather than their overall retrieval performance. | B
The MSRVTT-Caption [12] in the video captioning task is the same as the MSRVTT dataset in the text-video retrieval task. | The parameter α serves as the hyper-parameter that balances the cross-modality contrastive loss (ℒ_C) and the Banzhaf Interaction loss (ℒ_I)... | Evaluation Metrics. We choose Recall at rank K (R@K), Median Rank (MdR), and mean rank (MnR) to evaluate the retrieval performance. We select the answer accuracy to evaluate the video-question answering performance. We apply four metrics for the video caption task, including BLEU-4 [70], ROUGE-L [71], METEOR [72], a... | In text-to-video retrieval, given a text query alongside a gallery of videos, the objective is to rank all videos so that the video corresponding to the text query is ranked as high as possible. Similarly, in video-to-text retrieval, the goal is to rank all text candidates based on the video query. In our HBI V2 framew... | Ablation about Components. To illustrate the importance of each part of our method including the Banzhaf Interaction, the deep supervision structure, the self-distillation, and the representation reconstruction, we conduct ablation experiments on both MSRVTT and MSRVTT-QA datasets in Table V. The Banzhaf Interaction... | B
Input: Snapshots 𝒮 = [u^1, …, u^J] with u^i ≈ u(t_i). | Φ^† denotes the Moore–Penrose pseudoinverse of the DMD modes Φ^DMD. | Output: DMD modes Φ^DMD. | Output: DMD modes Φ^DMD. | 6: Obtain Φ^DMD = 𝒮^1 Σ^{−1} V W. | B
2 if h ≤ m^{1/8} then | Let G be an m×m grid digraph and H be an induced subgraph of Aux_α(G) with h vertices. For every β > 0, AuxReach runs in Õ(h^{1/2+β/2})... | Input: An induced subgraph H of Aux_α(G) and two vertices x and y in H (let G be an m×m grid digraph and h = \|V(H)\|... | 3 /* m is a global variable where G is an m×m grid digraph */ | the points of an m×m grid. The edges can only occur between a vertex and its immediate vertical | C
In fact, without this nonlocality, any CA-based discussion would likely have been rendered meaningless. | In the absence of quantum effects, a CA encoding the information of the stretched horizon would be mapped identically onto the conformal boundary. | In this scheme, the evolution law Z—which transfers information from the CA at the stretched horizon to the boundary—encodes the resulting displacement as a permutation of states. | Let us return to the issue of information on the stretched horizon being transmitted to the conformal boundary. | In the context of the black hole paradox, the piece of information inscribed on the horizon can be considered to be the encrypted information of the conformal boundary [7]. | A
Recall that ξ^{η̂}(x) and ξ^{η̂_τ}(x)... | In the formulations of our theoretical results, we will use the following assumptions. Let m be a positive integer (where our results will require m ≥ 4). | As can be seen below, our proposed algorithms target the ridge of the ridgeness function, and we will see below (see Lemma 5) that the ridge of the ridgeness function essentially equals the original ridge of f. | This important section can be interpreted as providing population level versions of our main convergence results for the proposed algorithms presented above. Indeed, the algorithms can be interpreted as ‘perturbed versions’ of corresponding population level versions. We will discuss the precise meaning of this in what ... | The remaining part of the paper is organized as follows. In Section 2 we introduce the formal definition of ridges. This is followed by our extraction algorithms, whose performance is illustrated using some numerical studies in ℝ^2. The m... | A
φ(θ \| θ′) = 𝒩(θ \| θ′, 3² I_2) | The surrogate is built with k-nearest neighbor (kNN) regression using K ∈ {1, 10, 100} neighbors. | We compare two MH-S algorithms and one DA-PM-MH algorithm using again a nearest neighbor surrogate, with K = 100. The budget is E = 10^5 evaluations. | with B = 4, η_0 = 4 and η_i = 3.5 for i = 1, …, 2, where Θ = [−10, 10] × [−10, 10... | Figures (a)-(b)-(c) show noisy realizations of the ABC likelihood with bandwidth ϵ = 0.1 for M ∈ {1, 10, 100}, respectively. We also plot the 0.1 and 0.9 quantiles of the noisy realizations. Figure (d) shows the true posterior distributio... | A
Ω(f(n)), Θ(f(n)). Furthermore, for a constant c > 0, we write 𝒪_c(f(n))... | To keep track of the progress of the dynamics towards consensus, we describe the dynamics via the bias at time t, denoted by s_t, which represents the difference between the sizes of the majority and minority opinion communities at time ... | We first compute the expectation of the bias at time t, conditional on its value at time t−1. | The transition probabilities are characterized iteratively by the majority update rule as follows: given any time t ≥ 0, let M_t ∈ Σ^n be the s... | An opinion dynamics is a synchronous distributed algorithm characterized by a very simple structure. In this structure, the state of a node at round t depends only on its own state and a symmetric function of the multiset of states of its neighbors at round t−1. | B
By embedding the distribution of the short term experimental data using kernels, we derive interpretable weights for extrapolating long term effects from short term effects. | Our research question is how to extrapolate long term effects of continuous actions, allowing nonlinearity and heterogeneity in the link between the short term and the long term. | The final estimator has a simple closed form solution, while preserving nonlinearity and heterogeneity in the link between the short term and long term. | The long term regression γ_0^obs(s, x) allows for nonlinearity and heterogeneity in the link | The short term kernel mean embedding μ_s^exp(d, x) allows for nonlinearity and heterogeneity in the counterfactual distribution of short te... | B
C^Uncoded = [I_M \| 0]^T | where the first term on the right-hand side calculates the average number of learners used for training each agent, and o_c ≥ 0. Using the above metric, the computation overhead of each assignment scheme can be derived as follows: | This paper introduced DARL1N, a scalable MARL algorithm that can be trained over a distributed computing architecture. DARL1N reduces the representation complexity of the value and policy functions of each agent in a MARL problem by disregarding the influence of other agents that are not within one hop of a proximity gr... | The coded schemes mitigate the impact of stragglers by assigning each agent to multiple learners. The training performed by the extra learners is redundant. To quantify the computation overhead introduced by this redundancy, we use the following metric: | Coded distributed training assigns each agent to multiple learners. Here, we investigate five codes, where the encoding matrices can be directly utilized as the assignment matrix. | D
Quite importantly, their result is only applicable to SAA while our reduction applies to any policy in the wide range of sample-size-agnostic policies. In particular, this will be critical to derive policies which achieve minimax optimal asymptotic regret rates when SAA fails and is not (rate) optimal (see Section 5). | The examples of pricing and ski-rental illustrate how critical Theorem 1 is to derive guarantees for a wide range of data-driven policies but leaves open the choice of the policy that should be analyzed. In Section 5.2 we take initial steps for the design of general policies with strong asymptotic worst-case regret gua... | We next present an alternative sample-size-agnostic policy for which the asymptotic worst-case vanishes as ϵ goes to 0. Furthermore, we characterize the worst-case performance of that policy, showing that it has the best possible dependence with respect to ϵ. | We prove Theorem 1 through a sample path analysis. We show that almost surely (over all possible historical samples observed), the asymptotic worst-case regret of a sample-size-agnostic policy can be bounded by the right-hand side of (6). In particular, we use the ETC property of the distance to show that asymptoticall... | We show in Section A.2.1 that, in general, by leveraging relations between different distances one may relate the worst-case regret of data-driven decision-making instances in heterogeneous environments which have the same ϵ but differ along the type of distance used. In particular, when Ξ... | C
On the six natural RNA targets and among the subset of all CASP15 participants ranked on these specific targets, RhoFold (AIchemy_RNA) was fourth, while RhoFold+’s performance was on par with AIchemy_RNA2’s (with a difference of 0.4 in the Z-score) and surpassed that of other methods. In a detailed analysis of performa... | In order to test RhoFold+’s ability to generalize for structure- (in addition to mainly sequence-) dissimilar targets, we sought to determine whether RhoFold+’s predictions could surpass the best single template (the most structurally similar model) in the training set for a given query. To investigate this, we compare... | Importantly, here RhoFold+ was trained using non-overlapping training data with respect to the RNA-Puzzles targets tested (see Methods). We conducted preprocessing to obtain 24 single-chain RNA targets and excluded RNA complexes. This set of RNA targets contained two puzzles (PZ), PZ34 and PZ38, that were introduced af... | Interestingly, RhoFold+ also attained the best Z-score for R1116, although its RMSD was ∼1 Å higher than that of UltraFold (other methods produced predictions with significantly lower accuracy, with RMSDs >10 Å). Upon further investigation, we found that, while UltraFold outperformed RhoFold+ on this m... | k. Comparison of RhoFold+’s predictions against AIchemy_RNA2 and UltraFold on the R1116 target from CASP15. | C
In addition, we compute the wall-clock time for different processes of MAZE on the AA layout, specifically the collecting trajectories, updating, and pairing, resulting in 150 seconds, 82 seconds, and approximately 0 seconds, respectively. The primary factor contributing to the overall time overhead is the process of c... | We mainly evaluate the performance of MAZE in the popular Overcooked [4] environment, a two-player common-payoff collaborative cooking environment. Furthermore, we design a grid-world FillInTheGrid to verify the versatility of MAZE. We conduct experiments on different layouts in these environments, where the agent and ... | The training curves are shown in Figure 6, reflecting the change in the average reward of the agents during the training phase. In all the six heterogeneous environments, V-MAZE achieves better performance clearly, showing the necessity of considering the heterogeneity and distinguishing the two players explicitly. Bes... | Table III shows the detailed results, i.e., the mean and standard deviation of the reward achieved by each algorithm under each combination of layout and partner. We compute the rank of each algorithm under each setting as in [10], which are averaged in the last row of Table III. Besides, we apply the Wilcoxon rank-sum... | Finally, we compare the performance of different methods on a homogeneous environment CR and a heterogeneous environment H-CR. The layouts of these environments are the same. However, the skills of different players in CR are the same, and the skills in H-CR are different. Similar to Section V-C, we first show the trai... | D |
(e) Precipitation rates averaged over time and longitudes and relative frequency histograms (f) are shown for ERA5 data (black), CM2Mc-LPJmL (red), GFDL-ESM4 (blue), quantile mapping (magenta) and the GAN (cyan). The GAN applied to the CM2Mc-LPJmL output corrects the double-peaked ITCZ as well as the histogram over the... | In both tropical and temperate zones, the constrained GAN corrects the precipitation towards the more complex and higher-resolution GFDL-ESM4, while following the trend of the CM2Mc-LPJmL model. Again, the unconstrained model remains relatively constant in both cases, with a small decrease over time in the temperate zo... | The averaged absolute value of the grid-cell-wise mean error (ME) for the raw CM2Mc-LPJmL and GFDL-ESM4 models, as well as for the QM- and GAN-based post-processing, using the CM2Mc-LPJmL output as input. The bias reduction relative to the raw CM2Mc-LPJmL model is given in percentage. Note that the GAN shows the larges... | of the mean error (ME) shown in the spatial plots (Table LABEL:tab:bias). Here, the GAN shows the strongest error reduction compared to QM and GFDL-ESM4, reducing the error of CM2Mc-LPJmL by 75% for annual and between 72% and 64% for seasonal time series. We include the results of two additional ESMs from CMIP6, the MPI... | Mean errors of (a) CM2Mc-LPJmL, (b) GFDL-ESM4, (c) QM-based and (d) GAN-based post-processing methods applied to the CM2Mc-LPJmL output. | B
To the best of our knowledge, this is the first study of the TOP as the existing literature assumes that components are directly plugged into slots on CAP machines (Castellani et al. (2019); Gao et al. (2021)). In practice, PCB manufacturers use trolleys to load components which otherwise could be difficult to manage a... | The problem structure is exploited to decompose the TOP into two smaller, identical and independent problems, i.e., assignment of trolleys and assignment of stackers, by pre-computing the dependency between them. So, a single and smaller MILP model is sufficient to solve both the problems and, hence, to solve the TOP (... | A novel extension of the BPP is derived to formulate the TOP by introducing additional constraints to ensure that the number of trolleys required to build each PCB is less than or equal to the capacity of the assembly line used for building the PCB. An MILP model is developed to solve the TOP which is solved using exac... | To formulate the TOP, we extend the bin packing problem (BPP) which finds a minimum number of bins of common capacity to pack a given set of items of different weights (Wäscher et al. (2007)). The TOP shares constraints similar to the BPP, with additional constraints (for details refer to Subsection 3.2) to ensure that... | We present a novel extension of the BPP to formulate the TOP. Similar to bin packing, the TOP finds a minimum number of trolleys/stackers (equivalent to bins) of common capacity to load/pack a given set of components (equivalent to items) of different sizes/weights to build a set of PCBs in an assembly line. The TOP sh... | C |
Unlike other deep learning methods that may suffer from limited interpretability, Hgarn can be used to reveal the dependencies between activities through the learned Hierarchical Graph. We visualize one attention head of Gat_C’s sliced atten... | However, most existing studies focus on predicting human mobility based on individual location sequences, overlooking the integral interplay between activity participation and location visitation behaviors. Classic travel behavior theories suggest that an individual’s travel decisions are determined by the need to part... | In this study, we propose Hierarchical Graph Attention Recurrent Network (Hgarn) for next location prediction. Specifically, we construct a hierarchical graph based on past mobility records and employ a Hierarchical Graph Attention Module to capture complex time-activity-location dependencies. This way, Hgarn can learn... | We design an activity-aware Hierarchical Graph Attention Recurrent Network (Hgarn), which contains a hierarchical graph attention module to model dependencies between time, activities, and locations, and a temporal module to incorporate the hierarchical graph representations into sequence modeling, leveraging next activ... | Both travel behavior theories and empirical evidence suggest that human mobility patterns largely depend on the need to participate in activities at different times of the day. Therefore, it is crucial to consider the latter when modeling the former. In this paper, we propose a Hierarchical Graph Attention Recurrent Ne... | D
Does there exist C > 0 such that log R_2(2, n) ≤ (log M_2(n))^C? | Bucić, Sudakov and Tran [1] gave a doubly exponential upper bound for d = 2, 3; and | In their paper, Fishburn and Graham [10] introduced another natural generalisation for monotone sequences and the Erdős-Szekeres theorem, which they called a lex-monotone array. | The authors would like to thank Zachary Hunter and Matija Bucić for their helpful comments. We also thank the anonymous referee for their suggestions. | Using the methods from our proof of Theorem 1.1, we resolve their question, showing that a doubly exponential upper bound holds in all dimensions. | C
G(\|ψ⟩) < G(\|ψ̃⟩). | The access to multipartite quantum states is an indispensable prerequisite for many applications in quantum information, turning them into a powerful resource which potentially outperforms their classical counterparts Bennett and Brassard (2014); Ekert (1991); Holland and Burnett (1993). Indeed, magic states turn out to... | In this work we have presented an iterative method for the computation of maximally resourceful quantum states. We provided a convergence analysis and showed that in each step the resourcefulness of the iterates increases. We illustrated our approach for the special case of the geometric measure, allowing us to identify... | a generic quantum state, we show that in each step of the algorithm the resourcefulness increases. We illustrate the universality of our method by applying it to various different resource quantifiers and present a detailed analysis for the geometric measure. Here we | The proof is given in Appendix A and comes with an interesting feature. It turns out that the proof does not rely on the particular product state structure of \|π⟩, so any figure of merit based on maximizing the overlap with pure states from some subset can be optimized with o... | D
In this paper, we generalize the existing results in the literature on EFX allocations to the setting where the number of distinct valuations is k, but the number of agents can be arbitrary. We give an EFX allocation with at most k−2 unallocated goods such that no agent envies the bundle of unal... | We now prove a lemma that will be used crucially to prove Theorem 1. In the lemma below, we assume that there is an existing (possibly partial) EFX allocation X. We show that if we improve the bundles of the leading agents such that the new bundle of each leading agent is a minimally envied subset with respe... | We thank anonymous reviewers for their helpful comments. Vishwa Prakash HV acknowledges the support of the TCS Research Scholar Fellowship. | In the remainder of the proof, we consider the case that X_n is the only EFX-feasible bundle for both b_1 and c_1. | That is, the overall minimum has increased. Now, we run the PR algorithm on X″ with the valuation v_a to get a new allocation Z. Let agent c_1... | B
Road Anomaly Test Sets. We further compare SLEEG with recent advanced anomaly segmentation methods on Road Anomaly in Tab. 2, it is observed that SLEEG outperforms most competitors by a large margin when no labeled anomaly data is available. Since there exists larger inherent domain shift between Road anomaly and Citys... | Investigation on the influence on AP and false positive rate with varied λ value on FS Lost & Found validation set (left) and FS Static validation set (right). | Table 6: Ablation results of comparing static/dynamic margin (Eq. (Anomaly Estimators for Likelihood Maximization)) on FS and Road Anomaly validation set. | Ablation results for different likelihood estimators on FS validation set and Road Anomaly validation set. | Comparison of visualization results with JEM, Softmax Entropy and Image Re-synthesis on FS Lost & Found validation set. | C
The interpolation results are shown in fig. 18. The physical space results are shown in the top row, and the RCDT-POD results are shown in the bottom row. The results show that the RCDT-POD ROM, despite the intrinsic error, can predict the target snapshot, without introducing an additional shock within the wake, clearl... | For the implementation of proper orthogonal decomposition (POD) and model order reduction (MOR) we use the EZyRB package [28]. | In section 4, instead, we focus on the complete MOR procedure starting with a simple moving Gaussian distribution, transformed into RCDT space and order-reduced using POD, compared alongside ’standard’ POD in physical space. We then test our workflow for a multi-phase fluid wave and the flow around an airfoil using hig... | This work has focused on implementing and verifying the Radon-Cumulative Distribution Transform (RCDT) for image and flow capture and assessing its applicability in model order reduction (MOR) – under proper orthogonal decomposition (POD) – of high-fidelity CFD input data. RCDT and subsequent RCDT-POD MOR workflows wer... | Both the implementation of the RCDT and ROM workflows have been written in Python 3.9.7, making use of two packages, PyTransKit [27] and EZyRB [28]; implementing the discretised form of RCDT – with subsequent forward/inverse transforms – and model order reduction functionality, respectively. For the ROM side, i.e. EZyR... | C |
\underline{\bf X})^{T}=\underline{\bf X}^{\dagger}*\underline{\bf X}. | We see that the GTSVD provides the same right tensor 𝐙¯ in (14)-(15) and we can use it to sample lateral slices of the data tensors 𝐗¯ and 𝐘¯ based on the TDEIM alg... | The MP pseudoinverse of a tensor can also be computed in the Fourier domain and this is shown in Algorithm 2. | The procedure of the computation of the GTCUR for tensor triples is summarized in Algorithm 8. Lines 6-8 can be efficiently computed in the Fourier domain and similar algorithms like Algorithm 6 can be developed for this computation. The t-RSVD of the tensor triplets (𝐗¯,𝐘¯,𝐙¯)... | The basis tensors 𝐔¯ and 𝐕¯ required in Algorithm 4 can be computed very fast through the randomized truncated t-SVD [31, 32, 33]. This version can be regarded as a randomized version of the TDEIM algorithm. | B
Noise Generation. We assess the realism of noise distributions synthesized by different methods using the public evaluation metrics AKLD and PGap on the SIDD validation set. We compare RNSD with baseline techniques including GRDN (Kim, Chung, and Jung 2019), C2N (Jang et al. 2021), sRGB2Flow (Kousha et al. 2022), DANet... | Additionally, we evaluate our method using another publicly available metric (Jang et al. 2021) by training the DnCNN network (Zhang et al. 2017) from scratch with synthetic noise generated by RNSD. We compare its performance with C2N (Jang et al. 2021), NoiseFlow (Abdelhamed, Brubaker, and Brown 2019), sRGB2Flow (Kous... | Noise Generation. We assess the realism of noise distributions synthesized by different methods using the public evaluation metrics AKLD and PGap on the SIDD validation set. We compare RNSD with baseline techniques including GRDN (Kim, Chung, and Jung 2019), C2N (Jang et al. 2021), sRGB2Flow (Kousha et al. 2022), DANet... | Visual Analysis of Noisy Images. We compare RNSD with baselines such as C2N (Jang et al. 2021), DANet (Yue et al. 2020), and R2F (sRGB2Flow) (Kousha et al. 2022), as shown in Fig. 4. RNSD accurately mimics real-world noise patterns across sensors and ISO settings, synthesizing realistic noise while preserving color and ton... | Figure 1: Subjective results and AKLD (Yue et al. 2020) of various noise synthesis methods, including sRGB2Flow (Kousha et al. 2022), DANet (Yue et al. 2020), and C2N (Jang et al. 2021). | A
We trained an FFM (Juan et al., 2016) provided by Yahoo-Inc (2023) using the binary cross-entropy loss on the above data, both with binning of several resolutions and with splines defined on 6 sub-intervals. The numerical field was naïvely transformed to [0,1] by simple normalization. We plotted the le... | Next, we compared the test cross-entropy loss on 75,000 samples generated in the same manner for several numbers of intervals used for binning and cubic Splines. For each number of intervals we performed 15 experiments to neutralize the effect of random model initialization. As is apparent in Figure 5, Splines con... | We conduct experiments with k ∈ {8, 16, …, 64} as embedding dimensions, and each experiment is conducted using 50 trials of Optuna (Akiba et al., 2019) with its default configuration to tune the learning rate and the L₂... | Figure 5: Comparison of the test cross-entropy loss obtained with Splines and bins. Both methods suffer from sparsity issues as the number of intervals grows, but Splines are able to utilize their approximation power with a small number of intervals, before sparsity takes effect. The bands are 90% bootstrap confidence ... | We ran 20 experiments with the tuned configurations to neutralize the effect of random initialization, and report the mean and standard deviation of the metrics on the test set in Table 1, where it is apparent that our approach outperforms binning on these datasets. These datasets were chosen since they contain several... | A
The rightmost chart in Figure 1 highlights a fundamental shortcoming of the PRP ranking - most of the dynamics it induces do not converge. Dynamics induced by the other two ranking functions, however, always converge. This is a key advantage of these functions. | As mentioned in §5, the PRP ranking function maximizes the users’ welfare for a fixed profile. The reason why softmax ranking functions with high β values nevertheless manage to achieve higher users’ welfare than the PRP is that the PRP is only short-term optimal, | Another insight from Figure 1 is that the users’ welfare of the PRP is roughly constant across λ values. | A plausible explanation is that in the case of PRP and the tested λ range, λ has little, if any, impact on the behavior of publishers. This conjecture might also explain why the publishers’ welfare of the PRP appears to linearly decrease with λ: the dynamics remain the... | To conclude this section, let us revisit the trends we discovered in light of the results we have already seen in §6. From this perspective, we can see how an increase in k emphasizes the consequences of the instability of the PRP ranking function. The already low convergence ratio at k = 2 f... | B
For offline MARL, since baselines are tested in a decentralized style, i.e., all agents independently decide their actions with only local observations, MADiff-C is not meant to be a fair comparison but to show if MADiff-D fills the gap for coordination without global information. | Compared with centralized control, a more popular and widely-adopted setting is that each agent only makes its own decision without any communication with other agents, which is what most current works (Lowe et al., 2017; Rashid et al., 2020; Wang et al., 2023) dealt with. In this case, we can only utilize the current ... | Compared to single-agent learning, offline multi-agent learning (MAL) has been less studied and is more challenging. | Datasets: we use the off-the-grid offline dataset (Formanek et al., 2023), including three datasets with different qualities for each map, e.g., Good, Medium, and Poor. | Similar to the single-agent case, direct supervised learning (BC) on the dataset behaves poorly when datasets are mixed quality. | D |
In this section we work towards a message passing formulation of synthetic AIF. We start by reviewing AIF and the CFFG representation for a GFE objective for control. Further details on variational objectives for AIF and epistemic considerations can be found in (Koudahl et al., 2023). | We simulate a nested perception-action cycle, where on each trial the seller sends an action (offer) α̂ₛ to the primary agent, and where the buyer sends actions (moves) ûₜ... | For the initial simulation we set the reward probability α = 0.9 and reward utility c = 2, and execute the perception-action cycle for S = 100 consecutive trials on the CFFG of Fig. 7. The resulting minimum policy GFE over trials, grouped by time, is plot... | AIF defines an agent and an environment that are separated by a Markov blanket (Kirchhoff et al., 2018). In general, at each time step, the agent sends an action to the environment. In turn, the environment responds with an outcome that is observed by the agent. The goal of the agent is to manipulate the environment to... | The results in Fig. 10 illustrate how the agent consolidates the outcomes of epistemic policies in the goal statistics. For the goal at the first time step 𝐜₁,ₛ, the agent learns to prefer a visit to the cue position. For the s... | C
Experiment setup. We split the data into 90/10 train/test sets at random and repeat the experiment 10 times. We determined the best estimate of the rank of the true outcome matrix and the rank of the observation pattern using 9-fold cross-validation with MNN. We used 16-fold cross-validation to separate... | Now, we consider R² score, MSE, MAE, and max error as metrics to compare the estimates made by MNN and modified USVT against the true outcomes. The results of this experiment can be seen in Table 2. We can see that across all these metrics, MNN outp... | Table 1: Comparison of performance of MNN and USVT on Glance data. As can be seen, MSE for MNN is >28x better. | Results. As we can see from Fig. 4, the estimates from modified USVT are extremely biased. The estimates from MNN, however, appear to be minimally biased and in line with ground truth. Moreover, from Fig. 3, we can see that the estimates made by modified USVT are very sensitive to outliers in the data, while the estimat... | Results. Before comparing the performance of MNN and modified USVT on the synthetic dataset, we examine the bias of the estimates for the full outcome matrix in both cases (experiments 1 and 2). As can be seen in Fig. 5, the distribution of estimates generated by MNN better approximates the true distribution of outcomes... | C
Hypernetworks have demonstrated their effectiveness and versatility across a wide range of domains and tasks in deep learning. In this section, we discuss some of the important applications (footnote: We have explored 50 important papers (arranged by publication year) while considering at least one application in each distinct ... | Continual learning, also known as lifelong learning or incremental learning, is a machine learning paradigm that focuses on the ability of a model to learn and adapt continuously over time, in a sequential manner, without forgetting previously learned knowledge. Unlike traditional batch learning, which assumes static a... | Multitasking refers to the capability of a model to perform multiple tasks or learn multiple objectives simultaneously. It involves leveraging shared representations and parameters across different tasks to enhance learning efficiency and overall performance. Hypernets can be applied in the context of multitasking to f... | Task-conditioned hypernetworks: These hypernetworks take task-specific information as input. The task information can be in the form of task identity/embedding, hyperparameters, architectures, or any other task-specific cues. The hypernetwork generates weights that are tailored to the specific task. This allows the hyp... | Few-shot learning is a sub-field of machine learning that focuses on training models to learn new concepts or tasks with only a limited number of training examples. Unlike traditional machine learning approaches that typically require large amounts of labeled data for each task, few-shot learning aims to generalize kno... | A
We apply our PCE-based method to approximate non-polynomial functions. This transforms all benchmark programs into Prob-solvable loops, which allows using the static analysis tool Polar (Moosbrugger et al., 2022) to compute the moments of the program variables as a function of the loop iteration n. | In this section, we develop a method for the derivation of the exact moments of probabilistic loops that comply with a specified loop structure and functional assignments. | Our method for exact moment derivation for probabilistic loops with non-polynomial functions builds upon Prob-solvable loops. | We implemented the techniques for exact moment derivation for loops containing trigonometric or exponential polynomials, presented in Section 5, in the tool Polar. | We evaluate the technique for exact moment derivation using Polar on all benchmarks satisfying the general program structure of Listing 1 in Section 5. | C
Routing attacks pose significant threats to FANETs, originating from nodes that bypass prevention methods and can cause dramatic damage to the network. Therefore, it is imperative to analyze these attacks to develop effective countermeasures. Despite the importance of routing security, there is a notable lack of studie... | 3D GMM was employed to simulate the natural 3D flight of UAVs in a realistic manner, as demonstrated in [117]. The alpha parameter value of the 3D GMM, which provides a balance of randomness and predictability in the UAV’s mobility, was initially set at 0.25 and then incrementally increased by 0.05 to create different ... | In the analysis conducted in this study, four attacks against AODV were implemented in realistic simulation scenarios: sinkhole, dropping, blackhole, and flooding attacks. | This study covers four attacks against the widely used AODV protocol, each with different goals. Initially, a concise overview of AODV, 3D Gauss Markov Mobility (GMM), and the specific attacks is presented. Following this, the simulation results obtained from networks with diverse topologies are demonstrated and deeply... | The unique characteristics of UAVs and networks of UAVs are presented in details, and then analyzed from a security perspective. | C |
We find that the ISO can impact the optimality (i.e., choosing the best candidates) and fairness (i.e., treating similar candidates similarly) of the selected k candidates, especially when the screener is human. | Here, position bias refers to the penalty (or premium) a candidate experiences due to where it falls on the ISO, as humans are predisposed to favor the items placed at the top of a list (Baeza-Yates, 2018; Athey and Ellison, 2011). | Again, these results are due to the low probability of top scores for which the effects of the bias due to ϵ₁ is not counter-balanced by ρ. | Here, the candidates for a job represent the items and the screener evaluating their profiles represents the decision-maker. | The former refers to a consistent screener; the latter refers to an inconsistent screener whose evaluation of candidates suffers over time due to the fatigue of performing a repetitive task. | A
Besides, we design a space aggregation module (SAM) to yield the clear images, which combines the reciprocity of dual degradation priors. We perform extensive experiments on several datasets to analyze and validate the effectiveness and superiority of our proposed DASUNet compared to other state-of-the-art methods. | In this paper, we have proposed a dual degradation-inspired deep unfolding method (DASUNet) for low-light image enhancement. Specifically, we design a dual degradation model (DDM) based on the degradation specificity among luminance and chrominance spaces. An alternative optimization solution is proposed to solve it an... | To push the frontiers of deep unfolding-based image enhancement, we propose a Dual degrAdation-inSpired deep Unfolding network, termed DASUNet, for low-light image enhancement, which is shown in Fig. 2. The motivation originates from the degradation specificity of low-light images between luminance and chrominance spac... | We propose a dual degradation model based on degradation specificity of low-light images on different spaces. It is unfolded to form dual degradation-inspired deep unfolding network for low-light image enhancement, which can jointly learn two degradation priors from luminance space and chrominance space. More important... | Dual degradation model. Based on the degradation specificity between luminance and chrominance spaces, we proposed a DDM for low-light image enhancement. To demonstrate its effectiveness, we conduct some comparison experiments on various color spaces and degradation models on LOL dataset, the results of which are prese... | C |
Notice that the utility functions take the same form as the one-specialist case, whose analogous observation is proven in Appendix 8.3. There, we proved that a utility of the form Aδ^{k₀/(k₀−1)} + Bδ(1−δ)^{1/(k₁−1)}... | For simplicity, in the multi-specialist case, we prove unimodality for the case of quadratic costs (this is all we need to arrive at the bargaining solutions reported in the paper). We show that the same proof holds for both the generalist and specialists’ utilities: | Solving the powerful-G, powerful-D, vertical monopoly or other bargaining solutions consists in maximizing players’ utilities either separately or combined into a joint utility. This is possible once parameters are specified; however, we cannot produce a closed-form expression for the general poly... | It is important to note that the three regimes defined in this section can describe a specialist’s strategy in either the 1-specialist or multi-specialist fine-tuning game. In the 1-specialist case, the potential strategies describe counterfactual outcomes that depend on the particular cost and revenue functions of the... | For ease and without loss of generality, we assume the set of domains is in descending order of value cᵢ, and we’ll consider each domain one at a time to determine whether the domain has δᵢ = 1... | A
The average occupied pixel area for a single object in BEE24 is one-fourth of that in the second-ranked dataset, GMOT-40, highlighting the challenge of detecting and tracking smaller objects. | BEE24 has a much larger maximum duration (i.e., 200 s) and number of tracks for a single video than several common MOT datasets. For example, the maximum duration and tracks are an order of magnitude larger than those in GMOT-40. | MOT17 and MOT20. We compare the proposed TOPICTrack tracker with the state-of-the-art trackers on the MOT17 and MOT20 test sets. | Furthermore, the maximum number of annotations for a single video in BEE24 far exceeds those of other datasets, except for MOT20. MOT20 focuses on crowded scenes and therefore has the highest number of annotations. However, the objects’ appearances are easily identifiable, and their slow motion tends to be linear. | In fact, in this case, the motion pattern of the bee tends to be linear, thus using a linear assumptions-based motion model for association could keep track of the bee. | C |
Tab. 4 presents the results for the explanation generation subtask. Models relying solely on opinions (C) and emotion representations (E) as input exhibit significantly poorer performance across all metrics compared to other baselines. For instance, the BART model without dialogue (D) u... | We consider two generative text models (BART [28], T5-large [48]) and a recently introduced multi-modal model NLX-GPT [53] for emotion and explanation generation for both Questioner and Answerer. Since the Answerer has always access to the image I, we include I in the form of text from pretrained ... | The Affective Visual Dialog task involves three subtasks: dialog-based question answering 4.1, affective explanation generation 4.2 and dialog-based emotion classification 4.3. We split the dataset into train, | The Questioner asks questions about a hidden image, which is intentionally concealed to mitigate any visual priming biases. These biases can cause models to operate under the assumption that the questioner will inquire about the objects depicted in the image [4, 23]. The objective of the Questioner is to explore the hi... | Table 4: Results on Affective Explanation Generation setup for Questioner. I, E, C, D represent the image, 2 opposed emotion labels, associated opinions, and the dialog defined in Sec. 4. | D
We also evaluate the performance of the text retrieval task by experimenting on the test split of the flickr30k dataset (Young et al., 2014). This dataset consists of five caption texts for each photo, and these texts are similar to each other. We use the first caption text vector to retrieve the top 5 similar sentence... | In addition, we evaluate the performance of text embedding in transfer tasks. In particular, our approach involves training text embedding on STS tasks and then transferring it to seven other kinds of tasks. Notably, AnglE outperforms baselines, showing a significant improvement of 4.34% and 4.48... | To provide a comprehensive analysis, we also evaluate the performance of the baselines in the non-transfer setting. We train the baselines on the train set and evaluate them on the test or validation set. Two typical models, SimCSE and SBERT, representing contrastive and supervised learning, are compared with our model... | To comprehensively evaluate the STS tasks, we have introduced the GitHub Issues Similarity Dataset to evaluate model performance on the long-text STS task. Furthermore, we have proposed an LLM-supervised learning method to cope with the scarcity of domain-supervised data. Extensive experimental results have demonstrate... | In this section, we will first introduce the baselines, then the results of the transfer STS tasks, then the results of the non-transfer STS tasks, and finally a summary. | A
C2: The percentage of dispensed amount that has been sunk should be within a certain (undisclosed) range, and, | Fig. 11 shows the (topological) parameter-free nature of FaSTM∀N. This validates Motivation 2 and 3 described in Section 2 - i.e., complex money laundering networks involve transactions among several parties, covering longer distances (more than 2 hops). In the left most graph, we are using diameter on t... | C2: The percentage of dispensed amount that has been sunk should be within a certain (undisclosed) range, and, | C3: The maximum flow, respecting the temporal order, of money should be greater than a certain (undisclosed) threshold | The flow looks interesting because of the cyclic behaviour. On a closer look (middle graph), after taking into account the chronological order of transactions, the cyclic behaviour is not there anymore. The aim is to convert the left most graph to the right most graph, by respecting the temporal order. It can be observ... | C
While the proposed PPO-based reinforcement learning (RL) approach for DC-DC boost converter control shows significant promise, | The performance of the PPO-based control approach is compared with traditional control techniques, including optimized proportional-integral (PI) control and artificial neural network (ANN) control. | The proposed PPO-based reinforcement learning (RL) method for DC-DC boost converter control does have slightly higher computational demands compared to traditional control methods. This is primarily due to the complexity of the PPO algorithm and the | significantly degraded, indicating that the PI control method struggled to handle the input voltage variation effectively. Quantitatively comparing the performance of these control methods, RL control emerged as the superior method as depicted in Table 7. It exhibited the ability to seamlessly handle the input voltage ... | The computational complexity of the proposed RL-based control method is slightly higher than that of traditional control methods. | D |
Existing studies have three key limitations: they demand extra images and fine-tuning of text-to-image models with limited scope for new concepts; they can’t learn from user interaction history and need detailed user prompts; and there’s a lack of public, personalized text-to-image datasets that truly reflect user pref... | Figure 3: Dataset statistics and distribution. Left: Proportion of users based on the varying number of historical prompts they have. Note that each user has a minimum of 18 historical prompts, as we have excluded those with fewer prompts from the dataset. Right: Proportion of prompts based on their varying lengths. Be... | As mentioned in 5.3 and Table 3, we have conducted experiments to demonstrate the performance of our method in terms of two shorter types of input prompt xₜ. The quantitative results in Table 3 showcase that our method performs robustly even on conditio... | Recently, researchers have found that optimizing prompts can boost the performance of LLMs on several NLP tasks and even search systems. For example, | In search systems, LLMs are used to generate query expansion terms by [10], while they are used to reformulate query by Wang et al. [26] instead. | C
In this subsection, we propose a method to enhance node classification performance by integrating the objective function proposed in this study with existing GNN-based node classification methods, which do not directly utilize higher-order structures in training, including the latest semi-supervised and unsupervised le... | Sixth, according to experimental results with benchmark data, using the training and validation data employed in Planetoid, GCN, GAT, and SGC as prior information improves mean accuracy by 2.2%, 0.7%, and 0.3% for Cora, Citeseer, and Pubmed, respectively. The objective function used in this study is intended to promote... | In the training process, Glorot initialization [57] is utilized to initialize the parameters, and the Adam SGD optimizer [58] is employed for optimization. For all experiments, the learning rate is set to 0.4 and the number of epochs to 10. The proposed objective function aims to learn the probability distribution assi... | In this subsection, we propose a method to enhance node classification performance by integrating the objective function proposed in this study with existing GNN-based node classification methods, which do not directly utilize higher-order structures in training, including the latest semi-supervised and unsupervised le... | In this experiment, we integrate GNNs with the proposed objective function and evaluate the performance gains using the Cora, Citeseer, and Pubmed datasets. GAT [22] uses an attention mechanism to learn node embeddings. The node features are created using the bag-of-words representation of documents, with the dimension... | D |
Fig. 7(a) shows fundamental diagrams for the mixed autonomy traffic flow with commercially available ACC vehicles at different market penetration rates (MPRs) ranging from 0% to 100% without attack. It is observed that the capacity decreases from around 1,900 (veh/hr) to 1,250 (veh/hr) as the MPR increases from 0% to 10... | Fig. 7(d) shows the fundamental diagrams for Scenario 3. In the absence of attacks, the fundamental diagram at 0% MPR (Fig. 7(d)) is the same as the normal case without attack (Fig. 7(a)). The fundamental diagram for the MPR of 60% is shown in Fig. 7(d), which is also similar to that of Fig. 7(a) for the same reason as... | In this article, we have considered three types of candidate cyberattacks on AVs with low levels of automation, i.e., ACC vehicles. We study the impacts of these attacks on both microscopic and macroscopic traffic flow dynamics. Motivated by these impacts, we then develop a machine learning based approach, i.e., a GAN-... | Fig. 7(c) shows the fundamental diagrams for Scenario 2. As in Scenario 1, the fundamental diagram at 0% MPR is the same as Fig. 7(a) (without attack). The fundamental diagram for the MPR of 60% in this case is similar to the normal scenario shown in Fig. 7(a). The capacity Q and the shape of the fundamental... | In this section, we present numerical results on fundamental diagrams with ACC vehicles being attacked under the three types of attacks introduced before. For comparison with the case without attacks (Fig. 7(a)), we show fundamental diagrams at the ACC MPR of 0%, 60%, and 100%. | D
The Indefinite Datasets are for Causal Discovery in Indefinite Data (CDID) task (producing the causal structures and causal representations as discussed in Section 2.3.3), contributing to the | DIR [62], and our method (Ours). We provide the details about all datasets, baselines and implementation in Appendix C. | and input_instrcution as shown in Step 1. | We conduct experiments for CDID (Causal discovery in indefinite data) task and causal consistency on the Causalogue and Causaction datasets. | We use EDKA-GM as the Non-causal model, and biCD as the causal model (the backbone model with our method in this task). | A
Chief Complaint (CC): The primary reason or concern for which the patient seeks medical attention. | Present Illness (PI): A detailed account of the symptoms and problems leading up to the current visit, typically in chronological order. | Algorithm 1 shows the detailed steps described in the methodology. The goal is to efficiently utilize the power of the transformer-encoders for finding the context in long clinical texts. | Table 2: Preliminary results for different experimented baseline models on mortality prediction and length of stay prediction tasks in macro-averaged % AUROC. The CORe and DischargeBERT models outperform the baseline model performances, leading to their selection in the main experiments of our study. | The final prediction A for the clinical note N is then obtained by consolidating all Aᵢ, typically through averaging or another fusion strategy. | A
Can we use the reaction to adversarial perturbations as an OSR score to separate familiar and novel samples? | We call an attack informed if the adversary has access to the binary set-labels of the input, i.e., closed-set vs. open-set, and uninformed if that information is not available (huang2011adversarial). | We consider a deep neural network f𝜽: 𝒳 → ℝ^|ℱ| parameterized by 𝜽... | In open-set recognition (OSR) a set 𝒩 of novel categories is additionally considered and a test set containing inputs from both novel and familiar classes is used to evaluate the OSR performance: | We consider an input space 𝒳 and a set ℱ of familiar categories, i.e., the closed-set. | D
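Each row above follows the schema from the column headers: a context string, four candidate continuations A–D, and a letter label naming the correct option. A minimal sketch of consuming one such row — the field names come from the viewer's headers, while the row values below are abridged, hypothetical stand-ins rather than exact dataset text:

```python
# Minimal sketch: represent one multiple-choice row of this dataset
# and map its letter label to the matching option text.
# Field names (context, A, B, C, D, label) follow the viewer's column
# headers; the row content here is abridged for illustration.

OPTION_KEYS = ["A", "B", "C", "D"]

def label_to_index(label: str) -> int:
    """Map a letter label ('A'..'D') to a 0-based option index."""
    return OPTION_KEYS.index(label)

def correct_option(row: dict) -> str:
    """Return the text of the option selected by the row's label."""
    return row[row["label"]]

row = {
    "context": "The GFlops calculations encompass tensor contractions, ...",
    "A": "In (a)(b), the input networks are MPSs with different ranks. ...",
    "B": "It is worth noting that in our reported results, ...",
    "C": "It is worth noting that Algorithm 5 may produce an embedding ...",
    "D": "The selection of the embedding tree is guided by ...",
    "label": "B",
}

print(label_to_index(row["label"]))
print(correct_option(row))
```

The same two helpers work unchanged on rows loaded through the `datasets` library, since each example is exposed as a dict with these column names.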
Downloads last month: 9