| text | image |
|---|---|
Comparison of test results. The top row is RAWS, the middle row is the results of the proposed SAGHS method and the bottom row is the results of improved YOLOv5 with CBAM. (a) Detection of incomplete objects (b) Detection when targets are occluded or overlapped with each other (c) Detection of fuzzy objects and (d) Det... | |
2D particle placement for a constant density model with $10^4$ particles for a random distribution (left) and a glass distribution (right). The upper panels show a zoom on each particular distribution in the spatial region between $x>0.9$ and $y>0.9$. While the random particle distribution leads to local clustering of ... | |
Evolution of the density for a 3D constant density model with $10^6$ particles for different numbers of iterations: the rejection step (grey), $32$ steps (dark red), $64$ steps (red), $128$ steps (orange) and the final step (light orange). The upper panel shows the density as a function of the distance. With increasin... | |
Histogram of nearest neighbour distances for a 3D constant density model for random placement and a glass distribution. The expected distance by which the x-axis is normalised is calculated by dividing the whole box volume up equally onto all particles. | |
Auto correlation function for a 3D constant density model for a Lennard-Jones fluid model (dark blue) and a glass distribution (light blue) in the top panel. In combination with the Hansen-Verlet criterion (dark orange line) the auto correlation function can be used to measure the quality of the glass distribution tha... | |
Evolution of a plateau density function with steep gradients on the edges for a particle number of $10^6$. In the left panel we show the density plateau for $32$ iterations (dark red), $64$ (red), $128$ (orange) and the final iteration step (light orange) and compare it to the analytic density profile (blue). In the two pa... | |
Evolution of a one dimensional sine wave perturbation as initial condition for a three dimensional simulation for $10^{6}$ SPH-particles. The top panel shows the evolution of the absolute density value as for the von Neumann-rejection sampling (grey), after $32$ iterations (dark red), after $64$ iterations (red), after ... | |
Evolution of a one dimensional sawtooth as an initial condition for a three dimensional simulation for $10^{6}$ SPH-particles. The top panel shows the evolution of the absolute density value as for the von Neumann-rejection sampling (grey), after $32$ iterations (dark red), after $64$ iterations (red), after $128$ itera... | |
Auto correlation function for the sine wave density distribution with a particle number of $10^4$. The upper panel shows the auto correlation function as a function of distance for a random distribution (dark blue) and a glass distribution (light blue). The Hansen-Verlet criterion that marks the transition from a fluid... | |
We show a comparison with and without our particle redistribution scheme for the top hat density distribution with $10^6$ particles. On the left hand side we show the total density distribution for the top hat density distribution with the rejection sampling (grey) for $32$ iterations (dark red), $64$ iterations (red),... | |
Plateau & gradients test with $10^6$ particles. Top left panel: Min, mean and max density errors; Top right panel: Fractions of particles moved farther than 1, 0.1 and 0.01 of the local mean particle separation; Bottom left panel: Min, mean and max displacements calculated; Bottom right panel: Particles probed and actu... | |
Resolution study of the plateau test for $10^4$ (left), $10^5$ (middle) and $10^6$ particles (right) for $256$ iterations (dark red), $468$ iterations (red) and the final iteration step (gold). We show the analytic model density in blue. With increasing number of iterations the code comes closer to the model density on... | |
We show the L1 error as a function of the number of iterations of the WVT-algorithm for the top hat profile with gradients (top), the sine wave (middle) and the sawtooth density distribution (bottom). In the beginning the L1 error drops as a function of the iteration count. For the top hat profile with gradients the a... | |
Comparison between the original best converged simulation and a run with employed bias correction. Over all resolutions the bias correction works well and fits the model density better, while still preserving the original shape. | |
Zeldovich pancake initial conditions with $10^6$ particles. Again we start with a rejection sampling supported random distribution and plot the resulting density after several iteration steps. The result is very close to the analytical model. | |
Beta profile for an isolated galaxy cluster with $10^6$ particles after the same iteration steps as the previous tests. Due to the large density range these initial conditions are very challenging for our code and it takes more iterations for good convergence than with previous tests. Nevertheless, the result is very clos... | |
Visualization of the checklist model. Notes are embedded and then passed to a Bi-LSTM which outputs contextualized embeddings at each timestep. A checklist encodes where the fingers that have recently been used are located based on prior predictions. These are both fed into an MLP which predicts the next finger. | |
Examples of the input representations we compare. Note that the labels on the vectors are indices corresponding to sub-embeddings shared across notes. | |
Fingering patterns tracked by our fluency metrics. Arrows and shading indicate sequential notes. | |
Mean response time of jobs under the Po2-$(g,\epsilon)$ scheme as a function of arrival rate $\lambda$ for $\epsilon=0.4$, $g=100$. | |
Mean response time of jobs under the Po2-$(g,\epsilon)$ scheme and the random scheme as a function of arrival rate $\lambda$ for $\epsilon=0.8,g=100$. | |
Comparison of the Po2-$\epsilon$ scheme for $\epsilon \in \cbrac{0.2,0.4,0.6,0.8}$ with the random scheme. We set $n=200$. | |
SEPIA Band 5 observations toward Arp 220 (6.2 hours). The plotted spectral resolution is 50 km s$^{-1}$. We observe strong water and methanol emission as well as many tracers of the dense gas (HCN, HNC, HCO$^+$, CS, HC$_{\rm 3}$N, C$^{34}$S). Insets show the HCN and HNC double-peak profiles at a 25 km s$^{-1}$ resoluti... | |
{Left:} Rotational ladders of HCN, HNC and HCO$^{+}$ (fluxes are normalised to that of the 1$-$0 transition). Inset: CS rotational ladder (not normalised). Filled squares are from this work. {Right:} RADEX modeling of the HCN, HNC and HCO$^+$ 2$-$1-to-1$-$0 ratios (units of Jy km s$^{-1}$) in a kinetic temperature (in ... | |
Comparison between the analytic solution for the self-similar evolution of a viscous disk (equation <ref>) and a simulation. The upper panel shows the normalized gas surface density $\Sigma/\Sigma_0$ versus normalized radius $r/R_0$ at several times for the analytic solution (solid lines) and the simulation (circles; ... | |
Same as the lower panel of Figure <ref>, but now showing the absolute value of the error versus position at time $t/t_s = 2$ for a series of computations with different numbers of cells $N$. All the calculations shown use Crank-Nicolson time centering and iterative solver tolerance $\textrm{tol} = 10^{-10}$. | |
$L^1$ error (equation <ref>) versus resolution $N$ at time $t/t_s=2$ for the self-similar disk test results shown in Figure <ref> (blue lines and points). The black dashed line shows a slope of $-2$ for comparison. | |
Results of a simulation of an initially-singular ring test. The upper panel shows the exact analytic solution (solid lines, equation <ref>) and the numerical results (circles; only every 64th point shown, for clarity) as a function of position $r/R_0$ at times $t/t_0 = 0.004$, 0.008, 0.032, and 0.128. The ... | |
Results of a simulation of the gravitational instability-dominated disk test. The top panel shows the gas surface density $\Sigma$ normalized to the steady-state value at $R$, the middle panel shows the velocity dispersion $\sigma$ normalized to $v_\phi$, and the bottom panel shows Toomre $Q$. The black dashed line ... | |
Same as Figure <ref>, but now showing a simulation that starts out of equilibrium with $Q=0.5$. | |
Same as Figure <ref>, but now showing a simulation that starts out of equilibrium with $Q=1$ but a surface density and velocity dispersion that are both double the steady-state value. | |
Vertically-integrated pressure $P$ divided by the scale height parameter $fz_0$ at several times in the singular ring test. The top panel shows a simulation omitting radiation pressure ($\gamma=5/3$, $\delta=0$), while the bottom panel shows an otherwise identical simulation with a complex equation of state including r... | |
Same as Figure <ref>, but now showing effective temperature $T_{\rm eff}$ versus position in the singular ring test without (top) and with (bottom) radiation pressure. Notice the difference in scales between the top and bottom panels. | |
Maximum normalized residual $\max|\bmath{R}_{0,i}|$ (equation <ref>) remaining after $N$ iterations in the four test problems described in Section <ref>. Solid lines show updates using the Crank-Nicolson method, and dashed lines using the backwards Euler method. Colors indicate the order $M$ of Anderson acceleration us... | |
Wall clock time required per unit simulation time advanced versus Anderson acceleration parameter $M$ in the four test problems, normalized so that $M=0$ corresponds to unity. Thus values $<1$ indicate a reduction in computational cost relative to the unaccelerated case, while values $>1$ indicate a slowdown. The thick... | |
B-spline reconstruction of the rotation curve for a <cit.> potential using a fit of degree $D=6$ with $B=15$ breakpoints. In the upper panel, we show the exact analytic values of $v_\phi$, $\psi$, and $\beta$ (solid lines) and the numerical fits (data points, only every 16th point shown for clarity). All quantities are... | |
B-spline reconstruction of the rotation curve of the Milky Way. The top panel shows data from <cit.> (black points with error bars), together with B-spline fits of degree $D=2$, 3, and 4 (blue lines). The $D=2$ and $D=4$ fits use 15 breakpoints each, while for $D=3$ we show models with both $B=8$ and $B=15$ breakpoints... | |
Cube group of 24 rotations. (a) $90\degree$, $180\degree$ and $270\degree$ rotations around the axes through opposite faces. (b) $\pm120\degree$ rotations around the 4 diagonal axes. (c) $180\degree$ rotations about the 6 pairs of opposite edges. | |
Examples of handwriting digits from rotated MNIST. | |
Examples of point clouds sampled from ModelNet dataset. | |
Classification accuracy versus number of elements in the rotation group for (a) ModelNet40 under z rotations. (b) ModelNet10 under z rotations. (c) ModelNet40 under $\mathbf{SO}(3)$ rotations. (d) ModelNet10 under $\mathbf{SO}(3)$ rotations. | |
The architecture of our rotation equivariant method. The green part (left & right) shows the basic structure. Each input point cloud is rotated with every element $r_i$ in a rotation group $R$, and fed into a shared network. The blue (middle) part integrates this structure in a hierarchical network like SO-Net <cit.>. ... | |
Surface plasma waves (SPWs) supported on a semi-infinite metal (SIM) and the associated electric field. The SIM surface is located on the plane $z=0$. In (a), the arrows indicate the electric field of the SPWs generated by the charges, which are indicated by the colors. In (b), a plot of the electric field is displayed... | |
The gain rate $\gamma_0/\omega_p$ associated with the intrinsic channel plotted against (a) the wavenumber $k/k_p$ and (b) the Fuchs parameter $p$. Here $k_p = \omega_p/v_F$. Dots: $\gamma_0$ obtained by numerically solving Eq. (<ref>). Dashed line: a linear fit as given in Sec. VII, which also serves as a guide f... | |
(a) The signal $y$ (blue) along with 7 reconstructions $\hat{y}^j$ (linear interpolation), based on different random sampling patterns (average RMSE = 0.392). (b) A pointwise error $q^j=(y-\hat{y}^j)^2$ for each $\hat{y}^j$ and the mean $Q$ (black). (c) High values of $Q$ are marked as important, with adequate spacing,... | |
An RGB patch and the corresponding generated $\hat{Q}$ are presented to demonstrate some of the features of $\hat{Q}$. Blue circle - zero importance of a distant object. Green circle - varying importance that corresponds to the varying depth gradient magnitude. RGB image | |
An RGB patch and the corresponding generated $\hat{Q}$ are presented to demonstrate some of the features of $\hat{Q}$. Blue circle - zero importance of a distant object. Green circle - varying importance that corresponds to the varying depth gradient magnitude. $\hat{Q}$ map | |
A comparison of $Q$ maps for different $B$ (9728, 973), $P$ (FusionNet <cit.>, PENet <cit.>, linear interpolation) and $M$ (RMSE as in <ref>, REL as in <ref>, MAD where $q$ is defined as a per-pixel absolute difference). The $Q$ maps were processed for comfortable viewing while preserving relative importance. | |
A comparison of $\hat{Q}$ and $\hat{s}^*$ for $M$ = RMSE and REL. | |
Four different sampling patterns and the corresponding reconstruction networks were used to reconstruct depth for each of the images in the test set. Our importance-based sampling pattern (red) consistently achieves lower reconstruction error. RMSE | |
Four different sampling patterns and the corresponding reconstruction networks were used to reconstruct depth for each of the images in the test set. Our importance-based sampling pattern (red) consistently achieves lower reconstruction error. REL | |
Qualitative Results. Four different sampling patterns are marked on RGB images (crops of objects in different scenes). The corresponding depth prediction results can be compared to GT. | |
Our adaptive depth sampling method guides the sampling process to obtain high frequency information which is crucial for accurate depth prediction. This results in sharper edges of the reconstructed objects and substantially lower error. The main component of our method is the $Q$ map which indicates a per-pixel import... | |
Our inference framework. To demonstrate one way to utilize the information stored in $\hat{Q}$, we propose an inference framework as a proof of concept. Gaussian sampling is used to extract a sampling mask $\hat{s}^*$, which then guides the depth sampling process s.t. the acquired information is crucial for th... | |
A diagram of the drift-diffusion model (DDM). At each moment, the noisy momentary evidence is accumulated until it hits one of the two decision thresholds ($A$ and $-A$). The decision threshold and the mean and the variance of the momentary evidence together determine the speed and accuracy of the decision. | |
Schematic diagram of the Dropout-based Drift-Diffusion Model (right) and its brain science counterpart (left). In DDDM, outputs $p_i$ from the stochastic copies of an arbitrary neural network simulate the noisy temporal neural signal $o_{t_i}$ in the brain. In both, the series of outputs/signals are passed to a similar... | |
Variation of accuracy (solid lines) and response time (the dashed line) against increasing perturbation size ($\epsilon$). We compared three models: the baseline model $h_{0,0}$, the dropout classifier without DDM $h_{0.0, 0.8}$, and the dropout classifier with DDM $H_{0.0,0.8}$ under the PGD attack on MNIST. | |
The cosine similarity and the $L_2$ distance between hidden outputs for the clean and adversarial inputs. | |
UMAP for hidden outputs of each layer in VGG16. | |
The symmetric difference $\Omega\Delta\widehat{\Omega}$ is depicted in grey and $\partial \Omega$ in black. Parameters: $|\Omega|=100$; $K=20$; $\varphi(t)=2^{1/4}e^{-\pi t^2}$; left column: $g=\varphi$; right column: $g(t)= \varphi(t)t^2$. | |
An example of a loop-free episodic Markov decision process when two actions $a_1$ ("up") and $a_2$ ("down") are available in all states. Nonzero transition probabilities under each action are indicated with arrows between circles representing states. In the case of two successor states, the successor states with the in... | |
Illustration of the proposed approach. | |
Visualization of the learned self- and cross-attention weights. The lightness or darkness of the color represents the value of the attention weight. The red box denotes the ground-truth box and the blue box denotes the predicted box. In cross-attention, the sampled keypoints are drawn in red, green and blue for the scale ... | |
Comparison of CenterFormer with an RCNN-style detector. RCNN aggregates point or grid features in the RoI, while CenterFormer can learn object-level contextual information and long-range features through an attention mechanism. | |
The overall architecture of CenterFormer. The network consists of four parts: a voxel feature encoder that encodes the raw point cloud into a BEV feature representation, a multi-scale center proposal network (CPN), the center-based transformer decoder, and a regression head to predict the bounding box. | |
Illustration of the cross-attention layer. (Left) Multi-scale cross-attention. (Right) Multi-scale deformable cross-attention. | |
The network structure of spatial-aware fusion. To focus on the current centers, we use the current BEV feature as the reference to learn attention. | |
The network structure of the multi-scale CPN. Each Conv block contains a convolution layer with kernel size $3\times3$, a batch-norm layer and a ReLU activation layer. We use a strided convolution layer and a transposed convolution layer as the down-sampling and up-sampling layers. | |
The LEVEL_2 mAPH results comparison of the multi-frame CenterFormer on the WOD validation set. All models are trained without the IoU rectification. $\star$: CenterFormer using point concatenation. $\dagger$: Deformable CenterFormer. The LEVEL_2 mAPH results comparison broken down by speed. | |
The LEVEL_2 mAPH result comparison of the position encoding methods on the WOD validation set. All experiments use only 20% of uniformly sampled training data. The LEVEL_2 mAPH results comparison of CenterFormer and DETR at different epochs. | |
Visualization of CenterFormer predictions. The red box denotes the ground truth bounding box. The blue box denotes the predictions with confidence score $>0.4$. Truncated objects whose center is outside of the 50m range are not visualized. Best viewed in color. | |
Overview of the core concepts of the DLT Ontology. | |
The simulator serves as a black box whose inputs are the control $u_k$ and the disturbance $w_k$ and whose output is the next state, yielding the quadruple $\{x_k, u_k, w_k, x_{k+1}\}$. | |
Landscape of this work. | |
The long-term cost converges to the near-optimal value within 30 iterations. | |
Evolution of $x_{k,1}$ and $u_{k,1}$ under the Gaussian mixture disturbance distribution. | |
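One caption above describes the drift-diffusion model (DDM): noisy momentary evidence is accumulated until it hits one of two decision thresholds ($A$ and $-A$), which jointly determine speed and accuracy. As an illustrative sketch only — not the authors' implementation, and all names here are hypothetical — a single DDM trial can be simulated with an Euler-Maruyama update:

```python
import random

def ddm_trial(drift, noise_sd, threshold, dt=0.001, max_steps=200000, rng=None):
    """One drift-diffusion trial: accumulate noisy momentary evidence until
    it crosses +threshold (choice 1) or -threshold (choice 0).
    Returns (choice, decision_time); choice is None if neither bound is hit."""
    rng = rng or random.Random(0)
    x = 0.0
    for step in range(1, max_steps + 1):
        # Euler-Maruyama step: mean drift*dt, s.d. noise_sd*sqrt(dt)
        x += drift * dt + rng.gauss(0.0, noise_sd * dt ** 0.5)
        if x >= threshold:
            return 1, step * dt
        if x <= -threshold:
            return 0, step * dt
    return None, max_steps * dt

# With positive drift the upper bound is hit on most trials; in the
# continuum limit P(correct) = 1 / (1 + exp(-2*drift*threshold/noise_sd**2)).
results = [ddm_trial(1.0, 1.0, 1.0, rng=random.Random(s)) for s in range(300)]
accuracy = sum(c == 1 for c, _ in results) / len(results)
```

Raising `threshold` trades speed for accuracy, mirroring the caption's point that the bound and the evidence statistics together set both quantities.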