renderDpi (int64, 200-200) | name (string, 1-7 chars) | page (int64, 0-507) | figType (2 classes) | regionBoundary (dict) | caption (string, 6-3.85k chars) | imageText (list, 0-17k items) | renderURL (string, 171-177 chars) | captionBoundary (dict) |
|---|---|---|---|---|---|---|---|---|
200 | 5 | 9 | Figure | {
"x1": 118.44,
"x2": 490.32,
"y1": 95.39999999999999,
"y2": 545.04
} | Figure 5: Optical metasurface design problem configuration and results. (A-B) Design variables (materials, layer thicknesses, and cross-section geometry types). (C) A sample absorbance spectrum and the wavelength intervals (highlighted wavelength regions) corresponding to absorbance above the threshold t. The design ob... | [
"RIGID",
"GA",
"Thickness",
"1",
"Thickness",
"2",
"Thickness",
"3",
"Density",
"Material",
"1",
"Material",
"2",
"Material",
"3",
"Geometric",
"type",
"Satisfaction",
"rate",
"(GA)",
"Average",
"score",
"Metrics",
"Sampling",
"threshold",... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00003_1762245679/figures/2401.00003-Figure5-1.png | {
"x1": 71.6709976196289,
"x2": 540.1668090820312,
"y1": 560.8505859375,
"y2": 697.7620239257812
} |
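Each record carries two axis-aligned boxes, `regionBoundary` and `captionBoundary`, with `x1`/`x2`/`y1`/`y2` fields. A minimal sketch for measuring such a box; only the field names come from the schema above, the helper itself is illustrative:

```python
# Width/height of a boundary box from a record like the one above.
# Field names (x1, x2, y1, y2) follow the schema; box_dims is a
# hypothetical helper, not part of any released toolkit.

def box_dims(box: dict) -> tuple:
    """Return (width, height) of a boundary box in its native units."""
    return box["x2"] - box["x1"], box["y2"] - box["y1"]

# Figure 5's regionBoundary from the record above:
region = {"x1": 118.44, "x2": 490.32, "y1": 95.4, "y2": 545.04}
w, h = box_dims(region)  # approximately (371.88, 449.64)
```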
200 | 1 | 2 | Figure | {
"x1": 74.52,
"x2": 540,
"y1": 72,
"y2": 239.04
} | Figure 1: Schematic diagram of the RIGID method. We first train a random forest on a design-response dataset to learn the forward design-response relation — predicting qualitative responses (e.g., bandgap existence at any given wave frequency) of designs. Then given a design target, we can infer the likelihood of any d... | [
"y",
"0",
"1y",
"0",
"1",
"Inverse",
"inference",
"Design",
"variable",
"i",
"(x",
"i",
")",
"Design",
"variable",
"j",
"(xj)",
"Target-tailored",
"designs",
"Likelihood",
"Wavelength",
"(nm)",
"Desig... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00003_1762245679/figures/2401.00003-Figure1-1.png | {
"x1": 72,
"x2": 540.3499145507812,
"y1": 263.4195251464844,
"y2": 313.0580139160156
} |
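In the rows above, each caption box's `y1` is larger than its figure's region `y2`, which suggests a downward-increasing y axis (as in rendered-image coordinates); treating that as an assumption, a sketch for checking that a caption sits below its figure:

```python
# Check whether a caption box starts below its figure region.
# Assumes y increases downward, as the boundary values in these
# records suggest; caption_below is an illustrative helper.

def caption_below(region: dict, caption: dict) -> bool:
    """True if the caption box begins below the figure region."""
    return caption["y1"] >= region["y2"]

# Figure 1's boxes from the record above:
region = {"x1": 74.52, "x2": 540.0, "y1": 72.0, "y2": 239.04}
caption = {"x1": 72.0, "x2": 540.35, "y1": 263.42, "y2": 313.06}
below = caption_below(region, caption)  # True
```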
200 | 7 | 12 | Figure | {
"x1": 75.96,
"x2": 537.12,
"y1": 138.96,
"y2": 454.32
} | Figure 7: Visualization of estimated likelihood and validation metrics for synthetic problems. (A) Likelihood function values for randomly created design targets. Orange lines show boundaries of actual feasible regions associated with the targets T = {I(a, b; s) = 1|∀s ∈ Ω}. Points show satisfactory RIGID designs and G... | [
"RIGID",
"GA",
"Average",
"score",
"Satisfaction",
"rate",
"Selection",
"rate",
"Average",
"score",
"Satisfaction",
"rate",
"Selection",
"rate",
"SqExp",
"Metrics",
"Sampling",
"threshold",
"SupSin",
"Metrics",
"Sampling",
"threshold",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00003_1762245679/figures/2401.00003-Figure7-1.png | {
"x1": 71.7509994506836,
"x2": 540.0038452148438,
"y1": 481.67852783203125,
"y2": 542.2259521484375
} |
200 | 3 | 7 | Figure | {
"x1": 117,
"x2": 492.12,
"y1": 72,
"y2": 444.24
} | Figure 3: Acoustic metamaterial design problem configuration and results. (A) Design variables of center and corner mass radii (rcenter and rcorner) and strut radius (rstrut). (B) High symmetry points of the cubic irreducible Brillouin zone. (C) A sample dispersion relation and bandgap (marked by the highlighted zone).... | [
"F",
"E",
"C",
"D",
"B",
"A",
"Metrics",
"Sampling",
"threshold",
"Satisfaction",
"rate",
"(GA)",
"Average",
"score",
"Satisfaction",
"rate",
"Selection",
"rate",
"Reduced",
"wavevector",
"Frequency",
"(MHz)",
"... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00003_1762245679/figures/2401.00003-Figure3-1.png | {
"x1": 71.25299835205078,
"x2": 541.7467041015625,
"y1": 458.4525146484375,
"y2": 562.636962890625
} |
200 | 6 | 11 | Figure | {
"x1": 121.32,
"x2": 487.08,
"y1": 75.96,
"y2": 366.12
} | Figure 6: Synthetic data creation for (A) the SqExp problem and (B) the SupSin problem. For each problem, the left panel shows 100 functions with randomly sampled parameters a and b. We treat a and b as synthetic design variables, and the corresponding functions as quantitative responses (e.g., absorbance spectra of op... | [
"Design",
"ID",
"A",
"s",
"t",
"z",
"Squared",
"Exponential",
"Functions",
"Synthetic",
"Ranges",
"B",
"s",
"Design",
"ID",
"t",
"z",
"Superposed",
"Sine",
"Functions",
"Synthetic",
"Ranges"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00003_1762245679/figures/2401.00003-Figure6-1.png | {
"x1": 72,
"x2": 541.2417602539062,
"y1": 378.155517578125,
"y2": 427.7950134277344
} |
200 | 4 | 8 | Figure | {
"x1": 73.8,
"x2": 538.1999999999999,
"y1": 146.51999999999998,
"y2": 392.03999999999996
} | Figure 4: Distributions of satisfactory solutions for two bandgap targets. The off-diagonal plots show the pairwise bivariate distributions of design variables, and the diagonal plots show the marginal distributions of the data in each column. The left panel shows that GA designs are highly localized while RIGID can le... | [
"RIGID",
"Data",
"GA",
"Corner",
"mass",
"radius",
"Center",
"mass",
"radius",
"Strut",
"radius",
"Corner",
"mass",
"radius",
"Center",
"mass",
"radius",
"Target",
"b... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00003_1762245679/figures/2401.00003-Figure4-1.png | {
"x1": 71.7509994506836,
"x2": 540.0042114257812,
"y1": 417.467529296875,
"y2": 478.0150146484375
} |
200 | 2 | 4 | Figure | {
"x1": 99,
"x2": 510.12,
"y1": 75.96,
"y2": 419.03999999999996
} | Figure 2: The inverse design pipeline of the proposed RIGID method (using the inverse design of acoustic metamaterials as an example). Given design parameters x and the auxiliary variable s (e.g., wave frequency), a trained random forest predicts the probability of the qualitative response y (e.g., bandgap existence). ... | [
"Frequency",
"(MHz)",
"3",
"4",
"6",
"7",
"0.6",
"0.8",
"0.4",
"meeting",
"target",
"Target",
"and",
"actual",
"bandgaps",
"Designs",
"Likelihood",
"of",
"Step",
"5.",
"Generating",
"designs",
"based",
"on",
"likelihood",
"Final",
"likelihood",
"map",
"xj",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00003_1762245679/figures/2401.00003-Figure2-1.png | {
"x1": 72,
"x2": 540.1652221679688,
"y1": 435.72052001953125,
"y2": 583.541015625
} |
200 | 1 | 5 | Table | {
"x1": 306.71999999999997,
"x2": 543.24,
"y1": 316.8,
"y2": 574.1999999999999
} | Table 1: Evaluation on different datasets. “ALL” denotes the proportional mixture of the four base datasets. | [
"ALL",
"0.555",
"0.685",
"0.505",
"0.621",
"0.529",
"0.652",
"HI",
"0.435",
"0.611",
"0.361",
"0.517",
"0.395",
"0.560",
"AI",
"0.474",
"0.611",
"0.419",
"0.532",
"0.445",
"0.569",
"SI",
"0.444",
"0.601",
"0.413",
"0.539",
"0.428",
"0.568",
"RI",
"0.499",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table1-1.png | {
"x1": 307.1310119628906,
"x2": 543.0916748046875,
"y1": 586.1245727539062,
"y2": 604.0830078125
} |
200 | 4 | 10 | Figure | {
"x1": 78.84,
"x2": 518.04,
"y1": 289.8,
"y2": 432
} | Figure 4: Preprocessing for an observation with four types of features. | [
"Environmental",
"features",
"Other",
"players",
"The",
"Agent",
"RGB",
"Bird's-eye-view",
"Unit",
"Features"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure4-1.png | {
"x1": 158.40199279785156,
"x2": 438.4804992675781,
"y1": 443.4005432128906,
"y2": 449.40301513671875
} |
200 | 15 | 24 | Table | {
"x1": 90,
"x2": 507.24,
"y1": 264.96,
"y2": 483.12
} | Table 15: Language model performance evaluation with different sizes of fine-tuning training set. The underlined “Improve Rate” values represent the improvement percentage of the “CoFT → SFT” method relative to the “SFT” method. | [
"CoFT",
"39.42%",
"58.35%",
"34.28%",
"50.10%",
"36.67%",
"53.91%",
"CoFT",
"→",
"SFT",
"41.61%",
"60.40%",
"34.55%",
"50.32%",
"37.75%",
"54.90%",
"SFT",
"17.66%",
"38.28%",
"13.33%",
"29.47%",
"15.20%",
"33.30%",
"Improve",
"Rate",
"135.52%",
"57.78%",
"159.15... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table15-1.png | {
"x1": 55.13100051879883,
"x2": 541.43603515625,
"y1": 495.2445373535156,
"y2": 513.2030029296875
} |
200 | 16 | 25 | Table | {
"x1": 72.72,
"x2": 524.16,
"y1": 138.96,
"y2": 621
} | Table 16: Chain of thought prompt for GPT4. | [
"In",
"order",
"to",
"complete",
"the",
"command",
"‘You",
"should",
"lie",
"in",
"wait’,",
"let",
"us",
"plan",
"the",
"states",
"of",
"the",
"agent",
"step",
"by",
"step",
"using",
"the",
"following",
"template:",
"1.",
"Analyze",
"the",
"verbal",
"order... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table16-1.png | {
"x1": 207.34300231933594,
"x2": 389.2301025390625,
"y1": 633.1275634765625,
"y2": 639.1300048828125
} |
200 | 7 | 14 | Table | {
"x1": 56.879999999999995,
"x2": 540,
"y1": 66.96,
"y2": 435.24
} | Table 7: Overview of sub-goal classes; a subset is shown here. | [
"Sub-goal",
"Class",
"Candidates",
"Damage",
"to",
"enemy",
"[Zero,Low,Little",
"low,Medium,Little",
"high,High]",
"Whether",
"knock",
"down",
"enemy",
"[True,False]",
"Whether",
"kill",
"enemy",
"[True,False]",
"Whether",
"seen",
"enemy",
"[True,False]",
"Whether",
"se... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table7-1.png | {
"x1": 162.31199645996094,
"x2": 434.2611389160156,
"y1": 447.3495178222656,
"y2": 453.35198974609375
} |
200 | 11 | 20 | Figure | {
"x1": 66.6,
"x2": 542.88,
"y1": 330.84,
"y2": 615.9599999999999
} | Figure 11: (a) Sub-goal distribution during co-training. The 20 most frequently occurring goal meta states are filtered out and displayed. The vertical axis represents the probability of the state being output by the language model; (b) For a collected trajectory segment with length k = 200, we first estimate the bas... | [
"where",
"φ⋆",
"a",
"target",
"network",
"which",
"shares",
"the",
"same",
"architecture",
"as",
"the",
"RND",
"predictor",
"but",
"the",
"network",
"is",
"non-trainable.",
"Rfrnd = −∑_{t=0}^{T} ∥φ(E(st, g)) − φ⋆(E(st, g))∥, (13)",
"•",... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure11-1.png | {
"x1": 55.439998626708984,
"x2": 541.4409790039062,
"y1": 261.9555358886719,
"y2": 303.822998046875
} |
200 | 1 | 1 | Figure | {
"x1": 88.56,
"x2": 506.52,
"y1": 73.8,
"y2": 256.32
} | Figure 1: Overview of co-training in OpenPAL. The Policy and LLM are pre-trained with multi-step fine-tuning and goal-conditioned RL, respectively. Then, the co-training aligns them towards achieving instruction open-endedness. | [
"G",
"oal",
"R",
"ew",
"ard",
"s",
"Rewards",
"Feedback",
"Goals",
"Planning",
"Observations",
"Agent",
"Actions",
"Human",
"Instructions",
"Be",
"Careful",
"!”",
"Prepare",
"to",
"Ambush.”",
"“Find",
"Them",
"Out",
"!”",
"Multi-step",
"Fine-tuning",
"Goal-condi... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure1-1.png | {
"x1": 55.439998626708984,
"x2": 543.0977172851562,
"y1": 274.0145263671875,
"y2": 291.97198486328125
} |
200 | 3 | 6 | Table | {
"x1": 307.08,
"x2": 543.24,
"y1": 611.64,
"y2": 717.12
} | Table 3: Comparison of goal-generation. Cyan marks the helpful, pink the conflicting, and orange the critical sub-goals. It is evident that co-training enables goal-generation to avoid conflicts of sub-goals and improves reasonability by including helpful and critical sub-goals. | [
"ments",
"mainly",
"lies",
"in",
"2",
"≤",
"|g|",
"≤",
"4,",
"because",
"|g|",
"=",
"1",
"is",
"too",
"easy",
"while",
"|g|",
"≥",
"5",
"is",
"too",
"hard",
"to",
"complete.",
"Figure",
"3(b)",
"shows",
"a",
"case",
"of",
"|g|",
"=",
"3",
"that",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table3-1.png | {
"x1": 307.1310119628906,
"x2": 543.0897827148438,
"y1": 530.1415405273438,
"y2": 583.9649658203125
} |
200 | 12 | 21 | Figure | {
"x1": 54.72,
"x2": 542.16,
"y1": 209.88,
"y2": 407.15999999999997
} | Figure 12: Implementation of the RND predictor network. | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure12-1.png | {
"x1": 182.2519989013672,
"x2": 414.6295166015625,
"y1": 418.42352294921875,
"y2": 424.4259948730469
} |
200 | 8 | 13 | Table | {
"x1": 274.68,
"x2": 540,
"y1": 408.59999999999997,
"y2": 520.1999999999999
} | Table 8: The major changes in the training procedure. | [
"4/14/2023",
"1",
"1802702",
"Experiment",
"started",
"4/27/2023",
"1808552",
"1802702",
"Env-init:",
"Random",
"weapons",
"5/8/2023",
"2829170",
"1803087",
"Action:",
"Add",
"a",
"fire",
"action",
"for",
"long",
"distance",
"5/10/2023",
"3034011",
"1803087",
"Env-i... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table8-1.png | {
"x1": 300.52301025390625,
"x2": 514.748779296875,
"y1": 532.37255859375,
"y2": 538.375
} |
200 | 5 | 13 | Table | {
"x1": 96.84,
"x2": 500.03999999999996,
"y1": 66.96,
"y2": 364.32
} | Table 5: The introduction of different rewards. | [
"Feature",
"Weight",
"Description",
"enemy",
"discovery",
"0.02",
"reward",
"for",
"see",
"an",
"enemy",
"detected",
"by",
"enemy",
"-0.002",
"punishment",
"for",
"being",
"seen",
"by",
"an",
"enemy",
"scout",
"0.0001",
"reward",
"for",
"search",
"for",
"an",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table5-1.png | {
"x1": 205.58399963378906,
"x2": 390.98785400390625,
"y1": 376.2755432128906,
"y2": 382.27801513671875
} |
200 | 7 | 17 | Figure | {
"x1": 78.84,
"x2": 518.04,
"y1": 66.96,
"y2": 244.07999999999998
} | Figure 7: This training system has four key parts: Actor, Learner, League and LLM replicas. Actors are responsible for data collection, the Learner trains the policy model using this data, the League coordinates the overall training process and displays results, and the LLM Replicas handle goal generation and distribut... | [
"Model",
"parameters",
"Environments",
"Rollout",
"outcomes",
"Checkpoints",
"Actors",
"Actions",
"State",
"abstractions",
"LLM",
"Replica",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure7-1.png | {
"x1": 55.439998626708984,
"x2": 541.6138916015625,
"y1": 255.96652221679688,
"y2": 285.8800048828125
} |
200 | 9 | 22 | Table | {
"x1": 72.72,
"x2": 524.16,
"y1": 156.96,
"y2": 603
} | Table 9: Chain of thought response from GPT4. | [
"1.",
"Analyze",
"the",
"verbal",
"orders",
"of",
"teammates",
"and",
"players,",
"what",
"do",
"you",
"want",
"to",
"do?",
"According",
"to",
"the",
"command,",
"also",
"analysis",
"the",
"relevant",
"states",
"of",
"teammates",
"and",
"enemies",
"that",
"n... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table9-1.png | {
"x1": 202.91900634765625,
"x2": 393.6529541015625,
"y1": 614.9955444335938,
"y2": 620.9979858398438
} |
200 | 6 | 12 | Table | {
"x1": 394.91999999999996,
"x2": 545.04,
"y1": 335.88,
"y2": 423
} | Table 6: Action space. | [
"Sub",
"Action",
"Space",
"Dim",
"Size",
"movement",
"direction",
"16",
"yaw",
"direction",
"16",
"pitch",
"direction",
"3",
"body",
"action",
"9",
"basic",
"action",
"7",
"switch",
"weapon",
"action",
"3"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table6-1.png | {
"x1": 423.6629943847656,
"x2": 513.1072387695312,
"y1": 435.44952392578125,
"y2": 441.4519958496094
} |
200 | 5 | 12 | Figure | {
"x1": 54.72,
"x2": 542.16,
"y1": 538.92,
"y2": 694.0799999999999
} | Figure 5: Network structure of our proposed policy. | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure5-1.png | {
"x1": 195.38299560546875,
"x2": 401.49908447265625,
"y1": 705.4715576171875,
"y2": 711.4739990234375
} |
200 | 3 | 7 | Figure | {
"x1": 58.68,
"x2": 533.16,
"y1": 254.88,
"y2": 383.4
} | Figure 3: (a) The completion ratio of goals with dimension size ranges from 1 to 7; (b) The goal completion ratio of goals that |g| = 3, the trend curve reflects the improving completion ratio; (c) The sub-goals distribution changes along the training in one loop of co-training, where the description of each gi is incl... | [
"(c)",
"Sub-goals",
"distribution",
"Possibility",
"0.200",
"0.175",
"0.150",
"0.125",
"0.100",
"0.075",
"0.050",
"0.025",
"Sub",
"goal",
"g4",
"g8",
"g12",
"g16",
"g20",
"Time",
"in",
"one",
"loop",
"(b)",
"Goal",
"completion",
"rate",
"... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure3-1.png | {
"x1": 55.439998626708984,
"x2": 541.4445190429688,
"y1": 401.0495300292969,
"y2": 430.9620056152344
} |
200 | 2 | 7 | Figure | {
"x1": 66.96,
"x2": 533.16,
"y1": 83.88,
"y2": 199.44
} | Figure 2: (a) The goal completion rate on training dataset; (b) The goal completion rate on unseen goals, i.e., the test dataset; (c) The evaluation of policy learning in cases of w/ and w/o KL-divergence regularizer. | [
"(c)",
"ics",
"Mean",
"basic",
"reward",
"per",
"step",
"Mean",
"basic",
"reward",
"per",
"step",
"(No",
"KL)",
"#Enemies",
"killed",
"#Enemies",
"knocked",
"down",
"#Enemies",
"killed",
"(No",
"KL)",
"#Enemies",
"knocked",
"down",
"(No",
"KL)",
"at",
"ist",... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure2-1.png | {
"x1": 55.11199951171875,
"x2": 542.2698974609375,
"y1": 216.98654174804688,
"y2": 234.94500732421875
} |
200 | 11 | 18 | Table | {
"x1": 347.76,
"x2": 540,
"y1": 379.8,
"y2": 509.03999999999996
} | Table 11: Parameter settings for RL. | [
"PPO",
"clip",
"eps",
"0.2",
"Optimizer",
"Adam",
"Learning",
"rate",
"0.0001",
"Batch",
"size",
"20480",
"Number",
"of",
"CPUs",
"5120",
"(AMD",
"EPYC",
"7H12",
"64-Core)",
"Number",
"of",
"GPUs",
"2",
"(A100)",
"γ",
"(basic)",
"0.995",
"γ",
"(oa)",
"0.92... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table11-1.png | {
"x1": 371.489990234375,
"x2": 516.6849365234375,
"y1": 520.978515625,
"y2": 526.98095703125
} |
200 | 8 | 18 | Figure | {
"x1": 64.44,
"x2": 531,
"y1": 72,
"y2": 300.59999999999997
} | Figure 8: Overview of the training framework with LLM. This training framework has three kinds of LLM tuning approaches: CoFT (Chain of Thoughts assisted Fine-Tuning), SFT (Supervised Fine-Tuning), EFT (Ensemble Fine-Tuning); and one LLM-RL co-training approach. | [
"State",
"Co-Training",
"selecting",
"Format",
"recognition",
"PPO",
"Tuning",
"Complete",
"status",
"Reward",
"Interaction",
"Exam",
"Set",
"Instruction",
"Response",
"Format",
"Reward",
"Examination",
"Reward",
"Goal",
"Completion",
"Reward",
"Rewards",
"State",
"ENV",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure8-1.png | {
"x1": 55.439998626708984,
"x2": 542.8262939453125,
"y1": 320.9515380859375,
"y2": 350.864013671875
} |
200 | 12 | 18 | Table | {
"x1": 299.88,
"x2": 540,
"y1": 621.72,
"y2": 682.1999999999999
} | Table 12: Evaluation on LoRA rank. | [
"8",
"0.544",
"0.672",
"0.482",
"0.608",
"0.502",
"0.629",
"0.060",
"0.124",
"16",
"0.550",
"0.673",
"0.487",
"0.601",
"0.507",
"0.626",
"0.070",
"0.124",
"32",
"0.555",
"0.685",
"0.505",
"0.621",
"0.529",
"0.652",
"0.065",
"0.159",
"64",
"0.547",
"0.675",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table12-1.png | {
"x1": 351.6669921875,
"x2": 487.905517578125,
"y1": 694.26953125,
"y2": 700.27197265625
} |
200 | 4 | 11 | Table | {
"x1": 202.67999999999998,
"x2": 540,
"y1": 153.72,
"y2": 476.28
} | Table 4: The details of features in the observation space. | [
"BEV",
"Region",
"Altitude",
"map",
"and",
"aerial",
"view",
"map",
"3x64x64",
"4.Spatial",
"feature",
"Scalar",
"BEV",
"12288",
"Pose",
"Position,",
"rotation,",
"camera",
"position,",
"camera",
"rotation,",
"etc.",
"43",
"Attribute",
"Character",
"ID,team",
"ID,... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table4-1.png | {
"x1": 258.8039855957031,
"x2": 483.5702209472656,
"y1": 487.9355163574219,
"y2": 493.93798828125
} |
200 | 17 | 26 | Table | {
"x1": 93.96,
"x2": 503.28,
"y1": 91.8,
"y2": 668.16
} | Table 17: Example of prompt and response. | [
"Whether",
"prone",
"position:True",
"Average",
"velocity:Static",
"Length",
"of",
"distance",
"moved:No",
"movement",
"Whether",
"hold",
"a",
"gun:True",
"response",
"goal",
"meta-state",
"prompt",
"In",
"order",
"to",
"complete",
"the",
"command",
"‘You",
"should"... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table17-1.png | {
"x1": 211.08900451660156,
"x2": 385.48431396484375,
"y1": 679.9525756835938,
"y2": 685.9550170898438
} |
200 | 13 | 23 | Table | {
"x1": 129.96,
"x2": 467.28,
"y1": 315,
"y2": 378
} | Table 13: Evaluation on LoRA target. | [
"All",
"0.529",
"0.642",
"0.471",
"0.581",
"0.485",
"0.596",
"0.069",
"0.119",
"Attention",
"0.555",
"0.685",
"0.505",
"0.621",
"0.529",
"0.652",
"0.065",
"0.159",
"Mlp",
"0.549",
"0.664",
"0.482",
"0.587",
"0.514",
"0.620",
"0.065",
"0.134",
"F1",
"(Choice)",... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table13-1.png | {
"x1": 223.0590057373047,
"x2": 373.51416015625,
"y1": 389.69354248046875,
"y2": 395.6960144042969
} |
200 | 10 | 23 | Table | {
"x1": 72.72,
"x2": 524.16,
"y1": 74.88,
"y2": 270
} | Table 10: Rule prompt for GPT4. | [
"1.Only",
"select",
"the",
"most",
"relevant",
"and",
"necessary",
"states",
"for",
"planning,",
"and",
"the",
"unplanned",
"states",
"will",
"be",
"adjusted",
"by",
"the",
"agent",
"itself",
"2.[Choose",
"1,",
"Choose",
"2,",
"...]",
"indicates",
"the",
"valu... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table10-1.png | {
"x1": 231.6959991455078,
"x2": 364.8760070800781,
"y1": 282.1695251464844,
"y2": 288.1719970703125
} |
200 | 14 | 23 | Table | {
"x1": 153.72,
"x2": 443.15999999999997,
"y1": 423,
"y2": 685.0799999999999
} | Table 14: Top 20 sub-goals ranked by frequency. | [
"g1",
"Average",
"velocity",
"g2",
"Horizontal",
"direction",
"of",
"movement",
"g3",
"Whether",
"seen",
"enemy",
"g4",
"Whether",
"hold",
"a",
"gun",
"g5",
"Whether",
"prone",
"position",
"g6",
"Length",
"of",
"distance",
"moved",
"g7",
"Length",
"of",
"dis... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Table14-1.png | {
"x1": 201.01100158691406,
"x2": 395.560546875,
"y1": 697.5615844726562,
"y2": 703.5640258789062
} |
200 | 10 | 19 | Figure | {
"x1": 83.52,
"x2": 522,
"y1": 283.68,
"y2": 403.56
} | Figure 10: Distribution comparison between real goals (Oracles) and goals generated by Gop (Prediction). The illustration shows that Gop generates goals that follow the real distribution, indicating good generalization on open-ended goal generation. | [
"(c)",
"Corresponding",
"to",
"∆V",
"Oracles",
"Prediction",
"Goal",
"Projection",
"100",
"50",
"0",
"50",
"100",
"20",
"15",
"10",
"5",
"0",
"5",
"10",
"15",
"V",
"(b)",
"Corresponding",
"to",
"∆T",
"Oracles",
"Prediction",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure10-1.png | {
"x1": 55.439998626708984,
"x2": 541.4405517578125,
"y1": 420.9795227050781,
"y2": 450.8919982910156
} |
200 | 9 | 19 | Figure | {
"x1": 58.68,
"x2": 542.16,
"y1": 66.96,
"y2": 204.84
} | Figure 9: Illustration of BEV features in observation space. (a) and (b) are the altitude maps where bright areas are higher than dark areas. (c) is the aerial view map where the disconnected areas are windows or doors. One pixel in (a), (b) and (c) denotes 0.8 meter, 4 meters and 0.4 meter respectively. The small yell... | [
"(a)",
"Altitude",
"map",
"(0.8m)",
"(b)",
"Altitude",
"map",
"(4m)",
"(c)",
"Aerial",
"view",
"map",
"(0.4m)"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure9-1.png | {
"x1": 55.439998626708984,
"x2": 542.1045532226562,
"y1": 217.79751586914062,
"y2": 259.66497802734375
} |
200 | 6 | 15 | Figure | {
"x1": 60.12,
"x2": 537.12,
"y1": 68.75999999999999,
"y2": 223.92
} | Figure 6: The value changes during the training process. | [
"Basic",
"Value",
"5",
"4",
"3",
"2",
"1",
"0",
"-1",
"Dates",
"07/07",
"06/23",
"06/09",
"05/26",
"05/12",
"04/28",
"04/14"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00006_1762245707/figures/2401.00006-Figure6-1.png | {
"x1": 186.0679931640625,
"x2": 410.8142395019531,
"y1": 239.76754760742188,
"y2": 245.77001953125
} |
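The `figType` field takes one of two classes ("Figure" or "Table", per the header schema). A sketch for tallying records by class; the toy records are hypothetical stand-ins mirroring rows of this table:

```python
# Count extraction records per figType ("Figure" or "Table", the two
# classes declared in the header schema). The records below are
# hypothetical minimal dicts, not actual rows.
from collections import Counter

records = [
    {"name": "5", "page": 9, "figType": "Figure"},
    {"name": "1", "page": 5, "figType": "Table"},
    {"name": "6", "page": 15, "figType": "Figure"},
]
counts = Counter(r["figType"] for r in records)
# counts["Figure"] == 2, counts["Table"] == 1
```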
200 | 1 | 5 | Table | {
"x1": 64.8,
"x2": 544.3199999999999,
"y1": 117.72,
"y2": 320.03999999999996
} | Table 1: Performance comparison of all baselines and our models. The best and second-best results are shown in bold and underlined, respectively. All values are multiplied by 100. | [
"Hybrid",
"AUC",
"66.98",
"±",
"1.23",
"67.27",
"±",
"2.15",
"68.79",
"±",
"1.99",
"68.81",
"±",
"1.84",
"69.52",
"±",
"2.91",
"69.39",
"±",
"3.90",
"70.60",
"±",
"1.54",
"72.27",
"±",
"1.64",
"ACC",
"65.93",
"±",
"1.57",
"67.24",
"±",
"3.06",
"68.46... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Table1-1.png | {
"x1": 53.50199890136719,
"x2": 558.1998901367188,
"y1": 85.34053802490234,
"y2": 102.31597900390625
} |
200 | 2 | 5 | Table | {
"x1": 52.919999999999995,
"x2": 295.92,
"y1": 379.8,
"y2": 607.68
} | Table 2: Statistics of datasets. M refers to Member, J refers to Job, S refers to Skill, CP refers to Candidate Pair, and PC refers to Professional Connection. | [
"job",
"descriptions",
"independently,",
"and",
"the",
"matching",
"degree",
"is",
"calculated",
"by",
"cosine",
"similarity.",
"•",
"BPJFNN",
"[28]",
"leverages",
"bidirectional",
"LSTM",
"to",
"learn",
"the",
"representations",
"of",
"resumes",
"and",
"job",
"de... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Table2-1.png | {
"x1": 53.50199890136719,
"x2": 294.04852294921875,
"y1": 336.924560546875,
"y2": 364.8590087890625
} |
200 | 1 | 1 | Figure | {
"x1": 370.08,
"x2": 508.32,
"y1": 82.8,
"y2": 199.07999999999998
} | Figure 1: The metagraph of Workplace Heterogeneous Information Network. It encompasses not only members (M) and jobs (J), which are crucial for Person-Job Fit, but also entities such as skills (S), companies (C), and schools (H). | [
"auxiliary",
"entity",
"direct",
"path",
"meta",
"path",
"recommended",
"entities",
"H",
"S",
"requireC",
"educate",
"post",
"master",
"work(ed)",
"co-apply",
"co-applied",
"JM",
"apply",
"connect"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure1-1.png | {
"x1": 317.9549865722656,
"x2": 559.8109741210938,
"y1": 215.28756713867188,
"y2": 254.1810302734375
} |
200 | 4 | 6 | Figure | {
"x1": 353.52,
"x2": 522.36,
"y1": 89.64,
"y2": 268.2
} | Figure 4: Hyperparameter tuning experiments to investigate the specific effects of social relations and job-specific attention mechanism. | [
"(b)",
"Model",
"performance",
"varying",
"the",
"number",
"of",
"CSAGNN",
"layers",
"while",
"fixing",
"the",
"number",
"of",
"sampled",
"skills",
"to",
"10.",
"tech",
"finance",
"hybrid",
"F1",
"0.71",
"0.70",
"0.69",
"0.68",
"0.67",
"0.66",
"0.65",
"0.64"... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure4-1.png | {
"x1": 317.9549865722656,
"x2": 559.8074340820312,
"y1": 287.14056396484375,
"y2": 315.07501220703125
} |
200 | 3 | 6 | Table | {
"x1": 59.76,
"x2": 287.28,
"y1": 219.6,
"y2": 461.15999999999997
} | Table 3: Ablation studies conducted on all datasets, with all values multiplied by 100. | [
"w/o",
"CSA",
"68.37",
"69.39",
"64.84",
"62.65",
"w/o",
"CSA&H",
"65.85",
"64.32",
"63.26",
"59.49",
"CSAGNN",
"72.27",
"72.37",
"69.58",
"64.49",
"w/o",
"S",
"70.81",
"71.67",
"68.68",
"64.67",
"w/o",
"A",
"70.03",
"71.47",
"68.18",
"64.06",
"Hybrid",
"w... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Table3-1.png | {
"x1": 53.50199890136719,
"x2": 294.04779052734375,
"y1": 186.98855590820312,
"y2": 203.9639892578125
} |
200 | 2 | 2 | Figure | {
"x1": 117,
"x2": 494.28,
"y1": 83.88,
"y2": 176.04
} | Figure 2: Steps of WHIN pre-training. (a) Workplace heterogeneous graph with metapath. (b) Subgraph sampling for mini-batch pre-training. (c) A pre-training model with encoder-decoder architecture using Link-level pre-training task. | [
"pos",
"path",
"pos",
"metapath",
"neg",
"path",
"neg",
"metapath",
"company",
"school",
"skill",
"job",
"member",
"(a)",
"(b)",
"(c)",
"…",
"loss",
"…",
"RGCN",
"MLP",
"layers"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure2-1.png | {
"x1": 53.79800033569336,
"x2": 558.2023315429688,
"y1": 189.54055786132812,
"y2": 206.5159912109375
} |
200 | 5 | 7 | Figure | {
"x1": 180,
"x2": 440.28,
"y1": 82.8,
"y2": 236.16
} | Figure 5: A case where professional connections improve performance on Person-Job Fit. CSAGNN can improve the performance of Person-Job Fit by filtering and aggregating information from professional networks. For privacy protection reasons, we have rewritten the statement in the example while ensuring that the semantic... | [
"Education",
"require",
"No",
"job–related",
"description",
"Rich",
"job–related",
"description",
"connect",
"Mathematics",
"Microsoft",
"Azure",
"…",
"…",
"connect",
"score:",
"0.6115",
"score:",
"0.3613",
"score:",
"0.3215",
"CSAGNN(ours)",
"CF-based",
"model(LightGCN... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure5-1.png | {
"x1": 53.79800033569336,
"x2": 558.2024536132812,
"y1": 248.77053833007812,
"y2": 276.70501708984375
} |
200 | 6 | 7 | Figure | {
"x1": 118.8,
"x2": 228.23999999999998,
"y1": 293.76,
"y2": 391.32
} | Figure 6: Visualization of WHIN pre-trained skill embeddings showing a clear distinction between Technology and Health-Related Skills. | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure6-1.png | {
"x1": 53.79800033569336,
"x2": 295.6476135253906,
"y1": 403.7155456542969,
"y2": 431.6499938964844
} |
200 | 3 | 3 | Figure | {
"x1": 116.64,
"x2": 489.96,
"y1": 82.8,
"y2": 203.04
} | Figure 3: Architecture of CSAGNN. When determining the match between job 𝑗0 and member𝑚0, the information from𝑚0’s professional connections, 𝑚1 and 𝑚2, is simultaneously acquired. The initial representations of both the member and job are formed by concatenating the WHIN pre-training embedding with the representat... | [
"Average",
"…",
"…",
"…",
"BERT",
"Pre-trained",
"Embeddings",
"ℎ!!",
"ℎ!!",
"×(𝑙",
"−",
"1)",
"ℎ!!",
"𝑦#!!,#!",
"ℎ!#",
"(%)",
"ℎ!\"",
"(%)",
"ℎ!!",
"(%)",
"ℎ!#",
"(#)",
"ℎ!\"",
"(#)",
"ℎ!!",
"(#)",
"ℎ!#",
"(%)",
"ℎ!!",
"(%)",
"𝑠#",
"𝑠\"",
"𝑠!",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00010_1762245724/figures/2401.00010-Figure3-1.png | {
"x1": 53.79800033569336,
"x2": 558.20458984375,
"y1": 216.61257934570312,
"y2": 277.42401123046875
} |
200 | 2 | 8 | Table | {
"x1": 174.95999999999998,
"x2": 420.12,
"y1": 267.84,
"y2": 366.12
} | Table 2: The result of the Hartree-Fock computation of HeH+ by the symbolic numeric method. We used four normalized right eigenvectors |φ1), ..., |φ4) of m_y^T that have real eigenvalues (which are given as Eig in the table) and computed (φi|m_j^T|φi) for j = x, y, e. The third and the fourth solutions give the ground sta... | [
"1",
"-1.114772",
"0.604062",
"-1.114772",
"-0.537546",
"2",
"1.114772",
"-0.604062",
"1.114772",
"-0.537546",
"3",
"-0.337484",
"-0.801308",
"-0.337484",
"-1.600455",
"4",
"0.337484",
"0.801308",
"0.337484",
"-1.600455",
"i",
"Eig",
"(φi|x|φi)",
"(φi|y|φi)",
"(φi|e|φ... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00019_1762245733/figures/2401.00019-Table2-1.png | {
"x1": 107.99967956542969,
"x2": 487.3741455078125,
"y1": 391.7254333496094,
"y2": 464.8599853515625
} |
200 | 3 | 11 | Table | {
"x1": 106.92,
"x2": 440.28,
"y1": 149.76,
"y2": 282.24
} | Table 3: The expectation values of the unitary operators (φi|exp(−√−1 A)|φi) for A = m_x^T, m_y^T, and m_e^T in the simple toy model. The table contains the result for two different solutions, distinguished by two different eigenvectors (φ1 and φ2), which correspond to the solutions for e = −1 and e = 1, respectively.... | [
"1",
"mx",
"0.707107",
"0.760245-0.649637j",
"0.760245-0.649637j",
"1",
"my",
"0.707107",
"0.760245-0.649637j",
"0.760245-0.649637j",
"1",
"me",
"-1.000000",
"0.540302+0.841471j",
"0.540302+0.841471j",
"2",
"mx",
"0.707107",
"0.760245-0.649637j",
"0.760245-0.649637j",
"2",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00019_1762245733/figures/2401.00019-Table3-1.png | {
"x1": 108,
"x2": 487.36444091796875,
"y1": 296.6854553222656,
"y2": 386.8599853515625
} |
200 | 4 | 11 | Table | {
"x1": 106.92,
"x2": 450,
"y1": 412.56,
"y2": 647.28
} | Table 4: The expectation values of the unitary operators (φi|exp(−√−1 A)|φi) for A = m_x^T, m_y^T, and m_e^T in the Hartree-Fock computation. For the computation of expectation values, we used four eigenvectors ({φi | i = 1, ..., 4}) of m_y^T, which have real eigenvalues. | [
"1",
"mx",
"0.604062",
"0.823035-0.567990j",
"0.823035-0.567990j",
"1",
"my",
"-1.114772",
"0.440383+0.897810j",
"0.440383+0.897810j",
"1",
"me",
"-0.537546",
"0.858968+0.512030j",
"0.858968+0.512030j",
"2",
"mx",
"-0.604062",
"0.823035+0.567990j",
"0.823035+0.567990j",
"2",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00019_1762245733/figures/2401.00019-Table4-1.png | {
"x1": 108,
"x2": 487.255126953125,
"y1": 661.12548828125,
"y2": 717.4600219726562
} |
200 | 1 | 7 | Table | {
"x1": 189.72,
"x2": 406.08,
"y1": 621.72,
"y2": 669.24
} | Table 1: The result of the Hartree-Fock computation of HeH+ by the standard self-consistent method with the STO-3G basis set, at the interatomic distance R = 1.4632. | [
"STO-3G",
"0.801918",
"0.336800",
"-1.597448",
"x",
"y",
"e"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00019_1762245733/figures/2401.00019-Table1-1.png | {
"x1": 107.99972534179688,
"x2": 487.2760925292969,
"y1": 694.60546875,
"y2": 734.0200805664062
} |
200 | 4 | 5 | Figure | {
"x1": 56.879999999999995,
"x2": 557.28,
"y1": 62.64,
"y2": 445.32
} | Fig. 4: We compare with state-of-the-art video pre-training methods on language-conditioned manipulation tasks in the LIBERO benchmark [27]. (a) Visualization of the LIBERO tasks separated into four suites, focusing on different aspects of the manipulation policies in spatial reasoning, object reasoning, task understan... | [
"put",
"the",
"bowl",
"on",
"top",
"of",
"the",
"cabinet",
"put",
"the",
"yellow",
"and",
"white",
"mug",
"in",
"the",
"microwave",
"and",
"close",
"it",
"pick",
"up",
"the",
"black",
"bowl",
"in",
"the",
"drawer",
"and",
"place",
"it",
"on",
"the",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure4-1.png | {
"x1": 48.95899963378906,
"x2": 563.0291137695312,
"y1": 457.01153564453125,
"y2": 510.8349609375
} |
200 | V | 14 | Table | {
"x1": 103.67999999999999,
"x2": 505.08,
"y1": 293.76,
"y2": 382.32
} | TABLE V: Average success rate on LIBERO benchmark. Our method performs consistently better than all the baselines across all suites. UniPi-Replan is only evaluated for a single seed due to the computation cost. | [
"UniPi-Replan",
"[5]",
"-",
"31.00",
"3.00",
"-",
"-",
"ATM",
"(Ours)",
"68.50±",
"1.78",
"68.00±",
"6.18",
"77.83±",
"0.82",
"39.33±",
"15.80",
"48.41±",
"2.09",
"VPT",
"[3]",
"37.83±",
"4.29",
"19.50±",
"0.82",
"3.33±",
"2.36",
"3.83±",
"1.65",
"-",
"Uni... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableV-1.png | {
"x1": 48.95899963378906,
"x2": 563.029052734375,
"y1": 266.3186340332031,
"y2": 284.2760925292969
} |
200 | VI | 14 | Table | {
"x1": 124.92,
"x2": 484.2,
"y1": 423.71999999999997,
"y2": 463.32
} | TABLE VI: Detailed results of Diffusion policy on LIBERO. The Diffusion policy can be further improved by our method, suggesting that our Any-point Trajectory Modeling framework is an important building block to apply to any policy model. | [
"Diffusion",
"Policy",
"67.67±",
"1.25",
"78.00±",
"2.45",
"35.00±",
"3.74",
"37.33±",
"2.05",
"33.85±",
"1.71",
"ATM",
"Diffusion",
"Policy",
"79.00±",
"3.74",
"81.00±",
"2.45",
"58.67±",
"4.64",
"44.00±",
"6.38",
"62.89±",
"1.10",
"Method",
"Libero-Spatial",
"... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableVI-1.png | {
"x1": 48.95899963378906,
"x2": 563.0291748046875,
"y1": 396.6045227050781,
"y2": 414.5619812011719
} |
200 | 11 | 14 | Figure | {
"x1": 66.24,
"x2": 551.16,
"y1": 59.76,
"y2": 186.12
} | Fig. 11: The attention maps of BC and Ours in the spatial transformer. We extract the attention weights between spatial CLS tokens and RGB tokens, highlighting the policy’s focus on specific spatial regions during decision-making. The heatmaps reveal our policy’s targeted attention on task-relevant areas, in contrast t... | [
"BC",
"Ours",
"put",
"the",
"wine",
"bottle",
"on",
"the",
"rack",
"put",
"the",
"bowl",
"on",
"top",
"of",
"the",
"cabinet",
"put",
"the",
"cream",
"cheese",
"in",
"the",
"bowl"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure11-1.png | {
"x1": 48.95899963378906,
"x2": 563.0291137695312,
"y1": 194.58755493164062,
"y2": 248.41009521484375
} |
200 | 5 | 6 | Figure | {
"x1": 49.68,
"x2": 563.04,
"y1": 54,
"y2": 237.23999999999998
} | Fig. 5: Real robot experiments on a dining table setup consisting of five tasks. The left figure shows our real-world setup and the tasks. The top right figure shows an example of the predicted particle trajectories and the policy execution, which closely follows the predicted trajectories. From the quantitative result... | [
"time",
"Policy",
"rollout:",
"Put",
"the",
"tomato",
"into",
"the",
"bowl",
"wrist",
"camera",
"base",
"camera",
"Task",
"1:",
"Squeeze",
"the",
"mustard",
"on",
"the",
"carrot",
"Task",
"2:",
"Put",
"the",
"carrot",
"into",
"the",
"basket",
"Task",
"3:",... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure5-1.png | {
"x1": 48.95899963378906,
"x2": 563.029296875,
"y1": 244.41555786132812,
"y2": 286.2840270996094
} |
200 | I | 6 | Table | {
"x1": 310.68,
"x2": 566.28,
"y1": 589.68,
"y2": 652.3199999999999
} | TABLE I: Average success rates of human-to-robot experiments. ATM trained with human videos significantly outperforms BC and ATM trained with only 10 robot videos, demonstrating the cross-embodiment capability of ATM. | [
"BC",
"\"",
"%",
"0%",
"10%",
"30%",
"ATM",
"\"",
"%",
"0%",
"0%",
"13%",
"ATM",
"\"",
"\"",
"63%",
"63%",
"60%",
"Method",
"Teleoperationdemos",
"Human",
"videos",
"fold",
"cloth",
"put",
"tomato",
"sweep",
"toys"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableI-1.png | {
"x1": 311.9729919433594,
"x2": 563.0306396484375,
"y1": 539.1205444335938,
"y2": 580.9889526367188
} |
200 | 6 | 6 | Figure | {
"x1": 313.2,
"x2": 562.3199999999999,
"y1": 310.68,
"y2": 468
} | Fig. 6: We implement ATM Diffusion Policy by adding the predicted future trajectories as additional conditioning and show consistent improvement over the base diffusion policies across the benchmark suites. | [
"Diffusion",
"Policy",
"ATM",
"Diffusion",
"Policy",
"(Ours)",
"80",
"70",
"60",
"50",
"40",
"30",
"20",
"10",
"Spatial",
"Object",
"Goal",
"Long",
"90",
"Overall0"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure6-1.png | {
"x1": 311.9729919433594,
"x2": 563.0305786132812,
"y1": 479.2445373535156,
"y2": 521.1119384765625
} |
200 | II | 9 | Table | {
"x1": 144,
"x2": 466.2,
"y1": 95.75999999999999,
"y2": 134.28
} | TABLE II: Ablation study on image masking of the track transformer, where “w/o image masking” represents that we do not mask out image patches during track transformer training and “w/ image masking” means we randomly mask 50% of the patches. We can see that masked image modeling in the track transformer improves the policy performance... | [
"w/o",
"image",
"masking",
"69.17±",
"6.38",
"65.00±",
"3.89",
"74.33±",
"3.66",
"30.83±",
"11.43",
"w/",
"image",
"masking",
"(default)",
"68.50±",
"1.78",
"68.00±",
"6.18",
"77.83±",
"0.82",
"39.33±",
"15.80",
"Image",
"Mask",
"Ratio",
"Spatial",
"Object",
"... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableII-1.png | {
"x1": 48.95899963378906,
"x2": 563.029296875,
"y1": 56.36655044555664,
"y2": 86.279052734375
} |
200 | III | 9 | Table | {
"x1": 145.79999999999998,
"x2": 463.32,
"y1": 175.68,
"y2": 234
} | TABLE III: Ablation study on the policy architecture. We explore the effect of the tracks fed into the policy in two positions: transformer input (early fusion) and MLP head (late fusion), as illustrated in Figure 3. | [
"\"",
"\"",
"68.50±",
"1.78",
"68.00±",
"6.18",
"77.83±",
"0.82",
"39.33±",
"15.80",
"\"",
"%",
"44.67±",
"1.84",
"56.67±",
"3.09",
"5.33±",
"0.24",
"22.33±",
"4.94",
"%",
"\"",
"65.50±",
"3.89",
"60.00±",
"1.47",
"72.83±",
"4.73",
"42.76±",
"14.62",
"earl... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableIII-1.png | {
"x1": 48.95899963378906,
"x2": 563.0289306640625,
"y1": 148.57052612304688,
"y2": 166.52801513671875
} |
200 | IV | 13 | Table | {
"x1": 322.92,
"x2": 549,
"y1": 178.92,
"y2": 227.16
} | TABLE IV: Computation cost and inference time for different methods on a V100 GPU. ATM performs trajectory generation instead of predicting high-dimensional frames, making it the most computationally efficient and feasible for closed-loop control. UniPi employs a video diffusion model for open-loop future goal generati... | [
"TFLOPS",
"per",
"generation",
"1.56",
"13.09",
"39.29",
"Time",
"per",
"generation",
"(s)",
"0.015",
"4.51",
"8.14",
"Computation",
"Close-Loop",
"Open-LoopATM",
"UniPi-Replan",
"UniPi"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableIV-1.png | {
"x1": 311.9729919433594,
"x2": 563.0305786132812,
"y1": 56.36655044555664,
"y2": 169.96612548828125
} |
200 | 15 | 17 | Figure | {
"x1": 56.16,
"x2": 557.28,
"y1": 84.96,
"y2": 646.1999999999999
} | Fig. 15: The visualizations of human demos and rollout videos of ATM policies trained with and without human data. We can see that ATM is able to take advantage of out-of-domain videos, i.e., human videos, to generate more precise tracks, resulting in better policy performance. | [
"use",
"the",
"broom",
"to",
"sweep",
"the",
"toys",
"into",
"the",
"dustpan",
"and",
"put",
"it",
"in",
"front",
"of",
"the",
"dustpan",
"ATM",
"trained",
"w/",
"human",
"videos",
"ATM",
"trained",
"w/o",
"human",
"videos",
"human",
"videos",
"put",
"th... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure15-1.png | {
"x1": 48.95899963378906,
"x2": 563.0291137695312,
"y1": 663.6385498046875,
"y2": 693.552001953125
} |
200 | 8 | 7 | Figure | {
"x1": 51.48,
"x2": 559.0799999999999,
"y1": 304.92,
"y2": 486
} | Fig. 8: Cross-morphology skill transfer for a pick-and-place task. Here, we collect 160 action-free videos of a Franka arm and 10 action-labeled demonstrations from a UR arm, with the final goal of learning a UR policy. We compare a vanilla BC baseline with ATM trained using three types of data: using only the 10 UR videos, ... | [
"UR5",
"policy",
"learning",
"UR5",
"Pick-Place",
"Can",
"ATM",
"–",
"Franka",
"onlyATM",
"–",
"UR",
"only",
"ATM",
"-",
"Franka",
"⟹",
"UR160",
"Franka",
"Videos",
"Franka",
"videos"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure8-1.png | {
"x1": 48.95899963378906,
"x2": 563.032958984375,
"y1": 495.1395263671875,
"y2": 560.9169921875
} |
200 | 7 | 7 | Figure | {
"x1": 53.64,
"x2": 553.3199999999999,
"y1": 59.04,
"y2": 231.12
} | Fig. 7: Learning robotic skills from human videos for three tasks. We collect 100 videos of a human performing the tasks directly and 10 teleoperation demonstration trajectories. Each row from the top to the bottom shows three snapshots from the human videos, ATM trained without the human videos, and ATM trained with t... | [
"step",
"0",
"step",
"1",
"step",
"2",
"step",
"0",
"step",
"1",
"step",
"2",
"step",
"0",
"step",
"1",
"step",
"2",
"step",
"0",
"step",
"1",
"step",
"2",
"step",
"0",
"step",
"1",
"step",
"2",
"step",
"0",
"step",
"1",
"step",
"2",
"step",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure7-1.png | {
"x1": 48.95899963378906,
"x2": 563.0292358398438,
"y1": 245.24954223632812,
"y2": 287.1180114746094
} |
200 | 2 | 3 | Figure | {
"x1": 52.919999999999995,
"x2": 555.84,
"y1": 59.04,
"y2": 282.24
} | Fig. 2: Overview of our framework. (a) In the first stage, given an action-free video dataset, we first sample 2D points on one video frame and track their trajectories throughout the video using a pre-trained tracker. We then train a track transformer to predict future point trajectories given the current image observ... | [
"Language",
"Instruction",
"Off-the-shelf",
"Tracker",
"Track",
"Transformer",
"Track-guided",
"Policy",
"𝜋",
"action",
"(b)",
"Stage",
"2:",
"Track-guided",
"Policy",
"Learning(a)",
"Stage",
"1:",
"Any-point",
"Trajectory",
"Modeling",
"Action-labeled",
"Demos",
".",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure2-1.png | {
"x1": 48.95899963378906,
"x2": 563.0291748046875,
"y1": 296.5075378417969,
"y2": 362.28594970703125
} |
200 | 14 | 16 | Figure | {
"x1": 68.39999999999999,
"x2": 544.3199999999999,
"y1": 57.96,
"y2": 409.32
} | Fig. 14: Attention maps of Track Transformer trained with and without human videos. Including large-scale human videos leads to much clearer attention maps, focusing on the object and robot arm, whereas the model trained without human videos attends to incorrect areas such as background walls. | [
"use",
"the",
"broom",
"to",
"sweep",
"the",
"toys",
"into",
"the",
"dustpan",
"and",
"put",
"it",
"in",
"front",
"of",
"the",
"dustpan",
"ATM",
"trained",
"w/",
"human",
"videos",
"ATM",
"trained",
"w/o",
"human",
"videos",
"human",
"videos",
"fold",
"t... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure14-1.png | {
"x1": 48.95899963378906,
"x2": 563.0291137695312,
"y1": 422.7765197753906,
"y2": 452.68896484375
} |
200 | 10 | 8 | Figure | {
"x1": 326.88,
"x2": 548.28,
"y1": 55.8,
"y2": 196.2
} | Fig. 10: We plot the success rates of the policies learned with predicted trajectories of different lengths. Generally, longer trajectory length improves the performance, but the benefit tends to plateau after 16. | [
"LIBERO-Goal",
"LIBERO-Long",
"LIBERO-Spatial",
"LIBERO-Object",
"(%",
")",
"at",
"e",
"es",
"s",
"R",
"Su",
"cc",
"80",
"70",
"60",
"50",
"40",
"30",
"20",
"10",
"4",
"8",
"16",
"32",
"64",
"track",
"length"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure10-1.png | {
"x1": 311.9729919433594,
"x2": 563.0305786132812,
"y1": 207.46353149414062,
"y2": 249.3310546875
} |
200 | 9 | 8 | Figure | {
"x1": 50.4,
"x2": 299.15999999999997,
"y1": 55.44,
"y2": 267.12
} | Fig. 9: Success rate of our policy trained with 4%, 10% and 20% action-labeled demos. Our policy trained with only 4% demos performs comparably to the BC baseline with 20% demos on LIBERO-Spatial, Object, and Goal, and even better on LIBERO-Spatial. When trained on 20% demos, our performance approaches BC with all training... | [
"BC",
"w/",
"20%",
"BC",
"w/",
"100%",
"ATM",
"40",
"LIBERO-Long",
"35",
"30",
"25",
"20",
"15",
"4",
"10",
"20",
"training",
"demos",
"(%)",
"LIBERO-Goal",
"40",
"45",
"50",
"55",
"60",
"65",
"70",
"75",
"Su",
"cc",
"es",
"s",
"R",
"at",
"e",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure9-1.png | {
"x1": 48.95899963378906,
"x2": 300.0165100097656,
"y1": 278.5915222167969,
"y2": 344.36993408203125
} |
200 | 3 | 4 | Figure | {
"x1": 79.92,
"x2": 264.24,
"y1": 54,
"y2": 217.79999999999998
} | Fig. 3: A visual illustration of the architecture of the track-guided policy. Given the current observation and the predicted tracks from the frozen pre-trained track transformer, we train a track-guided policy from a limited demonstration dataset. | [
"action",
"early",
"fusion",
"late",
"fusion",
"Transformer",
"MLP",
"predicted",
"track",
"tokens",
"image",
"patch",
"tokens",
"CLS",
"token"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure3-1.png | {
"x1": 48.95899963378906,
"x2": 300.0165710449219,
"y1": 234.43551635742188,
"y2": 276.30303955078125
} |
200 | VIII | 15 | Table | {
"x1": 346.68,
"x2": 494.28,
"y1": 75.96,
"y2": 214.2
} | TABLE VIII: Hyperparameters of policy training. | [
"augmentation",
"ColorJitter,RandomShift",
"track",
"length",
"16",
"frame",
"stack",
"10",
"point",
"sampling",
"grid",
"number",
"of",
"points",
"32",
"learning",
"rate",
"5e-4",
"weight",
"decay",
"1e-4",
"lr",
"scheduler",
"Cosine",
"lr",
"warm",
"up",
"0",
... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableVIII-1.png | {
"x1": 320.0450134277344,
"x2": 523.7103271484375,
"y1": 60.84952163696289,
"y2": 66.85198974609375
} |
200 | 13 | 15 | Figure | {
"x1": 319.68,
"x2": 555.12,
"y1": 241.2,
"y2": 331.2
} | Fig. 13: Given a video (left), we query 1000 randomly sampled points using an off-the-shelf TAP model (middle), where each colored dot represents the starting position of a track. We then filter the tracks using a heuristic of their position displacement across the video and re-sample around these points (right). We ca... | [
"“pick",
"up",
"the",
"milk",
"and",
"place",
"it",
"in",
"the",
"basket”",
"1.",
"random",
"tracking",
"2.",
"filter",
"&",
"retrackvideo"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure13-1.png | {
"x1": 311.9729919433594,
"x2": 563.0305786132812,
"y1": 339.8705139160156,
"y2": 429.55889892578125
} |
200 | VII | 15 | Table | {
"x1": 96.84,
"x2": 245.16,
"y1": 72,
"y2": 218.16
} | TABLE VII: Hyperparameters of track transformer training. | [
"image",
"mask",
"ratio",
"0.5",
"augmentation",
"ColorJitter,RandomShift",
"track",
"length",
"16",
"track",
"patch",
"size",
"4",
"point",
"sampling",
"variance",
"filtering",
"number",
"of",
"points",
"32",
"learning",
"rate",
"1e-4",
"weight",
"decay",
"1e-4",... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-TableVII-1.png | {
"x1": 49.60200119018555,
"x2": 295.07049560546875,
"y1": 56.36655044555664,
"y2": 62.3690185546875
} |
200 | 12 | 15 | Figure | {
"x1": 63.72,
"x2": 283.32,
"y1": 237.95999999999998,
"y2": 407.88
} | Fig. 12: To summarize spatial information, we perform self-attention on a sequence consisting of all views’ track and image patches and a CLS token. To integrate information across time, we perform causal self-attention between the spatial CLS token, proprioception, and an action CLS token per timestep. To regress actions, ... | [
"early",
"fusion",
"late",
"fusion",
"𝑎!",
"MLP",
"Head",
"action",
"cls",
"timetime",
"timespatial",
"state",
"joint,",
"gripper",
"state",
"Temporal",
"Transformer",
"Spatial",
"Transformer",
"wrist",
"view",
"predicted",
"tracks",
"base",
"view"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00025_1762245738/figures/2401.00025-Figure12-1.png | {
"x1": 48.95899963378906,
"x2": 300.0165710449219,
"y1": 419.21453857421875,
"y2": 496.94793701171875
} |
200 | 1 | 0 | Figure | {
"x1": 307.8,
"x2": 546.12,
"y1": 250.92,
"y2": 425.15999999999997
} | Figure 1. Performance comparison on the RealBlur-J [23] test dataset in terms of PSNR and GMACs. Our proposed MLWNet achieves superior performance compared with other state-of-the-art methods. | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure1-1.png | {
"x1": 308.86199951171875,
"x2": 545.1087646484375,
"y1": 428.3247375488281,
"y2": 455.64495849609375
} |
200 | 1 | 5 | Table | {
"x1": 49.68,
"x2": 288,
"y1": 267.84,
"y2": 511.2
} | Table 1. Quantitative evaluations on the RealBlur dataset [23]. The models were trained on the corresponding datasets, and average runtime is tested on 256×256 patches. | [
"MLWNet-S",
"33.02",
"0.933",
"-",
"-",
"0.04s",
"MLWNet-B",
"33.84",
"0.941",
"40.69",
"0.976",
"0.05s",
"DeblurGAN-v2",
"[12]",
"29.69",
"0.870",
"36.44",
"0.935",
"0.04s",
"SRN",
"[27]",
"31.38",
"0.909",
"38.65",
"0.965",
"0.07s",
"MPRNet",
"[37]",
"31.76"... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table1-1.png | {
"x1": 50.11199951171875,
"x2": 286.35870361328125,
"y1": 516.1867065429688,
"y2": 543.5069580078125
} |
200 | 2 | 5 | Table | {
"x1": 309.96,
"x2": 544.3199999999999,
"y1": 267.84,
"y2": 404.28
} | Table 2. Quantitative evaluations trained on the RSBlur dataset [24]; the RealBlur-J dataset was used for testing only. | [
"MLWNet-B",
"34.94",
"0.880",
"30.53",
"0.905",
"SRN",
"[27]",
"32.53",
"0.840",
"29.86",
"0.886",
"MIMO-Unet",
"[3]",
"32.73",
"0.846",
"29.53",
"0.876",
"MIMO-Unet+",
"[3]",
"33.37",
"0.856",
"29.99",
"0.889",
"MPRNet",
"[37]",
"33.61",
"0.861",
"30.46",
"0.... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table2-1.png | {
"x1": 308.86199951171875,
"x2": 545.1087646484375,
"y1": 408.59075927734375,
"y2": 424.9520263671875
} |
200 | 4 | 5 | Figure | {
"x1": 49.68,
"x2": 546.12,
"y1": 72,
"y2": 244.07999999999998
} | Figure 4. Visual comparisons on the RealBlur-J dataset [23]. The proposed method generates an image with clearer characters. | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure4-1.png | {
"x1": 70.08799743652344,
"x2": 525.1326904296875,
"y1": 247.72177124023438,
"y2": 253.1240234375
} |
200 | 3 | 5 | Table | {
"x1": 311.76,
"x2": 542.16,
"y1": 436.68,
"y2": 573.12
} | Table 3. Quantitative evaluation for generalizability shows the results of models trained on the RealBlur-J dataset and tested on the RSBlur dataset; MACs are measured on 256 × 256 patches. | [
"MLWNet-B",
"30.91",
"0.818",
"108.2",
"Method",
"[23]→",
"[24]",
"MACs(G)PSNR",
"SSIM",
"DeblurGAN-v2",
"[12]",
"30.15",
"0.766",
"42.0",
"MPRNet",
"[37]",
"29.56",
"0.785",
"760.8",
"MIMO-UNet+",
"[3]",
"29.69",
"0.792",
"154.4",
"BANet",
"[29]",
"30.19",
"0.8... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table3-1.png | {
"x1": 308.86199951171875,
"x2": 545.1088256835938,
"y1": 577.65771484375,
"y2": 604.97802734375
} |
200 | 4 | 6 | Table | {
"x1": 308.88,
"x2": 545.04,
"y1": 442.8,
"y2": 663.12
} | Table 4. Quantitative evaluations trained and tested on the GoPro dataset [20]. Our proposed MLWNet obtains competitive results with a combination of time efficiency and accuracy. | [
"MLWNet-B",
"33.83",
"0.968",
"0.05s",
"DeblurGAN-v2",
"[12]",
"29.55",
"0.934",
"0.04s",
"SRN",
"[27]",
"30.26",
"0.934",
"0.07s",
"DMPHN",
"[39]",
"31.20",
"0.945",
"0.21s",
"SDWNet",
"[42]",
"31.26",
"0.966",
"0.04s",
"MPRNet",
"[37]",
"32.66",
"0.959",
"0.... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table4-1.png | {
"x1": 308.86199951171875,
"x2": 545.1087646484375,
"y1": 667.521728515625,
"y2": 694.8419799804688
} |
200 | 5 | 6 | Figure | {
"x1": 49.68,
"x2": 546.12,
"y1": 70.92,
"y2": 249.12
} | Figure 5. Visual comparisons on the RSBlur dataset [24]. The deblurring performance of the proposed method in low light is impressive. The recovery of characters and texture structures far exceeds other advanced methods. | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure5-1.png | {
"x1": 50.11199951171875,
"x2": 545.1110229492188,
"y1": 252.10073852539062,
"y2": 268.46197509765625
} |
200 | 6 | 6 | Figure | {
"x1": 49.68,
"x2": 546.12,
"y1": 269.28,
"y2": 424.08
} | Figure 6. Visual comparisons on the GoPro dataset [20]. Our method better preserves texture information without sharpening. | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure6-1.png | {
"x1": 71.73300170898438,
"x2": 523.4960327148438,
"y1": 427.21875,
"y2": 432.6210021972656
} |
200 | 2 | 2 | Figure | {
"x1": 49.68,
"x2": 546.12,
"y1": 70.92,
"y2": 287.28
} | Figure 2. The overall architecture of the proposed MLWNet. The SEB is a simple module designed with reference to [2]; the WFB and WHB apply the LWN that implements the learnable 2D-DWT. In the training phase, supervised learning is performed using Lmulti and self-supervised restraint of the wavelet kernel is performed using Lw... | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure2-1.png | {
"x1": 50.11199951171875,
"x2": 545.1110229492188,
"y1": 292.7017517089844,
"y2": 321.083984375
} |
200 | 6 | 7 | Table | {
"x1": 49.68,
"x2": 295.2,
"y1": 425.88,
"y2": 509.03999999999996
} | Table 6. Ablation study on components of the proposed MLWNet. We set the baseline network to use SEB in its entirety, and models that do not use SIMO will represent single scales using SISO. | [
"✓",
"32.37",
"0.929",
"19.29",
"✓",
"✓",
"✓",
"32.40",
"0.928",
"28.21",
"✓",
"✓",
"✓",
"32.49",
"0.928",
"25.28",
"✓",
"✓",
"✓",
"32.57",
"0.929",
"22.22",
"✓",
"✓",
"✓",
"✓",
"32.62",
"0.931",
"28.21",
"SIMO",
"WFB",
"WHB",
"Lwavelet",
"PSNR",
"... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table6-1.png | {
"x1": 50.11199951171875,
"x2": 286.3587341308594,
"y1": 513.9717407226562,
"y2": 541.2919921875
} |
200 | 7 | 7 | Table | {
"x1": 310.68,
"x2": 543.24,
"y1": 324.71999999999997,
"y2": 372.24
} | Table 7. Performance comparison at different noise difference levels, where L3 contains the noise difference mean. | [
"GoPro",
"34.81",
"34.66",
"33.76",
"33.19",
"32.63",
"RealBlur-J",
"33.92",
"33.81",
"33.97",
"33.93",
"33.54",
"Noise",
"Level",
"L1",
"L2",
"L3",
"L4",
"L5"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table7-1.png | {
"x1": 308.86199951171875,
"x2": 545.1087036132812,
"y1": 376.8697509765625,
"y2": 393.23101806640625
} |
200 | 8 | 7 | Figure | {
"x1": 307.8,
"x2": 546.12,
"y1": 163.79999999999998,
"y2": 278.28
} | Figure 8. The difference between realistic blur (a) and synthetic blur (b). In the green box, the synthetic blur appears with color averaging resulting in high and low frequency confusion, and in the blue box has unnatural discontinuous trajectories. | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure8-1.png | {
"x1": 308.86199951171875,
"x2": 545.108642578125,
"y1": 281.5787353515625,
"y2": 319.85699462890625
} |
200 | 5 | 7 | Table | {
"x1": 49.68,
"x2": 288,
"y1": 374.76,
"y2": 411.12
} | Table 5. Comparison in various input and output modes. | [
"Method",
"SISO",
"MIMO",
"SIMO",
"PSNR/SSIM",
"32.29/0.924",
"32.19/0.928",
"32.37/0.929",
"MACs(G)",
"19.24",
"21.83",
"19.29"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Table5-1.png | {
"x1": 67.32099914550781,
"x2": 269.1546630859375,
"y1": 415.708740234375,
"y2": 421.1109924316406
} |
200 | 7 | 7 | Figure | {
"x1": 49.68,
"x2": 287.28,
"y1": 546.84,
"y2": 606.24
} | Figure 7. Feature maps representing high-and low-frequency components generated after learnable wavelet convolution. Zoom in on the screen for the best view. | [
"input"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure7-1.png | {
"x1": 50.11199951171875,
"x2": 286.358642578125,
"y1": 610.8157348632812,
"y2": 638.135986328125
} |
200 | 3 | 3 | Figure | {
"x1": 49.68,
"x2": 287.28,
"y1": 349.91999999999996,
"y2": 562.3199999999999
} | Figure 3. (a)The process of learnable 2D-wavelet convolution. (b)The construction process of the N ×N 2D-wavelet kernel. | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00027_1762245750/figures/2401.00027-Figure3-1.png | {
"x1": 50.11199951171875,
"x2": 286.3586730957031,
"y1": 569.6597290039062,
"y2": 586.02001953125
} |
200 | 1 | 0 | Figure | {
"x1": 329.4,
"x2": 511.2,
"y1": 252.72,
"y2": 428.03999999999996
} | Figure 1. Mean word accuracy vs Parameters on the 6 common test benchmarks. P-Ti, P-S and P-B refer to PARSeq-Ti, PARSeqS and PARSeq-B, respectively. * indicates training with REBUSyn. | [
"MaskOCR",
"SRN",
"ABINet",
"MAERec",
"parseq",
"parseq*",
"CLIP4STR",
"CLIP4STR*",
"TrOCR",
"ABINet",
"MAERec",
"SRN",
"MaskOCR-B",
"MaskOCR-L",
"TrOCR-L",
"TrOCR-B",
"CLIP4STR-L",
"CLIP4STR-B*",
"CLIP4STR-L*",
"CLIP4STR-B",
"P-S*",
"P-B*",
"P-S",
"P-B",
"P-Ti",
"[... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure1-1.png | {
"x1": 308.86199951171875,
"x2": 545.1087036132812,
"y1": 442.6107482910156,
"y2": 480.88995361328125
} |
200 | 4 | 5 | Figure | {
"x1": 325.44,
"x2": 511.2,
"y1": 319.68,
"y2": 442.44
} | Figure 4. Average word error rate on 6 common test benchmarks, with respect to images seen (batch size times number of steps) during PARSeq training stage of different sizes. | [
"parseq_l",
"parseq_b",
"parseq_s",
"5.0",
"5.5",
"Av",
"er",
"ag",
"e",
"Er",
"ro",
"r",
"R",
"at",
"e[",
"%",
"]",
"4.5",
"4.0",
"3.5",
"102",
"2×102",
"Images",
"Seen(M)"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure4-1.png | {
"x1": 308.86199951171875,
"x2": 545.1087646484375,
"y1": 457.1807556152344,
"y2": 484.5009765625
} |
200 | 3 | 5 | Figure | {
"x1": 75.24,
"x2": 242.28,
"y1": 318.24,
"y2": 435.24
} | Figure 3. The average word error rate on 6 common test benchmarks was calculated using the PARSeq model size. The solid line represents the fitted power law E(·), and the points on the dotted line correspond to the power law equation. | [
"E=",
"(6.316",
"⋅",
"10−74/N)0.018",
"]",
"at",
"e",
"[%",
"e",
"W",
"or",
"d",
"Er",
"ro",
"r",
"R",
"Av",
"er",
"ag",
"3.15",
"3.1",
"3.05",
"3.0",
"2.95",
"108",
"2.9"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure3-1.png | {
"x1": 50.11199951171875,
"x2": 286.35870361328125,
"y1": 447.01776123046875,
"y2": 485.2969970703125
} |
200 | 2 | 5 | Figure | {
"x1": 59.04,
"x2": 528.12,
"y1": 85.67999999999999,
"y2": 204.12
} | Figure 2. Improvement in TrOCR model performance with increasing model size, data volume, and training computation. Model performance is measured by calculating the average word error rate on 6 common test benchmarks Left: Evaluation of model performance with changing model sizes. Center: Evaluation of model performanc... | [
"(c)",
"Computation",
"(training",
"hours)",
"E=",
"(4.45",
"⋅",
"104/C)−0.764",
"3.75M",
"7.5M",
"15M",
"]",
"at",
"e",
"[%",
"e",
"W",
"or",
"d",
"Er",
"ro",
"r",
"R",
"Av",
"er",
"ag",
"6×101",
"4×101",
"3×101",
"2×101",
"1032×102",
"3×102",
"4×102"... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure2-1.png | {
"x1": 50.11199951171875,
"x2": 545.1143798828125,
"y1": 219.78976440429688,
"y2": 279.9869384765625
} |
200 | 1 | 10 | Figure | {
"x1": 123.83999999999999,
"x2": 471.24,
"y1": 72,
"y2": 649.0799999999999
} | Figure 1. Error analysis of the Union14M benchmark. We select three representative models and show their prediction results (Text in black represents correct prediction and red text vice versa). | [] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Figure1-1.png | {
"x1": 50.11199951171875,
"x2": 545.1109619140625,
"y1": 673.1716918945312,
"y2": 689.5330200195312
} |
200 | 10 | 6 | Table | {
"x1": 333.71999999999997,
"x2": 519.12,
"y1": 191.88,
"y2": 244.07999999999998
} | Table 10. Average accuracy achieved by using visual task pretraining and OCR task pre-training on 6 common test benchmarks. | [
"ImageNet-21k",
"R+E+B+U+Syn",
"ViT-L",
"96.74",
"R+E+B+U+Syn",
"R+E+B+U",
"ViT-S",
"96.96",
"Scratch",
"R+E+B+U+Syn",
"ViT-L",
"97.03",
"Pretrain",
"Dataset",
"Backbone",
"Word",
"Acc",
"Scratch",
"R+E+B+U",
"ViT-S",
"96.12",
"Scratch",
"R+E+B+U+Syn",
"ViT-S",
"96.85... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table10-1.png | {
"x1": 308.86199951171875,
"x2": 545.10888671875,
"y1": 256.0837707519531,
"y2": 272.44500732421875
} |
200 | 8 | 6 | Table | {
"x1": 81.72,
"x2": 255.23999999999998,
"y1": 371.88,
"y2": 407.15999999999997
} | Table 8. PARSeq-S average accuracy of integrating diverse synthetic and real data types. | [
"Real",
"DataSet",
"Syn",
"DataSet",
"Data",
"Ratio",
"Word",
"Acc",
"R+E+B+U",
"Syn",
"1:0.5",
"96.19",
"R+E+B+U",
"MJ+ST",
"1:2.5",
"96.24",
"R+E+B+U",
"MJ+ST+Syn",
"1:3",
"96.85"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table8-1.png | {
"x1": 50.11199951171875,
"x2": 286.3586730957031,
"y1": 418.74273681640625,
"y2": 435.1029968261719
} |
200 | 9 | 6 | Table | {
"x1": 115.92,
"x2": 220.32,
"y1": 580.68,
"y2": 650.16
} | Table 9. PARSeq-S average accuracy on 6 common test benchmarks with varying ratios of synthetic and real data. | [
"Data",
"Ratio",
"Word",
"Acc",
"Real:Syn=1:0.5",
"96.32",
"Real:Syn=1:1",
"96.50",
"Real:Syn=1:2",
"96.59",
"Real:Syn=1:3",
"96.85",
"Real:Syn=1:4",
"96.76",
"Real:Syn=1:5",
"95.70"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table9-1.png | {
"x1": 50.11199951171875,
"x2": 286.3587341308594,
"y1": 656.4867553710938,
"y2": 672.8480224609375
} |
200 | 4 | 9 | Table | {
"x1": 92.88,
"x2": 245.16,
"y1": 460.79999999999995,
"y2": 504
} | Table 4. Average accuracy using language-specific pretraining on benchmark test set, training model in real dataset of REB. | [
"Arabic",
"PARSeq",
"REB",
"95.62",
"Cn-En",
"PARSeq",
"REB",
"95.81",
"Latin",
"PARSeq",
"REB",
"95.82",
"Pretrain",
"Model",
"Datasets",
"Word",
"Acc",
"From",
"Scratch",
"PARSeq",
"REB",
"95.60"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table4-1.png | {
"x1": 50.11199951171875,
"x2": 286.358642578125,
"y1": 526.24072265625,
"y2": 542.6019897460938
} |
200 | 3 | 9 | Table | {
"x1": 54,
"x2": 541.0799999999999,
"y1": 72,
"y2": 429.12
} | Table 3. Word accuracy on Union14M benchmark, * indicates training with REBU-Syn. | [
"PARSeq-S*",
"REBU-Syn",
"85.2",
"89.4",
"94.0",
"88.0",
"93.1",
"89.9",
"89.8",
"89.9",
"CLIP4STR-B*",
"REBU-Syn",
"88.6",
"90.1",
"96.4",
"89.1",
"96.3",
"92.2",
"91.9",
"92.1",
"CLIP4STR-L*",
"REBU-Syn",
"88.6",
"90.4",
"96.4",
"89.3",
"97.2",
"90.7",
"92.7... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table3-1.png | {
"x1": 139.29400634765625,
"x2": 455.9335632324219,
"y1": 441.2557373046875,
"y2": 446.6579895019531
} |
200 | 6 | 13 | Table | {
"x1": 49.68,
"x2": 287.28,
"y1": 382.68,
"y2": 437.03999999999996
} | Table 6. Word accuracy with different model size of CLIP4STR. Test data: Union14M. | [
"Method",
"Param",
"(M)",
"Avg",
"PARSeq-S",
"22.5",
"89.89",
"PARSeq-B",
"104.0",
"90.37",
"PARSeq-L",
"335.9",
"90.81"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table6-1.png | {
"x1": 50.11199951171875,
"x2": 286.3587341308594,
"y1": 449.23675537109375,
"y2": 465.5980224609375
} |
200 | 7 | 13 | Table | {
"x1": 109.8,
"x2": 226.07999999999998,
"y1": 469.79999999999995,
"y2": 505.08
} | Table 7. Word accuracy with different model size of CLIP4STR. Test data: Union14M. | [
"Method",
"Param",
"(M)",
"Avg",
"CLIP4STR-S",
"43.6",
"91.90",
"CLIP4STR-B",
"86.7",
"92.08",
"CLIP4STR-L",
"268.2",
"92.19"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table7-1.png | {
"x1": 50.11199951171875,
"x2": 286.3587341308594,
"y1": 516.7887573242188,
"y2": 533.1500244140625
} |
200 | 8 | 13 | Table | {
"x1": 123.83999999999999,
"x2": 213.12,
"y1": 657,
"y2": 683.28
} | Table 8. Accuracy for CLIP4STR-L on FUNSD. | [
"Model",
"Word",
"Acc",
"CLIP4STR-L",
"96.02",
"CLIP4STR-L*",
"96.50"
] | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table8-1.png | {
"x1": 80.98600006103516,
"x2": 255.49005126953125,
"y1": 695.2807006835938,
"y2": 700.6829833984375
} |
200 | 1 | 2 | Table | {
"x1": 58.68,
"x2": 278.28,
"y1": 498.96,
"y2": 551.16
} | Table 1. Architecture specifications of TrOCR variants. | [
"Model",
"Encoder",
"FLOPs",
"(G)",
"Params",
"(M)layers",
"hidden",
"sizes",
"heads",
"TROCR-S",
"12",
"384",
"6",
"13.31",
"43.09",
"TROCR-B",
"12",
"768",
"12",
"62.01",
"281.87",
"TROCR-L",
"24",
"1024",
"16",
"191.00",
"505.50",
"TROCR-H",
"48",
"1200",... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table1-1.png | {
"x1": 68.98899841308594,
"x2": 267.4870910644531,
"y1": 559.5577392578125,
"y2": 564.9600219726562
} |
200 | 2 | 2 | Table | {
"x1": 316.8,
"x2": 537.12,
"y1": 200.88,
"y2": 253.07999999999998
} | Table 2. Architecture specifications of PARSeq variants. | [
"Model",
"Encoder",
"FLOPs",
"(G)",
"Params",
"(M)layers",
"hidden",
"sizes",
"heads",
"PARSeq-S",
"12",
"384",
"6",
"2.76",
"22.51",
"PARSeq-B",
"12",
"768",
"12",
"17.20",
"104.01",
"PARSeq-L",
"24",
"1024",
"16",
"49.90",
"335.92",
"PARSeq-H",
"32",
"1280... | /home/yz979/palmer_scratch/chengye/SciMolmo-LaTeX-Source-Processing-Toolkit/pdfparser/pdf2latex/2024_new/temp/figures_2401.00028_1762245767/figures/2401.00028-Table2-1.png | {
"x1": 326,
"x2": 527.97705078125,
"y1": 260.5127258300781,
"y2": 265.91497802734375
} |
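Each row's `regionBoundary` and `captionBoundary` fields store pixel coordinates of a bounding box at the row's `renderDpi`. A minimal sketch of working with these dicts (the helper name `box_size` is hypothetical, and the example values are copied from the first row above):

```python
def box_size(boundary):
    """Return (width, height) in pixels for a boundary dict
    with keys x1, x2, y1, y2, assuming x2 >= x1 and y2 >= y1."""
    return (boundary["x2"] - boundary["x1"],
            boundary["y2"] - boundary["y1"])

# regionBoundary of the first row above (2401.00027, Figure 6)
region = {"x1": 49.68, "x2": 546.12, "y1": 269.28, "y2": 424.08}
w, h = box_size(region)
print(round(w, 2), round(h, 2))  # 496.44 154.8
```

The same helper applies to `captionBoundary`, since both fields follow the identical four-key schema.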