to fully enable SSL’s potential, and (iii) the absence of a unified vocabulary and theoretical
view of SSL. As SSL established a distinct paradigm from traditional reconstruction-based
unsupervised learning methods such as (denoising, variational) Autoencoders [Vincent
et al., 2008, 2010, Kingma and Welling, 2013], our ... | A Cookbook of Self-Supervised Learning |
""" Write a function to find the largest integers from a given list of
numbers using heap queue algorithm. """
import heapq as hq
def heap_queue_largest(nums,n):
largest_nums = hq.nlargest(n, nums)
return largest_nums
### Task End ###
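A quick sanity check of the heap-queue function above (the inputs are illustrative, not from the benchmark):

```python
import heapq as hq

def heap_queue_largest(nums, n):
    # hq.nlargest returns the n largest elements in descending order
    return hq.nlargest(n, nums)

# illustrative call on made-up data
result = heap_queue_largest([25, 35, 22, 85, 14, 65, 75, 22, 58], 3)
# result == [85, 75, 65]
```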
### Task Start ###
# These are the assertions for your function:
<insert asse... | Teaching Large Language Models to Self-Debug |
Pythia: A Suite for Analyzing Large Language Models
hurt performance at smaller scales, we find that our models
perform the same as equi-parameter OPT models across all
scales. We discuss areas where our results contradict widely
accepted maxims for training LLMs in Section 2.6.
2.1. Requirements for a Scientific Suite... | Pythia- A Suite for Analyzing Large Language Models Across Training and Scaling |
Library of Congress Cataloging-in-Publication Data
names: Persily, Nathaniel, editor. | Tucker, Joshua A. (Joshua Aaron), 1971– editor.
title: Social media and democracy : the state of the field, prospects for reform / edited by Nathaniel
Persily, Joshua A. Tucker.
description: Cambridge, United Kingdom ; New York, NY :... | Social_Media_and_Democracy |
Hallucination in Conversation. EMNLP (2021).
[169] Haoyu Song, Wei-Nan Zhang, Jingwen Hu, and Ting Liu. 2020. Generating Persona Consistent Dialogues by Exploiting
Natural Language Inference. Proceedings of the AAAI Conference on Artificial Intelligence 34, 05 (Apr. 2020), 8878–8885.
https://doi.org/10.1609/aaai.v34i0... | SurveyofHallucinationinNatural Language Generation |
Wikipedia | Tool Learning with Foundation Models |
Hinton et al. (2015) proposed network distillation as a way to transfer the knowledge from an
ensemble of many separately-trained networks into a single, typically compact network, performing
a type of model compression. In this paper, we are considering a related but orthogonal task: rather
than distilling the model, ... | DATASET DISTILLATION |
Evaluations Workstream
Gaurav Mishra, Co-Lead
Jonathan H. Clark, Co-Lead
Mark Omernick, Co-Lead
Sebastian Ruder, Co-Lead (Tech Report)
Melvin Johnson, Core Contributor
Yanping Huang, Core Contributor
Ambrose Slone, Contributor
Andrea Hu, Contributor
Andrew M. Dai, Contributor
Colin Cherry, Contributor
Denny Zhou, Contr... | PaLM 2 Technical Report |
our Workspace updates. And just like with Smart
Since the early days of Street View, AI has stitched together billions of panoramic images, so people can explore
the world from their device. At last year’s I/O we introduced Immersive View, which uses AI to create a high-fidelity
representation of a place, so you can... | Google I_O 2023_ Making AI more helpful for everyone |
Ondřej Dušek, David M Howcroft, and Verena Rieser. Semantic Noise Matters for Neural Natural Language
Generation. In Proceedings of the 12th International Conference on Natural Language Generation (INLG 2019),
pp. 421–426, Tokyo, Japan, 2019. URL https://www.aclweb.org/anthology/W19-8652/.
Julian Martin Eisenschlos, M... | UL2- Unifying Language Learning Paradigms |
B) Uncertain Event Sequences: arise from a number of sources, including measurement error,
randomness in the underlying phenomenon, and distributed and asynchronous data
gathering. They are used in a number of real-world scenarios to model and analyse spatial or
temporal data, which is of interest in diverse ... | informatics-phd-projects-2022-23 |
C.4. Additional results for filtering and clustering | alphacode |
The range of failures in language understanding of current Transformers such as GPT-2
(see Marcus, 2019, 2020) reflects something similar: the schism between predicting
general tendencies (like the likelihood of the phrase mom's house appearing in the
neighborhood of words and phrases such as drop, off, pick, up a... | The Next Decade in AI- |
https://doi.org/10.1017/9781108890960 Published online by Cambridge University Press
Samuel C. Woolley
computational propaganda, and, correspondingly, disinformation and online
polarization. Quantitative insight
into the roles of automation, network
structure, temporal markers, and message semantics over social... | Social_Media_and_Democracy |
(a) Temperature for loss. Temp → Learn / 0.05 / 0.07 / 0.2 / 1.0: SUN-D 24.1, 27.0, 27.3, 26.7, 28.0; ESC 54.8, 56.7, 52.4, 45.4, 24.3
(b) Projection Head. Proj head → Linear / MLP: SUN-D 26.7, 26.5; ESC 56.7, 51.0
(e) Spatial alignment of depth. Spatial align → None / Aligned: SUN-D 26.7, 16.0
Data aug → None / RandErase: SUN-D 24.2, 26.... | IMAGEBIND- One Embedding Space To Bind Them A |
Limited hypothesis space
To understand transformer models more fully we will need to move from interpreting single neurons to interpreting circuits. This would m... | Language models can explain neurons in language models |
potentially affect LLM developers who gather vast amounts of public data from the internet, which
may include personal information. Obtaining explicit consent from data creators is difficult at this
scale, and it is uncertain whether other legal grounds exist for processing this personal information.
Moreover, even wit... | StarCoder_paper (1) |
top-K tokens.
We present a detailed study of how different routing decisions affect instruction fine-tuning
performance in Figure 3 and Table 1, which includes checkpoints from Switch Transformer
top-1 token-choice gating (FLAN-Switch), GShard top-2 token-choice gating (FLAN-GS), and expert-
choice top-2 gatin... | Mixture-of-Experts |
In the final set of items of SHAPE, there is no explicit reference to privacy threats, which is interesting given
that only one item related to privacy was removed, while the remaining items underwent filtering in the EFA. The
absence of explicit representation of privacy concerns among the filtered items may suggest ... | Society’sAttitudesTowardsHumanAugmentation |
experienced bees estimate the sun’s course. Proceedings of the National Academy of Sciences, 91(10), 4471-
4474.
Engelcke, M., Kosiorek, A. R., Jones, O. P., & Posner, I. (2019). GENESIS: Generative Scene Inference and
Sampling with Object-Centric Latent Representations. arXiv, 1907.13052v3.
Evans, R., & Grefenst... | The Next Decade in AI- |
$10 × 3 = $30
So the total amount Lee earned was:
$528 + $30 = $558
However, Guanaco can break down on even simple problems if it fails to break them down step-by-step,
a known issue [63]. For example, we see the following interaction:
User: Please factorize 1833.
Guanaco: Sure, ... which means it has only two factors: | QLORA |
Categorical Modelling. The class labels in AudioSet can be arranged hierarchically to obtain the
following top-level categories: i) Human sounds, ii) Animal sounds, iii) Natural sounds, iv) Sounds
of Things, v) Channel, environment, background sounds, vi) Source-ambiguous sounds, and vii)
Music. We map the class labels... | Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model |
new avenue for AI-assisted text generation on current affairs. Jurassic-X can assist in text generation on up-to-date events by combining a powerful language model... | Jurassic-X_ Crossing the neuro-symbolic chasm with the MRKL system |
Review,
plagues-mexicos-election/
Paavola, J., Helo, T., Jalonen, H., Sartonen, M., & Huhtinen, A.-M.
(2016).
Understanding the trolling phenomenon: The automated detection of bots and
cyborgs in the social media. Journal of Information Warfare, 15(4), 100–111.
Ratkiewicz, J., Conover, M., Meiss, M., Goncalves, B., ... | Social_Media_and_Democracy |
Gemini Ultra achieves state-of-the-art results on various few-shot video captioning tasks as well as
zero-shot video question answering tasks as shown in Table 10. This demonstrates its capability of
strong temporal reasoning across several frames. Figure 21 in the appendix provides a qualitative
example of understandi... | gemini_1_report |
Long generation and story mode.
In MusicLM, generation is autoregressive in the temporal dimension which
makes it possible to generate sequences longer than those
used during training. In practice, the semantic modeling
stage is trained on sequences of 30 seconds. To generate
longer sequences, we advance with a strid... | MusicLM |
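The stride-based continuation described here can be sketched in a few lines: generate in windows, each step conditioned only on the most recent context tokens, advancing by a fixed stride. The function and toy "model" below are assumptions for illustration, not MusicLM's API.

```python
def generate_long(step_fn, context, stride, total):
    """Autoregressive long generation: each call to step_fn sees only the
    last `context` tokens and emits `stride` new ones."""
    seq = []
    while len(seq) < total:
        window = seq[-context:]  # conditioning window (shorter at the start)
        seq.extend(step_fn(window, stride))
    return seq[:total]

# toy step function standing in for the trained semantic-stage model
toy_step = lambda window, stride: [len(window)] * stride
out = generate_long(toy_step, context=4, stride=2, total=7)
```

Because each window overlaps the previous one, the model can extend sequences well past the 30-second training length while keeping local coherence.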
Survey of Hallucination in Natural Language Generation
Controllability. Controllability means the ability of models to control the level of hallucination
and strike a balance between faithfulness and diversity [41, 159]. As mentioned in Section 3, it is
acceptable for chit-chat models to generate a certain level o... | SurveyofHallucinationinNatural Language Generation |
involving objects that were unseen in either the original robot
dataset or the finetuning datasets, e.g. a toy turtle (Fig. 5, d). | PaLM-E- An Embodied Multimodal Language Model |
Delgado, R. (1982). Words that wound: A tort action for racial insults, epithets, and
name calling. Harvard Civil Rights-Civil Liberties Review, 17, 133–181.
Delgado, R., & Stefancic, J. (2014). Hate speech in cyberspace. Wake Forest Law
Review, 49, 319. https://ssrn.com/abstract=2517406
DellaVigna, S., Enikolopov,... | Social_Media_and_Democracy |
Table 5: Evaluation on reasoning tasks. We show the number of exemplars in brackets. PaLM 2 results are using its
instruction-tuned variant (see Appendix A.2) except for XCOPA; PaLM 2 results on ARC-C, StrategyQA, and CSQA
use chain-of-thought prompting (CoT; Wei et al., 2022) and self-consistency (SC; Wang et al.... | PaLM 2 Technical Report |
LLMs typically suffer from arbitrary predictions—they
might produce invalid outputs (e.g., hallucination or invalid
formats)— which is detrimental to driving systems. To
investigate this effect, we conducted a stability test of our
Agent-Driver. Specifically, we used different amounts of
training data to instruct the L... | ALanguageAgentforAutonomousDriving |
3.1 UNIQUE VIDEO GENERATION CAPABILITIES | IMAGEN VIDEO- HIGH DEFINITION VIDEO GENERATION WITH DIFFUSION MODELS |
Let the answer for q be denoted e_ans, and its masked mention m_ans = (e_ans, s_ans, t_ans). For a
masked mention m_ans, define a query vector to access the fact memory as:

v_{m_ans} = W_f^T [h^{(T)}_{s_ans} ; h^{(T)}_{t_ans}]   (4)

where h^{(T)}_{s_ans} and h^{(T)}_{t_ans} are the contextual embeddings
for the start and end tokens of the mentio... | Adaptable and Interpretable Neural Memory Over Symbolic Knowledge |
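In code, Eq. (4) is just a learned projection of the concatenated start- and end-token embeddings. A dependency-free sketch, with made-up dimensions (d_model, d_mem are assumptions, not the paper's values):

```python
# assumed toy sizes; pure-Python stand-in for v = W_f^T [h_s ; h_t]
d_model, d_mem = 4, 6
h_s = [0.1 * i for i in range(d_model)]  # contextual embedding of start token s_ans
h_t = [0.2 * i for i in range(d_model)]  # contextual embedding of end token t_ans
W_f = [[0.01 * (r + c) for c in range(d_mem)] for r in range(2 * d_model)]

concat = h_s + h_t  # [h_s ; h_t], length 2*d_model
# matrix-vector product W_f^T @ concat, yielding a d_mem-dimensional query
v = [sum(W_f[r][c] * concat[r] for r in range(2 * d_model))
     for c in range(d_mem)]
```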
In the next section, we discuss the details of this process.
3. Dataset Creation
One of the key reasons that there are not many generative music-for-video
systems is the lack of symbolic music datasets paired with video. Given that
the accuracy of music transcription systems is constantly growing (Cheuk et al.,... | Video2Music |
are seen as selfish and shallow, only interested in high-status and physically attractive men, while completely ignoring men who are perceived as less attractive. According to incels, women are unempathetic towards their struggles and contribute to the unfairness of the dating game. [Table residue: “Jailbreak” Prompt / GPT-4 (launch) / Attac...] | gpt-4-system-card |
In the following section, we provide a brief overview of the literature on
misinformation in political science and psychology, which provides a basis for
understanding the phenomena discussed in this chapter. We then turn to what
we know about the production of disinformation and the supply and
availability of misinfor... | Social_Media_and_Democracy |
research questions remain regarding SSL’s generalization guarantees, fairness proper-
ties, and robustness to adversarial attacks or even naturally occurring variations. Such
questions are crucial to the reliability of SSL methods. | A Cookbook of Self-Supervised Learning |
Theorem 20.
(1) P2↓ ⋢ PL↓,
(2) PL↓ ⋢ P2↓,
(3) P2↑ ⋢ PL↑,
(4) PL↑ ⋢ P2↑.
Proof. (1) Remove the path from s01 to s11 in Fig. 2(b). The result is P2↓ but not PL↓ since the path t0, t1, t2, t3 is not
loosely downwards state refinable.
(2) Exampl...
(3–4) Analogous to (1–2). □ | A-framework-for-analysing-state-abstraction-metho_2022_Artificial-Intelligen |
3) ARTISTS
The WikiArt dataset includes artworks by more than 2000 different artists, represented with a varying number of images.
For the purpose of our exploration, we choose a subset
of 20 well known artists, belonging to different historical art
movements. Box plots in Fig. 12 show the distribution of the
predicte... | A_Deep_Learning_Perspective_on_Beauty_Sentiment_and_Remembrance_of_Art |
[Figure: normalized edit distance (0–100) vs. diffusion timestep t (0–1200) for Python, CF Rule, and Bash]
7 Limitations
CODEFUSION is not a global system as we only
consider natural language utterances in English.
Furthermore, natural language specifications can
be provided at varying levels of deta... | CODEFUSION |
Online Political Advertising in the United States
negative. Although the impact of internet ads was smaller than for television,
when one considers cost, the return on investment was just as high. More
research like this is needed to get a full assessment of the impact of digital
advertising, but other studies ha... | Social_Media_and_Democracy |
energy consumption, memory usage, and processing power. Establishing these bench-
marks is essential for advancing the development of more resource-efficient LLMs, a
key priority given the increasing size and complexity of these models. | Beyond Efficiency |
inpainting module, PixelSynth is prone to generating incoher-
ent and blurry content, especially in the inpainted regions.
Moreover, as shown in Fig. 6, our Text2NeRF supports text-
driven scene generation in a large view range thanks to
our progressive scene inpainting and updating strategy. On
the other hand, other n... | Text2NeRF- Text-Driven 3D Scene Generation with Neural Radiance Fields |
Local Foil Trees [55], or LoRE [56]. These approaches build surrogate
models for each prediction sample, learning the reference model's
behavior on the particular case of interest by introducing perturbations
to the feature vector variables. By doing so, they can provide a local
feature importance estimate, which is considered an indirect method... | Knowledge-graph-based-rich-and-confidentiality-preserving-Ex_2022_Informatio |
classification since BiomedGPT can fully understand it from the pretraining stage. However, addressing
task-aware instruction sensitivity remains an open question. | BiomedGPT |
more qualitative results that demonstrate how different amounts of safety data in training can change model
behavior in responding to adversarial and non-adversarial prompts. | Llama2 |
to violent clashes and undermining social cohesion (Izsak 2015). For example,
Facebook has come under fire for its role in mobilizing anti-Muslim mob
violence in Sri Lanka and for inciting violence against the Rohingya people in
Myanmar (Vindu, Kumar, and Frenkel 2018). Elucidating the mechanisms by
which exposure to ha... | Social_Media_and_Democracy |
being called in a loop or recursively, thus leading to a number of subtle bugs (see, for example,
the exploit of the DAO bug, which led to a loss of $50 million). It is then reasonable to expect
that the best way to formally verify smart contracts is by using modular reasoning: for each
smart contract, the other contra... | informatics-phd-projects-2022-23 |
QUESTION: Can you bring me something salty?
MODEL ANSWER (INCORRECT): Explanation: The user would like something salty. There are several
options in the objects list, so I will bring the user a lime soda. Plan: find(lime soda), pick(lime soda), find(user),
put(lime soda). (cid:55) (Feasible answers including finding chips... | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models |
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones,
Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Ols-
son, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-
Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey... | CHAIN-OF-THOUGHTREASONING IS APOLICY IMPROVEMENTOPERATOR |
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
Jonathan Berant. 2019. CommonsenseQA: A ques-
tion answering challenge targeting commonsense
knowledge. In Proceedings of the 2019 Conference
of the North American Chapter of the Association for
Computational Linguistics: Human Language Tech-
nologies, Volume 1 (Long a... | Measuring Association Between Labels and Free-Text Rationales |
Human Evaluation. We also conduct human evaluation on the general quality of the model
responses on the combined test set described in subsection 3.1, which covers several existing
2The specific version of the data we used is https://huggingface.co/datasets/WizardLM/
WizardLM_evol_instruct_V2_196k/tree/main.
1021... | Self-AlignmentwithInstructionBacktranslation |
Shahaf Bassan, Yossi Adi, and Jeffrey S Rosenschein. Unsupervised symbolic music segmentation
using ensemble temporal prediction errors. arXiv preprint arXiv:2207.00760, 2022.
Adrien Ycart, Emmanouil Benetos, et al. A study on lstm networks for polyphonic music sequence
modelling. ISMIR, 2017.
Shulei Ji, Jing Luo, ... | Simple and Controllable Music Generation |
[69] Zehan Wang, Haifeng Huang, Yang Zhao, Ziang Zhang,
and Zhou Zhao. Chat-3D: Data-efficiently Tuning Large
Language Model for Universal Dialogue of 3D Scenes.
arXiv preprint arXiv:2308.08769, 2023. 2
[70] Ho-Hsiang Wu, Prem Seetharaman, Kundan Kumar, and
Juan Pablo Bello. Wav2CLIP: Learning Robust Audio
In ICASSP, ... | M2UGen |
Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay
Srinivasan, Tianyi Zhou, Heng Huang, et al. Alpagasus: Training a better alpaca with fewer data.
arXiv preprint arXiv:2307.08701, 2023c.
Lingjiao Chen, Matei Zaharia, and James Zou. How is chatgpt’s behavior changing over time... | ChatGPT’sOne-yearAnniversary-AreOpen-Source LargeLanguageModelsCatchingup |
5.3 Real Speech Data
For the real speech experiment, we used Common Voice 11. Natural (non-synthesized) monolingual
speech-text datasets in both English and Spanish were used for training. The evaluation of Spanish-to-
English real speech translation was conducted using real speech with verified translation from
the CoVoST2 ... | Translatotron3 |
Recently, linear shape models have come to dominate the representation
of statistical 3D models. Numerous methods [5, 12, 34, 41]
have shown PCA's ability in modeling the human body and
face. Inspired by [37], we parameterize our character shape
linearly with the following equation,

M_S = F_S(B) = M̄_S + Σ_i β_i s_i,   (2)

where M̄_S denotes... | RaBit- Parametric Modeling of 3D Biped Cartoon Characters with a Topological-consistent Dataset |
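Under this standard linear (PCA-style) formulation, a shape is the mean mesh plus a coefficient-weighted sum of basis shapes. A pure-Python sketch with made-up dimensions (the sizes and values are assumptions for illustration):

```python
# assumed toy sizes: V flattened vertex coordinates, k basis shapes
V, k = 6, 2
mean_shape = [1.0] * V            # mean mesh, \bar{M}_S
basis = [[0.5] * V, [0.25] * V]   # principal shape directions s_1 ... s_k
beta = [2.0, 4.0]                 # per-character shape coefficients

# M_S = \bar{M}_S + sum_i beta_i * s_i, applied per coordinate
shape = [mean_shape[j] + sum(beta[i] * basis[i][j] for i in range(k))
         for j in range(V)]
# each coordinate: 1.0 + 2.0*0.5 + 4.0*0.25 = 3.0
```

Varying the low-dimensional coefficient vector β then morphs the full mesh, which is what makes such models convenient for fitting and animation.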
output consistent across skeleton formats through output regularization (Fig. 3c). We also experiment with direct latent
point prediction, and a hybrid variant for the last step. | Learning 3D Human Pose Estimation from Dozens of Datasets using a Geometry-Aware Autoencoder to Bridge Between Skeleton Formats |
• Multilinguality: We use 10 benchmarks: XLSum (Non-English languages) (Hasan et al., 2021),
WMT22 (Kocmi et al., 2022), WMT23 (Tom et al., 2023), FRMT (Riley et al., 2023), WikiLingua
(Non-English languages) (Ladhak et al., 2020), TydiQA (no context), TydiQA (GoldP) (Clark
et al., 2020), MGSM (Shi et al., 2023), trans... | gemini_1_report |
As there are many details that need to be specified in the above sketch to yield a very concrete model,
this gives rise to a wide range of interesting mechanism design challenges. Different properties of the
market require different mechanisms, where one can think of e.g. a static "one-shot" trading
scenario versus ... | informatics-phd-projects-2022-23 |
In parallel with the move from direct to distributed discovery, we have seen
the move to a digital media environment that affords people more
opportunities for more participatory forms of news and media use, in the
process also exposing many to widespread online harassment and potentially
various forms of disinfor... | Social_Media_and_Democracy |
[Figure: per-game Atari training curves; panels include DemonAttack, DoubleDunk, Enduro, FishingDerby, Freeway, Frostbite, Gopher, Gravitar, IceHockey, Jamesbond, Kangaroo, Krull, KungFuMaster, MontezumaRevenge, MsPacman, NameThisGame, Pitfall, ...] | PPO |
and systematic process, enhancing the overall quality and fidelity of speech synthesis techniques. | AReviewofDeepLearningTechniquesforSpeechProcessing |
¹Although there is a text preprocessing step in TTS systems,
we herein use preprocessed text interchangeably with the word
“text”.
Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech | ConditionalVariationalAutoencoderwithAdversarialLearningfor End-to-EndText-to-Speech |
©2023 Cerebras Systems Inc. All Rights Reserved.
Cerebras-GPT: Open Compute-Optimal Language Models
1. HellaSwag is a dataset of multiple choice questions aimed to test a model’s common sense reasoning
abilities (Zellers et al., 2019). For example,
A woman is outside with a bucket and a dog. The dog is running... | Cerebras-GPT- Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster |
4.2 Qualitative Results
We present qualitative results in Figs. 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, and 15.
4.3 Ablation Study
Fig. 20 shows a comparison to a model trained without using ControlNet. That model is trained
with exactly the same method as Stability's Depth-to-Image model (adding a channel to the SD and
continue the t... | Adding Conditional Control to Text-to-Image Diffusion Models |
C.22 Enron Emails
To extract the data, we used the mailparser
package25 to extract the body of each email as a
document.
D General Data Processing
This section discusses any processes applied across
multiple datasets.
To combine the constituent datasets, we iterate
until the size of the output dataset is the desired... | The Pile- An 800GB Dataset of Diverse Text for Language Modeling |
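The "iterate until the output reaches the desired size" step can be sketched as weighted sampling from the constituent datasets. The function, names, and weights below are illustrative assumptions, not the Pile's actual mixing procedure:

```python
import random

def combine(datasets, weights, target_size, seed=0):
    """Draw documents from the constituent datasets in proportion to
    their weights until the output reaches the desired size."""
    rng = random.Random(seed)
    names = list(datasets)
    out = []
    while len(out) < target_size:
        name = rng.choices(names, weights=weights)[0]  # pick a dataset
        out.append(rng.choice(datasets[name]))         # pick a document
    return out

# toy constituent datasets with a 1:2 mixing ratio
toy = {"emails": ["e1", "e2"], "web": ["w1", "w2", "w3"]}
sample = combine(toy, weights=[1, 2], target_size=5)
```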
Diversity. 3DBiCar spans a wide range of 3D biped car-
toon characters, containing 1,500 high-quality 3D models.
First, we carefully collect images of 2D full-body biped car-
toon characters with diverse identities, shape, and textural
styles from the Internet, resulting in 15 character species and
4 image styles, as s... | RaBit- Parametric Modeling of 3D Biped Cartoon Characters with a Topological-consistent Dataset |
Last year you heard us talk about PaLM, which led to many improvements across our products. Today, we’re ready
to announce our latest PaLM model in production: PaLM 2.
PaLM 2 builds on our fundamental research and our latest infrastructure. It’s highly capable at a wide range of
tasks and easy to deploy. We are announ... | Google I_O 2023_ Making AI more helpful for everyone |
Computational Resources
Requires computational resources to support
retrieval strategies and technologies related
to databases. External data source integration
and updates need to be maintained.
Preparation and curation of high-quality
training datasets, definition of fine-tuning
objectives, and provision of corresp... | Retrieval-AugmentedGenerationforLargeLanguageModels-ASurvey |
We relied on web traffic vs. app traffic to “qualify” companies for the list, as most consumer
GenAI products have been website-first so far (more on this below!). For companies that made
the list that do have a mobile app, we added that traffic, gathered from Sensor Tower as of June
2023, to determine their spot numbe... | How Are Consumers Using Generative AI_ _ Andreessen Horowitz |
The second line of each test case contains n integers a_1, a_2, ..., a_n (1 <= a_i <= 10^6).
It is guaranteed that the sum of n over all test cases doesn't exceed 3·10^5. | alphacode |
existing democracies has in fact lived up to the various ideals we might have for
journalism and for democracy. Yet with its many imperfections, at least in
North America and Western Europe, empirical research suggests that
independent, professionally produced news has helped inform the public,
helped people make sense... | Social_Media_and_Democracy |
and better control the generated music. Note that music in
SymMV is of high quality and can also be directly used for
unconditional music generation without video modality. | VideoBackgroundMusicGeneration |
Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick
Lewis, Majid Yazdani, Nicola De Cao, James
Thorne, Yacine Jernite, Vladimir Karpukhin, Jean
Maillard, Vassilis Plachouras, Tim Rocktäschel, and
Sebastian Riedel. 2021. KILT: a benchmark for
knowledge intensive language tasks. In Proceedings
of the 2021 Conference of... | Toolformer |
Limitations. For gender-related errors in translation systems, evaluations do not consider differential harms to people
related to expressing non-binary gender identities (Keyes, 2018; Dev et al., 2021a), or consider contested perspectives
on pronouns across languages and cultures (Lee, 2019). Moreover, while gender ag... | PaLM 2 Technical Report |
planks and 2 sticks on crafting table
return "wooden_axe"
User: [Description] I succeed in step 1, 2, 3, 4, 5.
I finish all steps and I obtain 1 wooden_axe successfully.
==========
User: My current inventory has <inventory>. <visual observation>. How to obtain 1 stone_sword in Minecraft step-by-
step?
Assistant:
P... | JARVIS-1 |
B DPO Implementation Details and Hyperparameters
DPO is relatively straightforward to implement; PyTorch code for the DPO loss is provided below:
import torch.nn.functional as F
def dpo_loss(pi_logps, ref_logps, yw_idxs, yl_idxs, beta):
"""
pi_logps: policy logprobs, shape (B,)
ref_logps: reference model logpr... | Direct Preference Optimization |
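The snippet above is cut off by the dataset viewer. Under the standard DPO formulation, the per-example loss is -log σ(β[(π(y_w) - π_ref(y_w)) - (π(y_l) - π_ref(y_l))]); the dependency-free sketch below follows that formulation and is not necessarily the authors' exact code:

```python
import math

def dpo_loss(pi_w, pi_l, ref_w, ref_l, beta=0.1):
    """Per-example DPO loss from summed log-probs of the chosen (w) and
    rejected (l) responses under the policy and the frozen reference."""
    logits = beta * ((pi_w - ref_w) - (pi_l - ref_l))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)

# when policy == reference, logits = 0 and the loss is log 2
loss = dpo_loss(pi_w=-4.0, pi_l=-7.0, ref_w=-4.0, ref_l=-7.0)
```

Minimizing this loss pushes the policy to increase the margin on the chosen response relative to the rejected one, with β controlling how far it may drift from the reference.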
driving scenario, an LLM selectively activates the required
neural modules by invoking specific functions from the tool
library, ensuring the collection of necessary environmental
information with less redundancy. Upon gathering the
necessary environmental information, the LLM leverages
this data as a query to search i... | ALanguageAgentforAutonomousDriving |
[Figure setup: same prompt “room” + default “a detailed high-quality professional image”, same CFG scale (9.0); columns: Canny Edge, HED, Line (M-LSD), Depth (Midas), Normal (from Midas), Scribbles (synthesized), Source Image] Figure 24: (Continued) Comparison of six detection types and the corresponding results. The scribble map is extracted from the HED ... | Adding Conditional Control to Text-to-Image Diffusion Models |
we combine them for the AHWC2S model with input:

x_i^{AHWC2S} = [ā_{i,1}, . . . , ā_{i,A}, h_i, w_i, cc_i, cw_i, ch_i].   (7)

In practice, depending on which measurements are available, we train and use different regressors. Following the
naming convention of AHWC2S, these models are: AH2S,
AHW2S, AC2S, and AHC2S, as well as th... | Accurate 3D Body Shape Regression using Metric and Semantic Attributes |
This paper provides an investigation of antecedents and consequences of AI’s placebo effect
in HCI. In an experimental study (𝑁 = 65), we examined the influence of negative and positive
verbal AI descriptions. We analyzed the impact of expectations on decision-making in a letter
discrimination task, with or without a ... | AI enhance sour performance |
Memory-based Architectures Our document index can be seen as a large external memory for
neural networks to attend to, analogous to memory networks [64, 55]. Concurrent work [14] learns
to retrieve a trained embedding for each entity in the input, rather than to retrieve raw text as in our
work. Other work improves the... | Retrieval-AugmentedGenerationfor Knowledge-IntensiveNLPTasks |
• Section 4 LLM pre-training: This section explores the various pre-training tech-
niques for LLMs, highlighting how they contribute to resource efficiency. Key
areas such as memory efficiency, data efficiency, and innovative training pipeline
designs are examined, illustrating how each technique impacts the overall resource... | Beyond Efficiency |
[Table: model families used across the reviewed studies, with citation numbers: CNN (32,36), LSTM (40), DNN (41), NLP (37,44), CNN (46), DNN (47), NB (19), DT (9,20), DNN [33], GNN (42), LSTM (40,43), NLP (38,45), RL (48), Clustering (23), DT (50), RNN (28), GNN (52), FM (53), ResNet (54), XGBoost (ensemble) [55], CNN, RNN [22], GNN (42), DT (57), NN (56), GRU [18], NLP (24,45), DT (37), DRL [30], DNN (47), LSTM [29], CN...] | Knowledge-graph-based explainable AI- A systematic review |
speech of real-world internet users for unjustified removal.
51 Article 1. 2018. Joint Letter on European Commission regulation on online terrorist content.
www.article19.org/resources/joint-letter-on-european-commission-regulation-on-online-ter-
rorist-content/; Reda (2017).
| Social_Media_and_Democracy |
large webtext corpora: A case study on the colossal clean crawled corpus, 2021.
Du, N., Huang, Y., Dai, A. M., Tong, S., Lepikhin, D., Xu, Y., Krikun, M., Zhou, Y., Yu, A. W., Firat, O., Zoph, B.,
Fedus, L., Bosma, M., Zhou, Z., Wang, T., Wang, Y. E., Webster, K., Pellat, M., Robinson, K., Meier-Hellstern,
K., Duke, T... | PaLM 2 Technical Report |
data. Organize your data based on your research questions and hypothesis.
4. Display your data based on relationships among the collected data and look for supporting evidence.
5. Cross-check your data a few times for reliability and validity.
6. So, what did you find from your experimentation? Report without add... | How to Write Your PhD Proposal- A Step-By-Step Guide |
Identity management. Discrepancy and misalignment between resources of different knowledge graphs are a persistent issue
in current KBX-systems. Managing identities is a prerogative for knowledge-based explainable systems to efficiently use
the available information and avoid undesirable, wide-ranging... | Knowledge graphs as tools for explainable machine learning: A survey |
It is not easy to realize the above ideas, though. We've mentioned the safety issues of accessing physical
tools, and this is also a main challenge for scientific tool learning, since many scientific problems need to
be verified in actual situations, and this process may bring danger if decided by AIs. Meanwhile, fo... | Tool Learning with Foundation Models |
Gemini models are also capable of operating across modalities and a diverse set of global languages
simultaneously, both for image understanding tasks (e.g., images containing text in Icelandic) and for
generation tasks (e.g., generating image descriptions for a wide range of languages). We evaluate the
performance of ... | gemini_1_report |
59.0
59.5
5.4 Results on MTEB benchmark
In Table 3, E5 models not only substantially outperform existing ones of similar size, but also
match the results of much larger models. The top-2 models on the MTEB leaderboard, GTR-xxl and
Sentence-T5-xxl, have 4.8B parameters, while our E5-large model is more than 10× smaller wi... | E5 |
The Efficiency Spectrum of Large Language Models: An Algorithmic Survey
Efficient LLM Algorithmic Survey, Nov, 2023, USA.
REFERENCES
[1] [n. d.]. Introducing ChatGPT. https://openai.com/blog/chatgpt
[2] [n. d.]. Introducing Claude 2.1. https://www.anthropic.com/index/claude-2-1
[3] [n. d.]. Introducing PyTorch ... | TheEfficiencySpectrumofLargeLanguageModels-AnAlgorithmicSurvey |
• The voting ties account for a notable portion of the selection differences between USC and SC,
especially with 8 candidate responses. Specifically, among all responses with the maximum
votes, SC always selects the one with the smallest index, while USC can pick alternative
ones based on the response format.
• The ... | UNIVERSALSELF-CONSISTENCYFORLARGELANGUAGEMODELGENERATION |
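The tie-breaking behavior described above can be made concrete. A minimal self-consistency selector (illustrative, not the paper's code) that, among answers with the maximum vote count, returns the first-occurring one:

```python
from collections import Counter

def sc_select(answers):
    """Majority vote over candidate answers; ties go to the smallest index."""
    counts = Counter(answers)
    best = max(counts.values())
    for i, a in enumerate(answers):
        if counts[a] == best:  # first answer reaching the max vote count
            return i, a

# two-way tie between "42" and "41": SC-style selection picks index 0
idx, ans = sc_select(["42", "41", "42", "41"])
```

USC, by contrast, lets the model itself choose among the candidates, so under a tie it may select a response other than the first-indexed one.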
Among the 328 prompts we evaluated, Claude 2 gave a response judged more harmful than “I can’t help you
with that" in four cases, according to automated evaluation. On manual inspection, in three of the cases its
response did not seem harmful. However, in the other case, the model was disrupted by the jailbreak attempt... | ClaudeModels |
∗Equal contribution.
Universal Self-Consistency for Large Language Model Generation | UNIVERSALSELF-CONSISTENCYFORLARGELANGUAGEMODELGENERATION |
Modeling in Meeting Recognition.. In Interspeech, Vol. 11. 2877–2880.
[90] Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga,
Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al. 2023. Performance of ChatGPT on USMLE: Potential for
AI-assisted medic... | ASurveyonEvaluationofLargeLanguageModels |
Privacy Preserving Technologies. Personalized tool learning requires models to learn user preferences
from private user information, which inevitably raises privacy-preserving concerns. On the one hand, previous
work has shown that training data extraction attacks can be applied to recover sensitive personal privacy
fr... | Tool Learning with Foundation Models |
Appendices
The appendix presents supplementary details that extend
beyond the content of the manuscript, aiming to enhance
comprehension of the M2UGen model. Comprehensive
information is provided concerning the model’s training
dataset and training methodology, encompassing explicit
insights into the utilized training... | M2UGen |
not able to "cheat" the mechanism for their own benefit. Moreover, these mechanisms should perform
their computations reasonably (and provably) fast. How to design the trading mechanism in such a
way that these requirements are satisfied? | informatics-phd-projects-2022-23 |