bmk#1476: aight sounds good
Daj#7482: Got my GPT3 invite too yay
Noa Nabeshima#0290: me too
Noa Nabeshima#0290: We should try some fine-tuning when it becomes available
Daj#7482: I think we can ask for access to the finetuning API
Noa Nabeshima#0290: But what are we going to finetune it on that it hasn't meta-learned?
... |
Noa Nabeshima#0290: yeah
Daj#7482: Pretty sure the Google/Siri people aren't gonna share their data hah
Sid#2121: would it need finetuning to be a personal assistant tho ?
Noa Nabeshima#0290: @Sid I think so if you want it to interface with say google calendar and gmail.
I have faith it can be done with few examples
No... |
Daj#7482: haha nah actually surprisingly my schedule is good lately
Noa Nabeshima#0290: Ooh I wonder if you could get it to consistently tell good children's stories w/o profanity or messing up
Noa Nabeshima#0290: 'Pirate stories', 'Fairytales', 'Sci-Fi'
Daj#7482: Oooh interesting idea
Sid#2121: lmao, child management id... |
Daj#7482: Realistically it's really astounding how far a bunch of totally unaffiliated and unorganized dudes on a discord got already just because Google had spare compute laying around
Sid#2121: to be honest, I am still so confused about why google is giving out so much compute. Do ppl really hate TPUs that much
Sid#21... |
Daj#7482: Hey @JonathanFly ! Welcome to MIT License OpenAI! Check the channel topic for info on what we're doing and what you can do to help, if you want.
Daj#7482: Also, I think I follow you on Twitter hah
Sid#2121: 🙂
Sid#2121: we're getting a proper webring going here, our new tfmesh nation is gathering steam
Jonath... |
Daj#7482: I was one of the very first people in TFRC, have met them personally, etc, so I think they give me a bit of special treatment. I've had a preemptible v3-2048 and a handful of v3-8s basically whenever I ask for it
asparagui#6391: ahh kk
Sid#2121: do you think with a poc we could get a non-preemptible 1024 / 20... |
Sid#2121: > eg tpu1 --> working --> prempted --> start tpu2 --> checkpoint?
@asparagui this would be a nice thing to get sorted
Daj#7482: Pretty sure that's what `pu babysit` is for
Daj#7482: (from tpunicorn)
Sid#2121: oh really?
Sid#2121: i was meaning to ask how to use that
Daj#7482: It basically starts the worker pr... |
bmk#1476: or are we the first to bring the glory of the webring to discord
Sid#2121: 🤷
Sid#2121: what other AI discords are there?
bmk#1476: not sure
bmk#1476: 2min papers?
bmk#1476: i'm in the discord but i dont really visit
Sid#2121: i thought i remember someone saying there was an AI dungeon discord?
Sid#2121: or w... |
Sid#2121: got any links to papers?
bmk#1476: is this a continuation of a convo elsewhere?
Sid#2121: yeah sorry
Sid#2121: @Skylion thinks we won't be able to replicate GPT-3
bmk#1476: why not?
bmk#1476: I'm curious
Sid#2121: hasn't got to that part yet
bmk#1476: we have about 70% of the tech that OA used
Skylion#0368: R... |
bmk#1476: we dont have to use their exact code tied into lingvo though
bmk#1476: also if we dont have that many sections we can maybe even do without gpipe
Skylion#0368: https://github.com/tensorflow/lingvo/blob/master/lingvo/core/gpipe.py
Skylion#036... |
bmk#1476: for some reason i have to make the batch size *really small* to fit it on the 512 ;-;
bmk#1476: even with data parallel
Daj#7482: For the record I consider our chances of fully replicating GPT3 to not be top quartile either but it doesn't matter it's fun and educative and we'll make something cool
Sid#2121: w... |
"eval_steps": 0,
"max_steps": 500000,
"data_path": "gs://neo-datasets/bundestag",
"res_dropout": 0.1,
"predict_batch_size": 1,
"eval_batch_size": 32,
"iterations": 500,
"n_embd": 2048,
"datasets": [["bundestag_*.tfrecords", "", 10, "random_sample", 1.0]],
"data_path_": "gs://neo-datasets/openwebtext-fixed/",
"datasets_... |
"local": true,
"mesh_shape": "x:16,y:32",
"layout": "embd:y, heads:y, batch:x"
}
```
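For readers skimming the config above: `mesh_shape` `"x:16,y:32"` declares a logical 16×32 processor mesh (16·32 = 512 TPU cores), and `layout` maps tensor dims (`embd`, `heads`, `batch`) onto those mesh axes. A tiny parser sketch (hypothetical helper, not from the actual repo):

```python
def mesh_size(mesh_shape):
    # "x:16,y:32" -> ({'x': 16, 'y': 32}, 512); total cores is the product
    dims = {name: int(size) for name, size in
            (part.split(":") for part in mesh_shape.split(","))}
    total = 1
    for n in dims.values():
        total *= n
    return dims, total

dims, cores = mesh_size("x:16,y:32")   # -> ({'x': 16, 'y': 32}, 512)
```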
Sid#2121: that breaks? or that's the best we have running
bmk#1476: data parallel 16
bmk#1476: er
bmk#1476: i'm about to find out
Sid#2121: cool
Daj#7482: This is unsurprising, it's bigger than 1.5B right?
Sid#2121: n_ctx is only 128
b... |
Daj#7482: And I'm sure the model parallelism is adding some overhead somewhere
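For scale: with `n_embd: 2048` from the config above and a hypothetical `n_layer = 24`, the standard 12·L·d² decoder-block approximation already lands around 1.3B parameters, so it is plausibly in GPT2-1.5B territory or beyond once layers are added (a sketch, not the repo's actual parameter-counting code):

```python
def gpt_params(n_layer, n_embd, n_vocab=50257, n_ctx=128):
    # Rough estimate: ~4*d^2 for attention + ~8*d^2 for the MLP per layer,
    # plus token and position embeddings; biases/layer norms are negligible.
    per_layer = 12 * n_embd ** 2
    embeddings = (n_vocab + n_ctx) * n_embd
    return n_layer * per_layer + embeddings

gpt_params(n_layer=24, n_embd=2048)   # ~1.3B params with these assumed numbers
```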
bmk#1476: @Daj how many adafactor batch size can you fit on 512?
bmk#1476: pre-meshtf
Skylion#0368: Don't use Adam
Sid#2121: yeah we should check the reshapes
Daj#7482: 512, one per core
Skylion#0368: Use the other optimizer
bmk#1476: 512!??... |
Skylion#0368: It was my understanding they used Adam until they got to Big and Very Big models
Daj#7482: Don't get me wrong it's good news if true lol
Skylion#0368: but I could be misremembering.
Skylion#0368: Like they used Adam for medium and small
Daj#7482: Is this GPT2 or 3?
bmk#1476: > To train all versions of GPT-... |
Sid#2121: what
bmk#1476: this does not look right at all why did the original code look like this
Sid#2121: the decay rate setting??
bmk#1476: yeah
bmk#1476: i am very confused
Sid#2121: I'm pretty sure that was in Daj's code
Daj#7482: Because I was experimenting I think?
Daj#7482: I don't remember
Daj#7482: Lol
bmk#147... |
print(f"N TRAINABLE VARS: {total_parameters:,}")
^
SyntaxError: invalid syntax
```
bmk#1476: @Sid
Sid#2121: Huh O.o one sec not at a computer
bmk#1476: i think this python just doesnt support fstrings
Sid#2121: I thought I pushed that earlier and it worked fine
bmk#1476: yeah its 3.5
Sid#2121: Ah ok I'll take it out
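For reference, the failing line only needs Python ≥ 3.6 (f-strings); a `str.format()` rewrite keeps the thousands separator and runs on 3.5 too (the parameter count below is an illustrative value, not the real one):

```python
# Broke on Python 3.5: print(f"N TRAINABLE VARS: {total_parameters:,}")
# 3.5-compatible equivalent with the same thousands separator:
total_parameters = 1557611200  # illustrative value
print("N TRAINABLE VARS: {:,}".format(total_parameters))
```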
bm... |
bmk#1476: we need to optimise
Sid#2121: what's the layout?
Sid#2121: ^yeah
bmk#1476: heads:x,embd:x,batch:y
bmk#1476: x:32,y:16
Sid#2121: wait so our batch size is 16??
Sid#2121: am i misunderstanding
bmk#1476: yes, 32x smaller than the 512 we should be getting
Sid#2121: i thought you had bigger batches earlier
Sid#212... |
bmk#1476: cant get more than a few dozen batch size ;-;
bmk#1476: what the hell
bmk#1476: i dont think daj used bf16 for the original gpt2 either
bmk#1476: we really need to do some serious optimization
bmk#1476: like, we cant even do the 512 batch size
bmk#1476: And this isn't even full attention
bmk#1476: This is loc... |
bmk#1476: who here has the charisma necessary
bmk#1476: "Noam Shazeer" sounds like the main person we need to contact
bmk#1476: First author on the paper and active committer on the repo
Sid#2121: ye
Sid#2121: i found a few of the people on twitter the other day hah
Sid#2121: I can write a message tomorrow
bmk#1476: Al... |
Sid#2121: Oh wow, Noam Shazeer co-authored Attention is all you need
Sid#2121: didn't realize
Sid#2121: seems like a big deal
zitterbewegung#4846: does anyone feel like more params = better
zitterbewegung#4846: do you think there will be diminishing returns?
bmk#1476: the modus operandi of libreai *is* moar params = mo... |
psgeorge#6388: > Hmmm idk. We're open to any suggestions but we need a place to store / process all the data before we put it onto the bucket, so we have a hetzner. Price calculations / storage etc are things I haven't really been dealing w though.
@Sid Storing & Processing on a hetzner because data processing on gclou... |
aster#3007: Joined the server.
Deleted User#0000: Joined the server.
Sid#2121: 👋👋👋
kindiana#1016: Joined the server.
Sid#2121: Hey @kindiana
Sid#2121: Welcome to The AGI Wranglers! Check the channel description and resources channels for info and don't hesitate to ask if you have questions
Sid#2121: your tokeniza... |
Daj#7482: Sure!
Sid#2121: I posted @kindiana's gist up in #links
Daj#7482: Give me like 5 minutes
Sid#2121: sorry to distract!
Daj#7482: Eh it's fine, I'm probably as prepared as I'm gonna be anyways
Sid#2121: when's the exam
Daj#7482: Tomorrow
Daj#7482: It's an easy test, I just didn't do shit for it hah
bmk#1476: Did... |
bmk#1476: Also try begging them for Gshard
Sid#2121: i keep getting gshard and gpipe confused, hah
Sid#2121: i'd also be interested *which* dimensions we should and can split exactly, and what's the best practice for our wpe / wte stuff
Sid#2121: but that's i guess not a question for them
Sid#2121: I found a good talk ... |
Sid#2121: oh man we should update the kanban :/ lots of new tasks popping up
Sid#2121: did you see the thing i posted in #links / can you see the new hidden channel?
bmk#1476: Please do
bmk#1476: And yes
Sid#2121: so ```- test MOE model, devise architecture. - reach out to TFM authors - reach out to Archivist``` to ad... |
bmk#1476: This reshape was in original GPT2 as well tho
Sid#2121: yeah but reshapes are different in TFM
bmk#1476: So why is it using so much memory for us
bmk#1476: Oh
Sid#2121: there's a whole section in the paper on it i thought we were clear on this
bmk#1476: Ok so reshape bad
Sid#2121: i mean, not *always*
Sid#212... |
bmk#1476: this should still use strictly less memory
Sid#2121: re memory use
bmk#1476: second, you *cannot* have two dims of same name in a tensor
bmk#1476: so if you want an embd -> embd, you have to *rename* one of the input or output
Sid#2121: no but i'm saying
Sid#2121: we have our input
bmk#1476: you cant have an ... |
bmk#1476: i think that still doesnt explain it
bmk#1476: that only uses more inter-tpu bandwidth
bmk#1476: so it's slower, yes
Sid#2121: no, that's not right
bmk#1476: but should never use more memory
Sid#2121: because combined_batch_sequence then isn't being split
bmk#1476: so what?
Sid#2121: and will be stored on eve... |
Sid#2121: Idk if having a stray bfloat would affect the memory, or if it was just that op that happened to be bfloat
bmk#1476: bfloat thing?
bmk#1476: oh
bmk#1476: what would it take to convert everything to bfloat?
Sid#2121: there's a commit
Sid#2121: i mean, i think theoretically not a lot
Sid#2121: just anywhere we ... |
bmk#1476: https://cdn.discordapp.com/attachments/729741769738158194/732974205687300126/unknown.png
bmk#1476: these too?
bmk#1476: why did you change
Daj#7482: lol the tensor is 32 but initializer is 16?
Daj#7482: Seems like a bug/typo
Sid#2121: huh? i don't think i changed that
bmk#1476: o.O
Sid#2121: from what to wha... |
Sid#2121: i mean
Sid#2121: let's do priorities ok
bmk#1476: i'll deal with bf16ification
Sid#2121: i'm so bad at self organisation
Sid#2121: and i still need to read the moe paper
Sid#2121: I'll do the kanban as first priority
Sid#2121: then i shall use the kanban to devise my next priority, hah
Daj#7482: Have we teste... |
bmk#1476: which op is einsum_21/einsum/XlaEinsum ??? o.O
Daj#7482: v3s have 16GB
Daj#7482: just to explain why it OOMed on 13GB of memory use
Sid#2121: i have no idea which op that is, but i can fix the reshape and see if the error's the same
Sid#2121: einsum does the same thing? O.o
Sid#2121: praise the lord for einsu... |
  comparator = less if exclusive else less_equal
  m = cast(
      comparator(mtf_range(x.mesh, dim, dtype=tf.float32),
                 mtf_range(x.mesh, new_dim, dtype=tf.float32)), x.dtype)
  ret = einsum([x, m], output_shape=new_shape)``` like this (mtf.cumsum)
bmk#1476: I think I know why they did the even odd layers now
Sid#2121: altho i don... |
new_name = "tmp_seq"
new_dim = Dimension(new_name, sequence_dim.size)
new_shape = h.shape.rename_dimension(sequence_dim.name, new_name)
new_name = "tmp_vocab"
new_dim = Dimension(new_name, vocab_dim.size)
new_shape = h.shape.rename_dimension(vocab_dim.name, new_name)
logits = mtf.einsum([h, wte], output_shape=[batch_... |
Sid#2121: doesn't work
bmk#1476: which by definition we cant really ask about
bmk#1476: the what happened
Daj#7482: If we ask someone publishing papers at Google to look at our code we won't get a response
Sid#2121: same error
bmk#1476: hmm
Sid#2121: i guess rename_dimension doesn't rename inplace?
bmk#1476: what do we... |
old_name: a string
new_name: a string
Returns:
a Tensor
"""
return reshape(x, x.shape.rename_dimension(old_name, new_name))``` am i going mad, isn't this recursive
bmk#1476: the things that are causing us problems wont even fit in those 2 paragraphs
Daj#7482: Before we can formulate our questions it's probably not wort... |
Daj#7482: Cool so you might be best to contact him
Sid#2121: AGH I HATE THIS LIBRARY
bmk#1476: oh no i hate *interacting with people*
Sid#2121: mtf.rename_dimension() does a different thing to tensor.shape.rename_dimension()
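The distinction Sid hit can be sketched with pure-Python stand-ins (toy classes, not the real mesh-tensorflow API): `Shape.rename_dimension` is a pure function returning a new `Shape` and leaving the tensor alone, while the module-level `rename_dimension` actually produces a new tensor (in real mtf, via a `reshape`, which can move data between cores):

```python
from collections import namedtuple

Dimension = namedtuple("Dimension", ["name", "size"])

class Shape:
    def __init__(self, dims):
        self.dims = list(dims)

    def rename_dimension(self, old_name, new_name):
        # Pure function: returns a NEW Shape; the original is untouched.
        return Shape([Dimension(new_name, d.size) if d.name == old_name else d
                      for d in self.dims])

class Tensor:
    def __init__(self, shape):
        self.shape = shape

def rename_dimension(x, old_name, new_name):
    # Module-level op: wraps the renamed shape in a new tensor. In real
    # mesh-tensorflow this is reshape(x, ...), which may shuffle data.
    return Tensor(x.shape.rename_dimension(old_name, new_name))

t = Tensor(Shape([Dimension("batch", 8), Dimension("embd", 512)]))
t.shape.rename_dimension("embd", "tmp_embd")  # result discarded; t unchanged
t2 = rename_dimension(t, "embd", "tmp_embd")  # t2 carries the renamed dim
```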
bmk#1476: @Sid
bmk#1476: here's how i do it
Daj#7482: lol if you really don't want to contact h... |
Daj#7482: Oh no we will
bmk#1476: im all for writing one
Daj#7482: But we will give everyone a polite headsup
Daj#7482: imo
Daj#7482: Otherwise feels kinda rude
bmk#1476: yeah that's fair
Daj#7482: And we will be polite in the blogpost ofc
Daj#7482: Just frustrated lol
Sid#2121: @bmk i think you can just do this tho ``... |
Sid#2121: I mean that's github issue stuff right?
bmk#1476: when you rename it might do inter device comms
Sid#2121: it does
bmk#1476: and yes you do
Sid#2121: because internally, it's a reshape
bmk#1476: but it's a bit more complicated than that
bmk#1476: conv1d takes dim_in -> dim_out
bmk#1476: they *cannot be equal*... |
Sid#2121: ok i kinda get it
Sid#2121: AH
Sid#2121: yes
Sid#2121: the click happened
Sid#2121: alright woah ok
Sid#2121: so it wasn't just poc
Sid#2121: that's clever
Sid#2121: let's odd even
bmk#1476: *click*
Sid#2121: so if i'm understanding correctly: dim_in and dim_out need to differ
bmk#1476: yeah
Sid#2121: *but* y... |
bmk#1476: this is just in their toy model
Sid#2121: ok this is gonna help a lot
Sid#2121: ah we have so many optimizations to do
bmk#1476: so dims we can eliminate: `tmp_channels`
Sid#2121: i love it when something clicks. this good
bmk#1476: actually that's the main one we can eliminate i think
bmk#1476: there's one t... |
Sid#2121: you are splitting across all cores
bmk#1476: you want to split embd across both the a and the b
bmk#1476: you cant
Sid#2121: how would that even work
Sid#2121: like, imagine the tensor as a square like in the diagrams
bmk#1476: just send one chunk to each processor along both of those dimensions
bmk#1476: yea... |
Sid#2121: ``` # equivalent to tf.matmul
new_name = "tmp_batch"
old_name = h.shape[0].name
h = mtf.rename_dimension(h, old_name, new_name)
new_name = "tmp_seq"
old_name = h.shape[1].name
h = mtf.rename_dimension(h, old_name, new_name)
new_name = "tmp_vocab"
old_name = h.shape[2].name
h = mtf.rename_dimension(h, old... |
Sid#2121: because you need to rename for einsum no ?? i'm confused
Sid#2121: did you run the code you pushed?
bmk#1476: because sometimes you *dont have* the "other" dimension
bmk#1476: yes my code works
Sid#2121: hm
bmk#1476: it breaks because of the rename to tmp_
Sid#2121: ah
bmk#1476: why are you renaming? o.O
Sid... |
bmk#1476: conv1d needs different names
bmk#1476: (only for the output dim)
Sid#2121: https://tenor.com/view/screaming-internally-dead-inside-screaming-snapped-gif-8097478
bmk#1476: err @Daj what does this mean https://cdn.discordapp.com/attachments/729741769738158194/732992864321142934/unknown.png
Daj#7482: Uhh probabl... |
Sid#2121: ah i'd have to log into my github on the server, wouldn't work
Sid#2121: @bmk i can't for the life of me understand why my einsum op isn't working. (I'm testing on gpt_moe.py). can you tell me what i'm doing wrong
Sid#2121: code: ``` print('INTO EINSUM SHAPE:')
print(h)
print(wte)
logits = mtf.einsum([h, w... |
Sid#2121: different shapes
bmk#1476: right now youre giving it two dimensions with different names how is einsum supposed to know theyre actually the same
bmk#1476: why is that dim called moe_out anyways
bmk#1476: why not just make it the same as output_dim
Sid#2121: it *is* output_dim
bmk#1476: ?
Sid#2121: like, moe_o... |
Sid#2121: the same shape?
bmk#1476: the same dim
bmk#1476: name
Sid#2121: O.o
bmk#1476: this naming is garbage
Sid#2121: ok you've explained enough lol, thanks
Sid#2121: but i still don't get it
Sid#2121: i'll keep reading
bmk#1476: please change every name to the same as the dim in it
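bmk's point about names can be illustrated with a NumPy analogy (the sizes and the batch/seq/embd/vocab labels here are made up): mesh-tensorflow's `einsum` contracts exactly those dimensions whose *names* appear in both inputs but not in `output_shape`, so a dim renamed to `moe_out` no longer matches `embd` and nothing gets contracted:

```python
import numpy as np

# The letters below play the role of mtf dim names
# (b=batch, s=seq, e=embd, v=vocab -- illustrative sizes):
h = np.ones((4, 16, 32))    # [batch, seq, embd]
wte = np.ones((50, 32))     # [vocab, embd]

# "embd" (e) appears in both inputs but not the output, so it is summed out:
logits = np.einsum("bse,ve->bsv", h, wte)   # [batch, seq, vocab]

# If h's last dim were named "moe_out" instead of "embd", mtf would treat it
# as unrelated to wte's "embd" and refuse to contract -- hence the advice to
# reuse output_dim's name rather than inventing a new one.
```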
Sid#2121: i don't wanna make you... |
Daj#7482: It shouldn't be?
bmk#1476: o.O
Daj#7482: You can try another pu recreate
bmk#1476: ok
Daj#7482: #the-faraday-cage-archive is now our go to spot for putting any fun GPT/GAN/etc stuff
aquajet#7800: Joined the server.
Pasha Khosravi#1571: Joined the server.
Daj#7482: Hello! Welcome to our AGI Breeding Program! C... |
Daj#7482: It would be a dream to work with Hugging Face
arfa#0882: I'm jealous
Daj#7482: Join us!
arfa#0882: :feck2:
TD-3#9327: Joined the server.
Daj#7482: Don't be too jealous, if you knew the horrors we've been through with TFM haha
Daj#7482: Hello @TD-3 ! Welcome to the LM Grazing Fields! Check the channel topic fo... |
Isaac McHorse#2007: ? JUST DO IT !
shawwn#3694: I think it should be at the top, because it's the first thing people see when they join the server. why does every decision have to be analyzed forever?
Daj#7482: Data processing scripts sans tfrecords encoding are in various other repos
zphang#7252: oh they're not centra... |
Sandeep#0543: Joined the server.
Daj#7482: >>>>>famous
Daj#7482: As if
arfa#0882: Well FWIW, if 1997 Webring is at the bottom and I *haven't* muted it, whenever a new server gets added there I'll be sure to check it out because it'll be my only ping
shawwn#3694: Yeah, I really think the webring should be its own catego... |
Daj#7482: Though test time now, talk to you guys later!
Sid#2121: ah! good luck!
ucalyptus#6163: Joined the server.
Sid#2121: Hey (again ?) @ucalyptus , welcome to H.A.L aka Help Access Language-Models aka LibreAI. Check the channel description for info, and please shoot any questions you have our way.
Polytropian#8925... |
mysterefrank#2954: 🙏
shawwn#3694: I knew it. it's called pray, not high-five. Everyone always says that's a high-five
cc_#1010: Joined the server.
mysterefrank#2954: oh damn I never saw the high five
cc_#1010: oi
Sid#2121: oi oi
shawwn#3694: o/ ceeps
cc_#1010: i wave hellow
Sid#2121: welcome @cc_ pls halp. pls gib cpu... |
Sid#2121: awesome! welcome
cc_#1010: i did like... 6 ish days ago?
cc_#1010: dunno how long the waiting list is
Sid#2121: I think they're just being selective
Sid#2121: also they're gonna start making people pay soon-ish
cc_#1010: hmm
cc_#1010: thats lame
cc_#1010: also it was ten days ago, time is an illusion
cc_#1010... |
cc_#1010: uhhh i have
wintermute#5623: Joined the server.
cc_#1010: two linodes lmao
cc_#1010: and
cc_#1010: a 2015 macbook pro
cc_#1010: and
cc_#1010: intel core i7 9750H with 2.6 ghz
Sid#2121: Hello @wintermute ! Welcome to the tensorflow mesh wastelands! Read the Channel description for more information on the proje... |
cc_#1010: my parents are also rich and i can siphon money off them lmao
cc_#1010: im not sure if i can single handedly manage 6 terabytes of money but i can probably put a dent in it
Sid#2121: > my parents are also rich and i can siphon money off them lmao
@cc_ lmao
Sid#2121: i hope you're not joking and i will take advan... |
Sid#2121: yeah
cc_#1010: advantage #3 of me: i have a moderate amount of clout?
cc_#1010: lol
Sid#2121: @ghost2718 welcome we make big GPT pls halp us
cc_#1010: but also its 4 am so im going to bed
Sid#2121: well nice, welcome in, i'm sure we could use your clout at some point for sure
cc_#1010: y'know you could... |
cc_#1010: lets you do searches, scrape metadata, etc.
maxime#4123: Joined the server.
cc_#1010: working on an in-discord reader with tts options for people with needs for that
cc_#1010: bookmark exact spots in the page
cc_#1010: the like
Sid#2121: that sounds cool
cc_#1010: anyway im gonna head to bed now because i cou... |
cc_#1010: Not Yours
cc_#1010: i spent 8 hours today prying around with mongodb so i could keep track of server statuses (so no kids can accidentally search for the porn) and user win/loss records because it has art prompts too
cc_#1010: so that was gratifying
Sid#2121: Can we get drilbot in here lol
cc_#1010: i mean
cc... |
Isaac McHorse#2007: WHY WOULD I PLAY?! YOU ARE THE SOBBING ONE
Sid#2121: Hey there @dmytrodee ! Welcome to git clone openai; git branch LibreAI
Sid#2121: hahahaha ok shawwn
Sid#2121: you are correct
Manju#1531: Joined the server.
Perditus#2503: Joined the server.
Daj#7482: Hello @Manju @Perditus welcome to the Tensorfl... |
Daj#7482: Hello @Narsil ! Welcome to The Merry Band of LM Trainers aka LibreAI! Check out the channel topic for info and don't hesitate to ask questions!
Daj#7482: Man at this point it's just sport to see how many more of these I can come up with
Narsil#9151: @Daj Don't you have a model to generate these? Thanks bt... |
pragmaticml#1730: Joined the server.
justhoughts#6515: Joined the server.
Daj#7482: Hello @lugig @pragmaticml @justhoughts ! Welcome to the Home For Wayward Language Models! Check out the channel topic for info and don't hesitate to ask questions!
BalGadot#9361: Joined the server.
AlexM#2612: Joined the server.
Sid#... |
mathew#7618: Hello everyone glad to be here!
pwang99#3791: I have only two GPUs to spare
Daj#7482: Lurkers are always welcome! of course people wanting to help is even better
Daj#7482: Our bottleneck is currently CPU to preprocess data funnily enough
Sid#2121: > I have only two GPUs to spare
@pwang99 we're using TPUs f... |
Daj#7482: GPT3 is _big_
Daj#7482: (and we want bigger!)
lillux#2099: Joined the server.
Sri Harsha M End-October 2021#1627: Joined the server.
adam90#4807: Joined the server.
binal#2982: Joined the server.
Sid#2121: Hey @lillux , @Sri Harsha M End-October 2021 , @adam90 , @binal ! Welcome to the AI Fallout Shelter ☢️
S... |
Sid#2121: Hey @murbard , welcome to OpenAI's even more open younger brother, LibreAI ™️ ! check the google doc in the project description for more info
murbard#5141: > Hey @murbard , welcome to OpenAI's even more open younger brother, LibreAI ™️ ! check the google doc in the project description for more info
@Sid ty
S... |
Daj#7482: Hi @wobbithobbit @MarcAK ! Welcome to the OpenAI of OpenAI! Please see the channel topic for info and don't hesitate to ask questions!
Ivanc2#9346: Joined the server.
bmk#1476: Wow so many new folks!
Daj#7482: Hello @Ivanc2 ! Welcome to the Library of Babylon (Compressed into latent space for TPU compatibili... |
Daj#7482: We're making real progress for sure
Daj#7482: But it's a big project
Daj#7482: I'm surprised we got this far tbh
bmk#1476: does anyone of the recently joined have access to a lot of cpu power
shawwn#3694: we have 64 cores, though I'm not sure that's "lots"
Daj#7482: Better than our 8 lol
bmk#1476: Er, about a... |
Daj#7482: Would be super cool Vanya! Happy to spin out some research papers too
Daj#7482: We have plenty of experiments worth writing up
bmk#1476: oh yes, research papers would be awesome
shawwn#3694: by the way, I've been thinking of making my own attempt at GPT-3. Feel free to revoke my access to avoid conflict of in... |
shawwn#3694: hm, why's it closed then? just curious
bmk#1476: I believe the original intent was to not put out unfinished code
Daj#7482: Yup
shawwn#3694: ah
Daj#7482: I literally never deny a request to see it
Daj#7482: We'll release it the second we're comfortable saying it works
Daj#7482: Which might be soon
spiantin... |
Daj#7482: Since TPUs do the GPU work for us
Daj#7482: Also a formal hello to @thesofakillers @jackclark @Teven ! Welcome to the AGI Proving Grounds! Please see the channel topic for info and don't hesitate to ask questions!
Teven#6831: Wait you're not a bot you actually come up with the memey introductions? pretty impr... |
Daj#7482: Also @jackclark and anyone else here for research/ethics/policy reasons, I (and almost surely the others) would be happy to answer any questions, discuss issues or hop on a call or whatever any time. We wanna be open after all :)
bmk#1476: I'd sure love to discuss anytime
sunrisetofu#6997: Joined the server.... |
Daj#7482: Hello @zap @Deleted User ! Welcome to the Shitposting AIs Containment Zone! Please see the channel topic for info and don't hesitate to ask questions!
shawwn#3694: (Is it really a shitposting AI containment zone when there isn't a #memes channel?)
shawwn#3694: (or perhaps it's the anti-shitposting channel, wh... |
Daj#7482: Hey @turmeric13 ! Welcome to the Wild World of Word Models! Please see the channel topic for info and don't hesitate to ask questions!
goolulusaurs#1571: One thing I've been thinking about is that with models like iGPT, people are finding ways to turn other types of data into a sequence for prediction. Howeve... |
gsastry#9119: Joined the server.
baragonaru#7305: Joined the server.
Daj#7482: Hey @gsastry @baragonaru ! Welcome to The International LM Watch Organization! Please see the channel topic for info and don't hesitate to ask questions!
jeffrafter#8838: Joined the server.
fnord#5810: Joined the server.
Sid#2121: Hello @jef... |
yurak#0640: Joined the server.
Daj#7482: Hi @yurak ! Welcome to Bargain Bin OpenAI! Please see the channel topic for info and don't hesitate to ask questions!
tushar#8521: Joined the server.
Daj#7482: Hi @tushar ! Welcome to the Mountain of ~~Madness~~ Tensorflow Documentation! Please see the channel topic for info and... |
```
Daj#7482: I just got my encoding script fixed hah
Daj#7482: `ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCbYsj64DQ3sII+I65MjTalQ9cqPp1avh1n4IMfvV2ZhHCXiBVM+bOj1KtjC5+fxPbwJcksSlszhLtt0le3mGFhBYkBlaYhQfQO0xqRU46lfLWSkzrdSoya8OrMnhZZBNXdYsFn28fYpyMJTw17TnJojQ5+D+rIJlzbPE3I25qep4VkqBq6hKvayDjsEWjpTCSJczy5kCxpTshicTyHJnD9Gsc... |
Ruby#6688: I can contribute around 200$ worth of cpu if that helps.
Ruby#6688: on AWS.
Sid#2121: oh awesome
Sid#2121: it may well do. @bmk would be the person to ask about that. I'm not 100% how we're going to distribute processing yet since we've had several people reach out
Sid#2121: can i put your name on the doc?
R... |
Daj#7482: You didn't do a silly custom message though :D
shawwn#3694: True
Sid#2121: aw i had a good one 😦
Ronaldo#4812: Joined the server.
Daj#7482: @Sid your chance!!!
Sid#2121: Hey @Ronaldo ! Welcome to the AGI Wizards' Meeting Hall! Let us know if you have any questions, or can spare any galleons
shawwn#3694: @Ron... |
Daj#7482: Our current bottlenecks are mostly CPUs for data processing and people willing to brave Tensorflow Mesh/TPU coding
Daj#7482: Applies to everyone ofc heh
Commutative Conjecture#6969: Where can I find more details?
Are there people with new projects?
CPU as in money?
What's required wrt coding?
Commutative Conj... |
@zphang I see several zphangs?
Commutative Conjecture#6969: Also, sorry, I didn't notice that the channel topic was collapsed and I missed most of it
Daj#7482: All good we're delighted to help any willing contributors onboard :)
georgejrjrjr#0817: Joined the server.
shawwn#3694: Just make the repo open
Daj#7482: Yea at... |
Daj#7482: Money can make things complicated too
Daj#7482: We're not sayig no, but it is extra overhead
Sid#2121: @shawwn please don't take that as me turning down funding lol
Sid#2121: wasn't what i meant
Sid#2121: I just don't think anyone is that sure of how much we'll need right now
Commutative Conjecture#6969: @Sid... |
Sid#2121: you get an *even* more customised non formulaic welcome message from me @dvs because ya make great vids
dvs#3865: aw thanks
Daj#7482: link said vids pls
dvs#3865: lets see if you still feel that way when I add ads to the videos
dvs#3865: https://www.youtube.com/channel/UCaZuPdmZ380SFUMKHVsv_AA
Commutative ... |
Daj#7482: Hey @gstqtfr ! Welcome to the World's Most Disorganized AI Lab! Please see the channel topic for info and don't hesitate to ask questions!
bmk#1476: aaaaa so much happening
bmk#1476: I'm writing up the CC data stuff once and for all
Sid#2121: @dvs lmao at the big bernie at the top of your channel
shawwn#3694:... |
bmk#1476: no u gotta help me with CCTC
Sid#2121: I can go wherever
Sid#2121: what do you need
bmk#1476: (after mtf)
Sid#2121: @bmk i thought we were doing odd-even?
bmk#1476: ?
bmk#1476: I'm working on CCTC
Sid#2121: ah ok
bmk#1476: i was saying after mtf is up
Sid#2121: wait so
Sid#2121: you're working on mtf or cctc
... |
Sid#2121: sorry, we were slacking from the important work of greeting people by working on our silly model
shawwn#3694: alright, I'm back on laptop. Let me get your SSH set up...
shawwn#3694: _pulls up pubkeys_
Daj#7482: Hey @step @randomuser ! Welcome to the Back Alley LM Market! Please see the channel topic for in... |
krysis#2720: Joined the server.
bmk#1476: CCTC: see where it says English
bmk#1476: https://github.com/miso-belica/jusText/tree/dev/justext/stoplists
bmk#1476: these are the languages it supports
Sid#2121: nice
bmk#1476: https://github.com/miso-belica/jusText/tree/dev/justext/stoplists
bmk#1476: this is a language dete... |
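The stoplist idea can be sketched in a few lines (toy stopword sets, far smaller than jusText's real per-language lists; jusText itself is a boilerplate-removal tool whose stoplists can double as a crude detector): score each language by how many of its stopwords occur in the text, then pick the best.

```python
# Toy stoplist-based language scorer (illustrative stopword sets only):
STOPLISTS = {
    "English": {"the", "and", "of", "to", "is", "in"},
    "German": {"der", "die", "und", "ist", "nicht", "das"},
}

def detect_language(text):
    words = set(text.lower().split())
    # Count stopword hits per language; ties go to the first language.
    scores = {lang: len(words & stops) for lang, stops in STOPLISTS.items()}
    return max(scores, key=scores.get)
```

For example, `detect_language("the cat is in the house")` picks English, while a German sentence full of "das"/"ist"/"nicht" picks German.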
Daj#7482: Hey @aegis ! Welcome to 2 Devs, One Tensorflow! Please see the channel topic for info and don't hesitate to ask questions!
Sid#2121: lmao
Daj#7482: @aegis Through the TFRC we've got access to a ton of TPUs
Daj#7482: It's still a _huge_ beast of a model to train but it's not _completely_ infeasible
aegis#2320:... |
GptForMe#9886: Joined the server.
GptForMe#9886: @Daj Cores? As in CPU OR gpu?
Daj#7482: CPU atm
Daj#7482: We don't use GPUs
Daj#7482: We need CPU to process the dataset, we train on TPUs
aegis#2320: do you have an idea of what hardware you'll need for inference yet?
GptForMe#9886: @Daj Your own or in the cloud? Are ... |
aegis#2320: do you have a lot of internet bandwidth gptforme?
Daj#7482: Hey @eigenjoy ! Welcome to the Freerange Tensor Farm! Please see the channel topic for info and don't hesitate to ask questions!
Daj#7482: > @Daj Nice! Well done. Ok, still can't find a use for my 64-core (CPU) box. No TPU's.
@GptForMe We can... |
bmk#1476: what we're doing is collecting way more than we need for future use basically
shawwn#3694: I would recommend doing the math on how much of this data you're going to be able to train on
shawwn#3694: yes
shawwn#3694: by "way more" you mean "far, far more than the model could feasibly be trained on"
bmk#1476: we... |
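A hedged back-of-envelope for shawwn's point, using figures recalled from the GPT-3 paper (~300B training tokens drawn from ~570 GB of filtered text, itself distilled from ~45 TB of raw Common Crawl; treat all three as approximations):

```python
# Rough numbers recalled from the GPT-3 paper; approximations only.
tokens_trained = 300e9        # tokens consumed by one training run
bytes_filtered = 570e9        # filtered/deduplicated text actually used
raw_bytes = 45e12             # raw Common Crawl before cleaning

bytes_per_token = bytes_filtered / tokens_trained   # ~1.9 bytes/token
usable_fraction = bytes_filtered / raw_bytes        # ~1.3% survives filtering
# So a multi-TB raw crawl really is "far, far more than the model could
# feasibly be trained on": one run touches only a small filtered slice.
```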
aegis#2320: I use it for my openwebtext work
Daj#7482: We're just really at/beyond the limit of our available dev time lol
aegis#2320: it's way faster than cpython at the basic stuff I've been doing
Daj#7482: We're pouring every free minute we have into this and need more people!
aegis#2320: you can literally just run ... |
tapanc#8821: Joined the server.
Daj#7482: Hey @tapanc ! Welcome to A Mutually Abusive Relationship Between A Few Devs And Their TPUs! Please see the channel topic for info and don't hesitate to ask questions!
shawwn#3694: cool, so the server has more people in two days than I had in two months
Daj#7482: Today was a pre... |
Daj#7482: We don't really have any funding or figured out how we wanna handle money, but definitely interesting @aegis
Daj#7482: We'll see how everything develops
shawwn#3694: the dataset might become more valuable than the project, depending on how the training goes
Daj#7482: Yea I think that's a likely outcome
shawwn... |
shawwn#3694: fwiw, torrents almost always die, except for super popular datasets like imagenet
Zach Dwiel#0475: you might also check out dat
shawwn#3694: did dat ever go anywhere?
shawwn#3694: I briefly heard about it like, two years ago
shawwn#3694: is it really suitable for storing TB's of data?
Zach Dwiel#0475: They... |
shawwn#3694: yes. definite F
Daj#7482: To be fair, I and l is the worst
shawwn#3694: bikeshedding
Isaac McHorse#2007: WHAT ARE YOU DOING BIKESHEDDING? BACK TO WORK!
shawwn#3694: man I love that bot.
Daj#7482: Haha
Daj#7482: I'm gonna go to bed soon anyways
shawwn#3694: what other features would you add to McHorseFace? ... |
Daj#7482: I may have already done that many lol
Daj#7482: Haha yeah I know, but it's a funny little tradition
Deleted User#0000: Joined the server.
shawwn#3694: @Deleted User Hi, welcome to Ethi
Daj#7482: Hey @Deleted User ! Welcome to the Large Language Model Appreciation Society! Please see the channel topic for info... |
Daj#7482: It definitely was a nuisance today, I'll turn it down for now, thanks for alerting us @Deleted User !
Daj#7482: Yea it seemed to not cause too much trouble, we'll see what a lower setting does _shrug_
shawwn#3694: one thing that keeps me from lurking on this server more is that there's no real place to show o... |
Daj#7482: Haha I turned the wait time off for now shawwn
Daj#7482: I'mma be heading to bed (read: Continue checking Discord until I fall asleep like a degenerate). Crazy day today, thanks for everyone being so awesome and can't wait to see where this project goes next
Sid#2121: man, you have a better sleep schedule ... |